US-20260123918-A1 - QUANTITATIVE ULTRASOUND IMAGING METHOD AND APPARATUS USING LIGHTWEIGHT NEURAL NETWORK
Abstract
An operating method of an imaging apparatus operated by at least one processor includes receiving ultrasound data of a tissue and generating a quantitative image representing a distribution of quantitative variables in the tissue from the ultrasound data using a lightweight neural network trained upon receiving knowledge of a teacher neural network.
Inventors
- Hyeon-Min Bae
- Seokhwan Oh
- Youngmin Kim
- Guil Jung
- Hyeonjik Lee
- Myeong-Gee Kim
Assignees
- BARRELEYE INC
Dates
- Publication Date: 2026-05-07
- Application Date: 2023-10-06
- Priority Date: 2022-10-31
Claims (19)
- 1. An operating method of an imaging apparatus operated by at least one processor, the operating method comprising: receiving ultrasound data of a tissue and generating a quantitative image representing a distribution of quantitative variables in the tissue from the ultrasound data using a lightweight neural network trained upon receiving knowledge of a teacher neural network.
- 2. The operating method of claim 1, wherein the lightweight neural network is configured to extract quantitative features from the ultrasound data using multi-stage separable convolution and reconstruct the quantitative features to output the quantitative image.
- 3. The operating method of claim 1, wherein the lightweight neural network becomes lightweight through neural network parameter quantization and is configured to extract quantitative features from the ultrasound data and reconstruct the quantitative features to output the quantitative image.
- 4. The operating method of claim 1, wherein the lightweight neural network is an artificial intelligence model trained using knowledge for feature map extraction and knowledge for quantitative image restoration, which are received from the teacher neural network.
- 5. The operating method of claim 4, wherein the lightweight neural network is an artificial intelligence model trained by using an objective function including a first loss related to a difference from a correct image, a second loss related to a difference from a feature map extracted from the teacher neural network, and a third loss related to a difference from the quantitative image generated from the teacher neural network.
- 6. The operating method of claim 1, wherein the quantitative variables include at least one of attenuation coefficient (AC), speed of sound (SoS), effective scatterer concentration (ESC), and effective scatterer diameter (ESD).
- 7. The operating method of claim 1, wherein the imaging apparatus is a mobile device.
- 8. An imaging apparatus comprising: a memory; and a processor configured to execute instructions stored in the memory, wherein the processor is configured to generate a quantitative image representing a distribution of quantitative variables in a tissue from ultrasound data of the tissue using a lightweight neural network trained upon receiving knowledge from a teacher neural network.
- 9. The imaging apparatus of claim 8, wherein the lightweight neural network is configured to extract quantitative features from the ultrasound data using multi-stage separable convolution and reconstruct the quantitative features to output the quantitative image.
- 10. The imaging apparatus of claim 8, wherein the lightweight neural network becomes lightweight through neural network parameter quantization and is configured to extract quantitative features from the ultrasound data and reconstruct the quantitative features to output the quantitative image.
- 11. The imaging apparatus of claim 8, wherein the lightweight neural network is an artificial intelligence model trained upon receiving knowledge for feature map extraction and knowledge for quantitative image restoration from the teacher neural network.
- 12. The imaging apparatus of claim 11, wherein the lightweight neural network is an artificial intelligence model trained using an objective function including a first loss related to a difference from a correct image, a second loss related to a difference from a feature map extracted from the teacher neural network, and a third loss related to a difference from the quantitative image generated from the teacher neural network.
- 13. The imaging apparatus of claim 8, wherein the quantitative variables include at least one of attenuation coefficient (AC), speed of sound (SoS), effective scatterer concentration (ESC), and effective scatterer diameter (ESD).
- 14. The imaging apparatus of claim 8, wherein the imaging apparatus is a mobile device.
- 15. A computer program including instructions stored in a computer-readable storage medium and executed by a processor, wherein the computer program includes instructions executing an encoder configured to receive ultrasound data of a tissue and extract a quantitative feature map from the ultrasound data, and a decoder configured to reconstruct a quantitative image representing a distribution of quantitative variables in the tissue from the quantitative feature map, wherein the encoder and the decoder are lightweight neural networks trained using feature map extraction knowledge and image reconstruction knowledge transmitted from a teacher neural network.
- 16. The computer program of claim 15, wherein the encoder is a model configured to extract quantitative features from the ultrasound data using multi-stage separable convolution.
- 17. The computer program of claim 15, wherein the encoder and the decoder become lightweight through neural network parameter quantization and are configured to extract quantitative features from the ultrasound data and reconstruct the quantitative features to output the quantitative image.
- 18. The computer program of claim 15, wherein the encoder and the decoder are artificial intelligence models trained using an objective function including a first loss related to a difference from a correct image, a second loss related to a difference from a feature map extracted from the teacher neural network, and a third loss related to a difference from the quantitative image generated from the teacher neural network.
- 19. The computer program of claim 15, wherein the quantitative variables include at least one of attenuation coefficient (AC), speed of sound (SoS), effective scatterer concentration (ESC), and effective scatterer diameter (ESD).
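Claims 5, 12, and 18 recite a training objective combining three loss terms. A minimal sketch of such an objective, assuming mean-squared-error terms and hypothetical weights `alpha`, `beta`, and `gamma` (the patent does not specify the loss functions or their weighting):

```python
import numpy as np

def mse(a, b):
    # Mean-squared error between two arrays of equal shape.
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def distillation_objective(student_img, student_feat,
                           gt_img, teacher_img, teacher_feat,
                           alpha=1.0, beta=0.5, gamma=0.5):
    """Three-term objective for training the lightweight (student) network.

    alpha, beta, gamma are illustrative weights, not values from the patent.
    """
    l_correct = mse(student_img, gt_img)         # first loss: vs. correct (ground-truth) image
    l_feature = mse(student_feat, teacher_feat)  # second loss: vs. teacher feature map
    l_output = mse(student_img, teacher_img)     # third loss: vs. teacher output image
    return alpha * l_correct + beta * l_feature + gamma * l_output
```

During training, the second term pulls the student's intermediate feature maps toward the teacher's, while the third term matches the student's final quantitative image to the teacher's output.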
Description
TECHNICAL FIELD

The disclosure relates to ultrasound imaging.

BACKGROUND ART

Cancer is challenging to detect in its early stages, necessitating periodic diagnosis and continuous monitoring of lesion size and characteristics. Common imaging modalities for this purpose include X-ray, magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound. While X-ray, MRI, and CT have disadvantages such as radiation exposure risk, long scan times, and high costs, ultrasound imaging is safe, relatively inexpensive, and capable of providing real-time images, allowing users to monitor lesions in real time and obtain desired images.

Currently, the most widely commercialized ultrasound imaging equipment is the brightness-mode (B-mode) imaging system. B-mode imaging identifies the location and size of an object by measuring the time at which, and the strength with which, ultrasound waves reflected from a surface of the object return. Since this method locates a lesion in real time, users may efficiently obtain desired images while monitoring lesions in real time, and the method is safe, relatively inexpensive, and highly accessible. However, image quality is not maintained consistently across users of different skill levels, and quantitative characteristics cannot be imaged. In other words, since the B-mode technique provides only structural information about tissues, its sensitivity and specificity may be low in the differential diagnosis of benign and malignant tumors, which are distinguished by histological characteristics.

Recently, research has been conducted to reconstruct biomechanical characteristics in real time through quantitative ultrasound imaging. However, since a neural network performing quantitative ultrasound imaging requires extensive parallel computation, it is not easy to apply such a network to existing ultrasound imaging devices.
This challenge is particularly pronounced in mobile ultrasound imaging devices, which have limited computational resources, restricting the feasibility of applying quantitative ultrasound imaging.

DISCLOSURE

Technical Problem

The disclosure attempts to provide a quantitative ultrasound imaging method and apparatus using a lightweight neural network. The disclosure also attempts to provide a lightweight neural network through knowledge distillation and/or neural network parameter quantization.

Technical Solution

According to an exemplary embodiment, an operating method of an imaging apparatus operated by at least one processor includes: receiving ultrasound data of a tissue and generating a quantitative image representing a distribution of quantitative variables in the tissue from the ultrasound data using a lightweight neural network trained upon receiving knowledge of a teacher neural network.

The lightweight neural network may be configured to extract quantitative features from the ultrasound data using multi-stage separable convolution and to reconstruct the quantitative features to output the quantitative image. The lightweight neural network may become lightweight through neural network parameter quantization and be configured to extract quantitative features from the ultrasound data and to reconstruct the quantitative features to output the quantitative image. The lightweight neural network may be an artificial intelligence model trained using knowledge for feature map extraction and knowledge for quantitative image restoration, which are received from the teacher neural network.
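The parameter savings behind the multi-stage separable convolution mentioned above can be illustrated by comparing parameter counts: a depthwise separable layer factors a standard convolution into a per-channel (depthwise) step and a 1x1 channel-mixing (pointwise) step. The layer sizes below are illustrative only, not taken from the patent:

```python
def standard_conv_params(c_in, c_out, k):
    # A standard k x k convolution couples every input channel
    # to every output channel.
    return c_in * c_out * k * k

def separable_conv_params(c_in, c_out, k):
    # Depthwise step: one k x k filter per input channel.
    # Pointwise step: a 1 x 1 convolution mixing channels.
    return c_in * k * k + c_in * c_out

if __name__ == "__main__":
    c_in, c_out, k = 64, 128, 3  # hypothetical layer shape
    std = standard_conv_params(c_in, c_out, k)   # 73728 parameters
    sep = separable_conv_params(c_in, c_out, k)  # 8768 parameters
    print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For this hypothetical layer, the separable factorization uses roughly 8x fewer parameters, which is the kind of reduction that makes inference on resource-limited mobile devices more tractable.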
The lightweight neural network may be an artificial intelligence model trained using an objective function including a first loss related to a difference from a correct image, a second loss related to a difference from a feature map extracted from the teacher neural network, and a third loss related to a difference from the quantitative image generated from the teacher neural network.

The quantitative variables may include at least one of attenuation coefficient (AC), speed of sound (SoS), effective scatterer concentration (ESC), and effective scatterer diameter (ESD). The imaging apparatus may be a mobile device.

According to an exemplary embodiment, an imaging apparatus includes: a memory; and a processor configured to execute instructions stored in the memory, wherein the processor is configured to generate a quantitative image representing a distribution of quantitative variables in a tissue from ultrasound data of the tissue using a lightweight neural network trained upon receiving knowledge from a teacher neural network.

The lightweight neural network may be configured to extract quantitative features from the ultrasound data using multi-stage separable convolution and to reconstruct the quantitative features to output the quantitative image. The lightweight neural network may become lightweight through neural network parameter quantization and be configured to extract quantitative features from the ultrasound data and to reconstruct the quantitative features to output the quantitative image.
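One common form of the parameter quantization described above is uniform int8 post-training quantization. The sketch below assumes symmetric per-tensor quantization; the bit width and scheme are chosen for illustration, as the patent does not specify them:

```python
import numpy as np

def quantize_int8(weights):
    # Symmetric uniform quantization: map float weights onto
    # int8 levels in [-127, 127] using a single per-tensor scale.
    scale = float(np.abs(weights).max()) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for (simulated) inference.
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.27, 0.01, 1.27], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # close to w, within one quantization step
```

Storing int8 weights reduces model memory roughly 4x relative to float32 and enables integer arithmetic, both of which matter on the resource-limited mobile hardware targeted by the disclosure.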