CN-122004923-A - Ultrasonic image processing method, ultrasonic device and storage medium
Abstract
The invention provides an ultrasonic image processing method, an ultrasonic device and a storage medium. The processing method comprises: acquiring a target ultrasonic image of a target object; and inputting the target ultrasonic image into a trained target detection model to obtain and display at least one target area in the target ultrasonic image. The trained target detection model is obtained by training based on a plurality of training ultrasonic images, an ultrasonic gain level corresponding to each training ultrasonic image in the plurality of training ultrasonic images, and at least one calibration area corresponding to each training ultrasonic image. The trained target detection model not only attends to information such as tissue structure and morphology in the ultrasonic image, but can also adaptively compensate for the differences in image appearance caused by different ultrasonic gain levels, which helps improve the accuracy and robustness of the ultrasonic image processing method.
Inventors
- Jiang Daimin
- Zhou Guoyi
Assignees
- SonoScape Medical Corp. (深圳开立生物医疗科技股份有限公司)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2026-04-13
Claims (14)
- 1. A method of processing an ultrasonic image, the method comprising: acquiring a target ultrasonic image of a target object; and inputting the target ultrasonic image into a trained target detection model to obtain and display at least one target area in the target ultrasonic image, wherein the trained target detection model is obtained by training based on a plurality of training ultrasonic images, an ultrasonic gain level corresponding to each training ultrasonic image in the plurality of training ultrasonic images, and at least one calibration area corresponding to each training ultrasonic image; and wherein, for each training ultrasonic image, the ultrasonic gain level corresponding to the training ultrasonic image represents the labelled full-image gain degree of the training ultrasonic image, the calibration area corresponding to the training ultrasonic image represents an image area of the training ultrasonic image in which tissue of interest is present, and the target area represents an image area of the target ultrasonic image in which tissue of interest is present.
- 2. The method of claim 1, wherein the trained target detection model comprises a plurality of sequentially connected trained first feature extraction modules, a plurality of trained second feature extraction modules, a trained ultrasonic gain feature fusion module connected to each of the plurality of trained second feature extraction modules, a plurality of trained first feature fusion modules, and a plurality of trained first detection head modules; wherein each trained second feature extraction module is connected to one of the trained first feature extraction modules, and that trained first feature extraction module is different from the trained first feature extraction modules connected to the other trained second feature extraction modules; each trained first feature fusion module is connected both to the trained ultrasonic gain feature fusion module and to one of the trained first feature extraction modules, and that trained first feature extraction module is different from the trained first feature extraction modules connected to the other trained first feature fusion modules; and each trained first detection head module is connected to one of the trained first feature fusion modules, and that trained first feature fusion module is different from the trained first feature fusion modules connected to the other trained first detection head modules.
- 3. The method of claim 2, wherein the method further comprises: for each trained first feature extraction module in the plurality of sequentially connected trained first feature extraction modules, performing, by the trained first feature extraction module, feature extraction processing on the target ultrasonic image or on the first image feature corresponding to the trained first feature extraction module whose output end is connected to the input end of this trained first feature extraction module, so as to obtain a first image feature corresponding to this trained first feature extraction module; for each trained second feature extraction module in the plurality of trained second feature extraction modules, performing, by the trained second feature extraction module, feature extraction processing on the first image feature corresponding to the trained first feature extraction module connected to the trained second feature extraction module, so as to obtain a second image feature corresponding to the trained second feature extraction module; performing, by the trained ultrasonic gain feature fusion module, feature fusion processing based on the second image features corresponding to the plurality of trained second feature extraction modules, so as to obtain an ultrasonic gain feature; for each trained first feature fusion module in the plurality of trained first feature fusion modules, performing, by the trained first feature fusion module, feature fusion processing based on the ultrasonic gain feature and the first image feature corresponding to the trained first feature extraction module connected to the trained first feature fusion module, so as to obtain a first fusion feature corresponding to the trained first feature fusion module; and, for each trained first detection head module in the plurality of trained first detection head modules, determining, by the trained first detection head module, at least one target area in the target ultrasonic image based on the first fusion feature corresponding to the trained first feature fusion module connected to the trained first detection head module.
- 4. The method of claim 3, wherein performing feature fusion processing based on the second image features corresponding to the plurality of trained second feature extraction modules to obtain the ultrasonic gain feature comprises: for each second image feature among the second image features corresponding to the plurality of trained second feature extraction modules, performing squeeze-and-excitation processing on the second image feature to obtain a first squeeze-excitation feature corresponding to the second image feature; performing feature fusion processing on the first squeeze-excitation features corresponding to the second image features to obtain a second fusion feature; and performing squeeze-and-excitation processing on the second fusion feature to obtain the ultrasonic gain feature.
- 5. The method of claim 2, wherein the plurality of trained second feature extraction modules includes a trained echo detail extraction module, and the trained echo detail extraction module is configured to perform at least one feature extraction process, through a transverse convolution kernel and a longitudinal convolution kernel, on the first image feature output by the trained first feature extraction module connected to the trained echo detail extraction module, so as to obtain a second image feature corresponding to the trained echo detail extraction module, wherein the transverse dimension of the transverse convolution kernel is greater than its longitudinal dimension, and the longitudinal dimension of the longitudinal convolution kernel is greater than its transverse dimension.
- 6. The method of claim 2, wherein the plurality of trained second feature extraction modules includes a trained contrast feature extraction module, the trained contrast feature extraction module being configured to perform feature extraction processing, through at least one dilated (atrous) convolution kernel, on the first image feature output by the trained first feature extraction module connected to the trained contrast feature extraction module, so as to obtain a second image feature corresponding to the trained contrast feature extraction module.
- 7. The method of claim 6, wherein the plurality of trained second feature extraction modules includes a trained luminance feature extraction module, the trained luminance feature extraction module being configured to perform feature extraction processing, through a plurality of dilated convolution kernels, on the first image feature output by the trained first feature extraction module connected to the trained luminance feature extraction module, so as to obtain a second image feature corresponding to the trained luminance feature extraction module, wherein the total number of dilated convolution kernels included in the trained luminance feature extraction module is greater than the total number of dilated convolution kernels included in the trained contrast feature extraction module.
- 8. The method of claim 2, wherein, for each trained first feature fusion module of the plurality of trained first feature fusion modules, the trained first feature fusion module is connected to the trained first feature extraction module through a trained feature pyramid fusion module, and the trained feature pyramid fusion module is configured to perform feature fusion processing on the first image feature output by that trained first feature extraction module and the first image feature output by an adjacent trained first feature extraction module, so as to obtain a third fusion feature, wherein the third fusion feature is fused, by the trained first feature fusion module, with the ultrasonic gain feature to obtain the first fusion feature corresponding to the trained first feature fusion module.
- 9. The method of claim 2, wherein the target detection model to be trained further comprises a second detection head module to be trained connected to the ultrasonic gain feature fusion module to be trained, and the method further comprises: determining, by the second detection head module to be trained, a gain prediction level corresponding to a training ultrasonic image based on a gain training feature, wherein the gain training feature is obtained from the training ultrasonic image sequentially through the first feature extraction modules to be trained, the second feature extraction modules to be trained and the ultrasonic gain feature fusion module to be trained, and the gain prediction level represents the full-image gain degree predicted for the training ultrasonic image; and determining a gain level loss based on the gain prediction level corresponding to the training ultrasonic image and the ultrasonic gain level corresponding to the training ultrasonic image, wherein the gain level loss is a part of the overall loss used for adjusting the model parameters of the target detection model to be trained.
- 10. The method of claim 9, wherein the ultrasonic gain level or the gain prediction level is any one of a gain level sequence consisting of, in order of full-image gain degree from low to high, a lower gain level, a medium gain level and a higher gain level, and determining the gain level loss based on the gain prediction level corresponding to the training ultrasonic image and the ultrasonic gain level corresponding to the training ultrasonic image comprises: determining a penalty weight based on the difference between the gain prediction level corresponding to the training ultrasonic image and the ultrasonic gain level corresponding to the training ultrasonic image, wherein, in order from small to large, the penalty weights are: the penalty weight for the case where the gain prediction level is the same as the ultrasonic gain level, the penalty weight for the case where the gain prediction level is adjacent to the ultrasonic gain level in the gain level sequence, and the penalty weight for the case where the gain prediction level is different from, and not adjacent to, the ultrasonic gain level in the gain level sequence; and determining the gain level loss based on the penalty weight and the confidence corresponding to the ultrasonic gain level.
- 11. The method of claim 2, wherein the method further comprises at least one of: in the case that the number of iterations of the target detection model to be trained is less than or equal to a first preset number, freezing the model parameters of each of the plurality of second feature extraction modules to be trained and the model parameters of the ultrasonic gain feature fusion module to be trained, and training at a first learning rate; in the case that the number of iterations of the target detection model to be trained is greater than the first preset number and less than or equal to a second preset number, unfreezing the model parameters of each of the plurality of second feature extraction modules to be trained and the model parameters of the ultrasonic gain feature fusion module to be trained, freezing the model parameters of each of the plurality of first feature extraction modules to be trained, the model parameters of each of the plurality of first feature fusion modules to be trained and the model parameters of each of the plurality of first detection head modules to be trained, and training at a second learning rate, the second learning rate being less than the first learning rate; and in the case that the number of iterations of the target detection model to be trained is greater than the second preset number, unfreezing the model parameters of the first feature extraction modules to be trained, the model parameters of the first feature fusion modules to be trained and the model parameters of the first detection head modules to be trained, and training at a third learning rate, the third learning rate being less than the second learning rate.
- 12. An ultrasound device comprising a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to execute the computer program to implement the method of processing an ultrasound image according to any one of claims 1-11.
- 13. A storage medium storing computer program instructions which, when executed, are adapted to carry out the method of processing an ultrasound image according to any one of claims 1-11.
- 14. A computer program product comprising computer program instructions which, when executed by a processor, perform the method of processing an ultrasound image according to any one of claims 1-11.
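Claim 4's "compression excitation" is the squeeze-and-excitation (SE) mechanism: global-average-pool each channel, pass the pooled vector through a small bottleneck, and rescale the channels. The following is a minimal NumPy sketch of the fusion path the claim describes, assuming sum-based fusion, a 2x bottleneck ratio and random placeholder weights; none of these choices are fixed by the patent.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feat, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature map: pool each
    channel to a scalar, run a two-layer bottleneck, and reweight
    the channels with the resulting (0, 1) gates."""
    s = feat.mean(axis=(1, 2))                 # squeeze: (C,)
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))  # excitation gates: (C,)
    return feat * e[:, None, None]             # channel-wise rescaling

def fuse_gain_features(feats, rng):
    """Sketch of claim 4: SE each branch feature, sum the results into
    the second fusion feature, then SE once more to obtain the
    ultrasound gain feature. Weights here are random placeholders."""
    c = feats[0].shape[0]
    se_feats = []
    for f in feats:
        w1 = rng.standard_normal((c // 2, c))
        w2 = rng.standard_normal((c, c // 2))
        se_feats.append(squeeze_excite(f, w1, w2))
    fused = np.sum(se_feats, axis=0)           # second fusion feature
    w1 = rng.standard_normal((c // 2, c))
    w2 = rng.standard_normal((c, c // 2))
    return squeeze_excite(fused, w1, w2)       # ultrasound gain feature

rng = np.random.default_rng(0)
branches = [rng.standard_normal((8, 16, 16)) for _ in range(3)]
gain_feat = fuse_gain_features(branches, rng)
print(gain_feat.shape)
```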
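Claim 5's transverse and longitudinal kernels are asymmetric convolution kernels (wider-than-tall and taller-than-wide), which respond preferentially to horizontal and vertical echo detail respectively. A naive NumPy sketch with assumed 1x3 and 3x1 difference kernels:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' cross-correlation of a 2-D image with a 2-D kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Transverse kernel: transverse dimension (3) > longitudinal dimension (1).
k_transverse = np.array([[1.0, 0.0, -1.0]])   # shape (1, 3)
# Longitudinal kernel: longitudinal dimension (3) > transverse dimension (1).
k_longitudinal = k_transverse.T               # shape (3, 1)

# A ramp image: constant horizontal step of 1, vertical step of 5.
img = np.arange(25, dtype=float).reshape(5, 5)
h_resp = conv2d_valid(img, k_transverse)      # horizontal-detail response
v_resp = conv2d_valid(img, k_longitudinal)    # vertical-detail response
print(h_resp.shape, v_resp.shape)
```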
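The "hole convolution kernels" of claims 6 and 7 are dilated (atrous) convolution kernels: the taps of a small kernel are spaced apart, so the receptive field grows without adding parameters. A naive NumPy sketch (the 3x3 kernel, image size and dilation rates are illustrative assumptions):

```python
import numpy as np

def dilated_conv2d_valid(img, kernel, dilation):
    """Naive 'valid' cross-correlation with a dilated kernel: taps are
    `dilation` pixels apart, enlarging the effective receptive field."""
    kh, kw = kernel.shape
    eh = (kh - 1) * dilation + 1   # effective kernel height
    ew = (kw - 1) * dilation + 1   # effective kernel width
    h, w = img.shape
    out = np.zeros((h - eh + 1, w - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

img = np.ones((9, 9))
k = np.ones((3, 3))
r1 = dilated_conv2d_valid(img, k, dilation=1)  # 3x3 effective field
r2 = dilated_conv2d_valid(img, k, dilation=2)  # 5x5 effective field
print(r1.shape, r2.shape)
```

Per claim 7, the luminance branch would simply use more such kernels (e.g. several dilation rates) than the contrast branch; both still apply this same operation.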
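Claim 8's feature pyramid fusion can be sketched in the usual FPN style: upsample the adjacent, coarser first image feature to the finer scale and merge the two. The 2x scale gap, nearest-neighbour upsampling and additive merge are assumptions for illustration only.

```python
import numpy as np

def upsample2x(feat):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def pyramid_fuse(fine, coarse):
    """Sketch of the trained feature pyramid fusion module: bring the
    adjacent coarser feature to the finer resolution and add, giving
    the third fusion feature of claim 8."""
    return fine + upsample2x(coarse)

fine = np.ones((4, 8, 8))    # first image feature at the finer stage
coarse = np.ones((4, 4, 4))  # first image feature at the adjacent stage
third_fusion = pyramid_fuse(fine, coarse)
print(third_fusion.shape)
```

The first feature fusion module would then combine this third fusion feature with the ultrasonic gain feature to produce the first fusion feature fed to a detection head.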
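Claim 10 describes an ordinal penalty on the gain-level classification: mistaking "lower" for "higher" costs more than mistaking it for the adjacent "medium". A minimal sketch in plain Python; the three weight values and the cross-entropy-style form of the loss are illustrative assumptions, the patent only requires the weights to be ordered small to large.

```python
import math

GAIN_LEVELS = ["lower", "medium", "higher"]  # gain level sequence, low to high

def penalty_weight(pred_level, true_level,
                   w_same=0.5, w_adjacent=1.0, w_far=2.0):
    """Penalty weight grows with the ordinal distance between the
    predicted and labelled gain levels (weights are placeholders,
    ordered w_same < w_adjacent < w_far per claim 10)."""
    d = abs(GAIN_LEVELS.index(pred_level) - GAIN_LEVELS.index(true_level))
    if d == 0:
        return w_same
    if d == 1:
        return w_adjacent
    return w_far

def gain_level_loss(pred_level, true_level, true_conf):
    """Negative log of the confidence assigned to the labelled gain
    level, scaled by the ordinal penalty weight."""
    return penalty_weight(pred_level, true_level) * (-math.log(true_conf))

# Predicting "higher" when the label is "lower" is punished harder
# than predicting the adjacent "medium".
print(gain_level_loss("medium", "lower", 0.5),
      gain_level_loss("higher", "lower", 0.5))
```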
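Claim 11's three-stage freeze/unfreeze schedule can be sketched as a pure function of the iteration count. The preset iteration thresholds and the three decreasing learning rates are placeholder values, and the group names are hypothetical labels, not identifiers from the patent.

```python
def training_stage(iteration, first_preset=1000, second_preset=3000,
                   lr1=1e-3, lr2=1e-4, lr3=1e-5):
    """Return (frozen_module_groups, learning_rate) for one iteration,
    following the three stages of claim 11."""
    if iteration <= first_preset:
        # Stage 1: freeze the gain branch (second extractors + gain
        # fusion), train the remaining modules at the first rate.
        return ({"second_extractors", "gain_fusion"}, lr1)
    if iteration <= second_preset:
        # Stage 2: unfreeze the gain branch, freeze the backbone,
        # fusion modules and detection heads, train at lr2 < lr1.
        return ({"first_extractors", "first_fusions", "detection_heads"}, lr2)
    # Stage 3: unfreeze everything and fine-tune jointly at lr3 < lr2.
    return (set(), lr3)

for it in (500, 2000, 5000):
    print(it, training_stage(it))
```

Freezing the gain branch first lets the detection path stabilise before the gain features start moving, while the shrinking learning rate keeps later stages from disturbing what earlier stages learned.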
Description
Ultrasonic image processing method, ultrasonic device and storage medium

Technical Field

The present invention relates to the field of medical imaging technology, and more particularly to a method of processing an ultrasonic image, an ultrasonic device, a storage medium and a computer program product.

Background

As a non-invasive examination technique, ultrasonic examination is widely applied in various clinical diagnostic scenarios by virtue of being non-invasive, real-time and repeatable. In the prior art, however, analysis of an ultrasonic image of a target object mainly relies on the user's visual inspection or empirical judgment, and the examination result for the target object is determined from that interpretation. Because this approach depends heavily on the user's experience and professional level, subjective differences between users make the consistency and accuracy of the examination results difficult to guarantee, which in turn affects the reliability of diagnosis and the effectiveness of clinical decision-making. In addition, when facing a large amount of ultrasonic image data, manual analysis is inefficient and can hardly meet the demand for rapid clinical diagnosis. Accordingly, there is a need for an objective, efficient and repeatable automated analysis method that overcomes these shortcomings of the prior art.

Disclosure of Invention

The present invention has been made in view of the above problems. The invention provides a processing method of an ultrasonic image, an ultrasonic device, a storage medium and a computer program product.
According to one aspect of the invention, a processing method of an ultrasonic image is provided. The processing method comprises: acquiring a target ultrasonic image of a target object; and inputting the target ultrasonic image into a trained target detection model to obtain and display at least one target area in the target ultrasonic image, wherein the trained target detection model is obtained by training based on a plurality of training ultrasonic images, an ultrasonic gain level corresponding to each training ultrasonic image in the plurality of training ultrasonic images, and at least one calibration area corresponding to each training ultrasonic image; the ultrasonic gain level corresponding to each training ultrasonic image represents the labelled full-image gain degree of the training ultrasonic image, the calibration area corresponding to the training ultrasonic image represents an image area of the training ultrasonic image in which tissue of interest is present, and the target area represents an image area of the target ultrasonic image in which tissue of interest is present.
The trained target detection model comprises a plurality of sequentially connected trained first feature extraction modules, a plurality of trained second feature extraction modules, a trained ultrasonic gain feature fusion module, a plurality of trained first feature fusion modules and a plurality of trained first detection head modules. Each trained second feature extraction module is connected to one of the trained first feature extraction modules, and that module is different from the trained first feature extraction modules connected to the other trained second feature extraction modules. The trained ultrasonic gain feature fusion module is connected to each of the trained second feature extraction modules. Each trained first feature fusion module is connected both to the trained ultrasonic gain feature fusion module and to one of the trained first feature extraction modules, and that module is different from the trained first feature extraction modules connected to the other trained first feature fusion modules. Each trained first detection head module is connected to one of the trained first feature fusion modules, and that module is different from the trained first feature fusion modules connected to the other trained first detection head modules.
Illustratively, the processing method further comprises: for each trained first feature extraction module in the plurality of sequentially connected trained first feature extraction modules, performing, by the trained first feature extraction module, feature extraction processing on the target ultrasonic image or on the first image feature corresponding to the trained first feature extraction module whose output end is connected to the input end of this trained first feature extraction module, so as to obtain a first image feature corresponding to this trained first feature extraction module; and, for each trained second feature extraction module in the plurality of trained second feature extraction modules, performing, by the trained second feature extraction module, feature extraction processing on the first image feature corresponding to the trained first feature extraction module connected to the trained second feature extraction module, so as to obtain a second image feature corresponding to the trained second feature extraction module.