
CN-122004924-A - Ultrasonic image processing method, ultrasonic device and storage medium

CN 122004924 A

Abstract

Embodiments of the invention provide an ultrasound image processing method, an ultrasound device, and a storage medium. The method comprises: acquiring a target ultrasound image sequence of a target object; and inputting the target ultrasound image sequence into a trained target detection model to obtain and display at least one target region in the last target ultrasound image of the sequence. The trained target detection model is obtained by training on a plurality of training ultrasound image sequences, a training dynamic-change-degree sequence corresponding to each training ultrasound image sequence, and at least one training region in the last training ultrasound image of each training ultrasound image sequence. The trained model can therefore combine dynamic information that varies over time across the target ultrasound images with the static visual features of the target ultrasound image to determine the target region with higher accuracy, particularly in scenes where part of the lesion tissue changes dynamically.

Inventors

  • Jiang Daimin
  • Zhou Guoyi

Assignees

  • SonoScape Medical Corp. (深圳开立生物医疗科技股份有限公司)

Dates

Publication Date
2026-05-12
Application Date
2026-04-13

Claims (12)

  1. A method of processing an ultrasound image, the method comprising: acquiring a target ultrasound image sequence of a target object, wherein the target ultrasound image sequence comprises target ultrasound images of the target object acquired in sequence; and inputting the target ultrasound image sequence into a trained target detection model to obtain and display at least one target region in the last target ultrasound image in the target ultrasound image sequence, wherein the trained target detection model is obtained by training on a plurality of training ultrasound image sequences, a training dynamic-change-degree sequence corresponding to each training ultrasound image sequence of the plurality, and at least one training region in the last training ultrasound image of each training ultrasound image sequence; the training dynamic-change-degree sequence corresponding to each training ultrasound image sequence represents the annotated degree of dynamic change of the image corresponding to each of the plurality of training ultrasound images in that sequence; the target region represents an image region of the target ultrasound image in which tissue of interest is present; and the training region represents the image region of the training ultrasound image in which tissue of interest is present.
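The inference flow claimed above — acquire a sequence of frames, feed it to a detection model, and report regions in the last frame — can be sketched as below. This is an illustrative stand-in only: `toy_model`, the sequence length, and the `(x, y, w, h)` box format are assumptions; claim 1 fixes none of them.

```python
import numpy as np

def detect_in_last_frame(frames, model, seq_len=4):
    """Run detection on the most recent frame using a sequence of frames.

    frames: list of 2-D numpy arrays (grayscale ultrasound frames),
            ordered by acquisition time.
    model:  callable mapping a stacked (T, H, W) sequence to a list of
            (x, y, w, h) boxes in the last frame.
    """
    if len(frames) < seq_len:
        raise ValueError("need at least seq_len frames")
    sequence = np.stack(frames[-seq_len:])  # the seq_len most recent frames
    return model(sequence)

# A stand-in "model" that flags a small box around the brightest pixel
# of the last frame (real detection would be a learned network).
def toy_model(sequence):
    last = sequence[-1]
    y, x = np.unravel_index(np.argmax(last), last.shape)
    return [(max(x - 2, 0), max(y - 2, 0), 5, 5)]
```

In practice `model` would be the trained target detection model of the claim; the wrapper only illustrates the sliding-window sequence input and last-frame output.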
  2. The method of claim 1, wherein the trained target detection model comprises a plurality of first feature extraction modules, a first feature fusion module connected to each of the plurality of first feature extraction modules, a target dynamic-change-degree determination module connected to the first feature fusion module, a second feature fusion module connected to the first feature fusion module and to the target dynamic-change-degree determination module, and a region detection module connected to the second feature fusion module, and wherein inputting the target ultrasound image sequence into the trained target detection model to obtain and display at least one target region in the last target ultrasound image in the target ultrasound image sequence comprises: for each first feature extraction module of the plurality, performing feature extraction on the target ultrasound image input to that module to obtain a target ultrasound image feature corresponding to that image; performing, by the first feature fusion module, feature concatenation on the target ultrasound image features corresponding to each target ultrasound image to obtain a first fusion feature corresponding to the target ultrasound image sequence; determining, by the target dynamic-change-degree determination module and based on the first fusion feature, a target dynamic change degree corresponding to each target ultrasound image, wherein, for each target ultrasound image, the target dynamic change degree represents the degree of dynamic change of the image corresponding to that target ultrasound image; determining, by the second feature fusion module, a second fusion feature corresponding to the target ultrasound image sequence based on the target dynamic change degree corresponding to each target ultrasound image and the first fusion feature; and determining and displaying, by the region detection module and based on the second fusion feature, at least one target region in the last target ultrasound image in the target ultrasound image sequence.
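A minimal numeric sketch of the forward pass described in claim 2, with crude stand-ins for every module. The real extractors, fusion operators, and detection head are learned networks that the claim does not specify; the pooled statistics, softmax weighting, and thresholding below are assumptions used only to make the data flow concrete.

```python
import numpy as np

def extract_features(frame):
    # Stand-in for a first feature extraction module: pooled statistics.
    return np.array([frame.mean(), frame.std(), frame.max()])

def forward(sequence):
    """sequence: (T, H, W) array of T consecutive ultrasound frames."""
    T = len(sequence)
    # Per-frame extraction (one first feature extraction module per frame).
    feats = np.stack([extract_features(f) for f in sequence])           # (T, 3)
    # First fusion module: concatenate the per-frame features.
    first_fused = np.concatenate(feats)                                 # (T*3,)
    # Dynamic-change-degree module: here, softmax-normalised mean
    # frame-to-frame absolute difference (a stand-in for the learned module).
    diffs = np.array([0.0] + [np.abs(sequence[t] - sequence[t - 1]).mean()
                              for t in range(1, T)])
    change = np.exp(diffs) / np.exp(diffs).sum()                        # (T,)
    # Second fusion module: reweight per-frame features by change degree.
    second_fused = (first_fused.reshape(T, -1) * change[:, None]).ravel()
    # Region detection module: threshold the last frame using the fused
    # feature (a placeholder for the learned detection head).
    mask = sequence[-1] > second_fused.mean()
    return mask, change
```

The point of the sketch is the topology: extraction per frame, one temporal fusion, a per-frame change degree derived from the fused feature, a change-weighted second fusion, and detection on the last frame only.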
  3. The method of claim 2, wherein the target dynamic-change-degree determination module comprises a floating-punctate-object displacement feature extraction module, a cyst-interval deformation feature extraction module, a lesion deformation feature extraction module, a third feature fusion module connected to the three extraction modules, and a first dynamic-change-degree determination module connected to the third feature fusion module, and wherein determining, based on the first fusion feature, the target dynamic change degree corresponding to each target ultrasound image comprises: performing feature extraction on the first fusion feature by the floating-punctate-object displacement feature extraction module to obtain a floating-punctate-object displacement feature, which represents image features of moving floating punctate objects in the target ultrasound image sequence; performing feature extraction on the first fusion feature by the cyst-interval deformation feature extraction module to obtain a cyst-interval deformation feature, which represents image features of deformed cyst intervals (septa) in the target ultrasound image sequence, the receptive field of the cyst-interval deformation feature extraction module being larger than that of the floating-punctate-object displacement feature extraction module; performing feature extraction on the first fusion feature by the lesion deformation feature extraction module to obtain a lesion deformation feature, which represents image features of deformed lesions in the target ultrasound image sequence, the receptive field of the lesion deformation feature extraction module being larger than that of the cyst-interval deformation feature extraction module; performing feature fusion on the floating-punctate-object displacement feature, the cyst-interval deformation feature, and the lesion deformation feature by the third feature fusion module to obtain a third fusion feature; and determining, by the first dynamic-change-degree determination module and based on the third fusion feature, the target dynamic change degree corresponding to each target ultrasound image.
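The three branches in claim 3 differ only in receptive field size (displacement of small floating objects, deformation of cyst intervals, deformation of whole lesions). A toy 1-D analogue, assuming moving-average filters of increasing width as stand-ins for the three learned branches, a concatenation as the third fusion module, and a sigmoid head as the first change-degree module:

```python
import numpy as np

def avg_filter_1d(x, k):
    """Moving-average filter of odd width k with edge padding -- a crude
    stand-in for a conv branch whose receptive field grows with k."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def dynamic_change_degree(first_fused):
    """first_fused: 1-D fused feature vector from the first fusion module."""
    small = avg_filter_1d(first_fused, 3)    # floating-punctate displacement branch
    medium = avg_filter_1d(first_fused, 7)   # cyst-interval deformation branch
    large = avg_filter_1d(first_fused, 15)   # lesion deformation branch
    # Third fusion module: concatenate the three branch outputs.
    fused = np.concatenate([small, medium, large])
    # First change-degree head: squash to a scalar in (0, 1).
    return 1.0 / (1.0 + np.exp(-fused.mean()))
```

The filter widths 3 / 7 / 15 are arbitrary; the claim requires only that the receptive fields be strictly increasing across the three branches.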
  4. The method of claim 2, wherein determining and displaying at least one target region in the last target ultrasound image in the target ultrasound image sequence based on the second fusion feature comprises: performing compression-fusion processing on the second fusion feature along the channel dimension, and determining and displaying at least one target region in the last target ultrasound image in the target ultrasound image sequence based on the compression-fused second fusion feature.
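Compression-fusion along the channel dimension, as in claim 4, is analogous to a learned 1×1 convolution collapsing C channels into one map. A sketch with assumed uniform mixing weights (the claim does not specify the operator):

```python
import numpy as np

def compress_channels(feature_map, weights=None):
    """Compress a (C, H, W) fused feature map along the channel dimension.

    weights: optional (C,) mixing weights; defaults to a uniform average,
    standing in for learned 1x1-convolution weights.
    """
    c = feature_map.shape[0]
    if weights is None:
        weights = np.full(c, 1.0 / c)
    return np.tensordot(weights, feature_map, axes=1)  # -> (H, W)
```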
  5. The method of claim 2, further comprising: for each training ultrasound image sequence, determining, by the target dynamic-change-degree determination module to be trained, a predicted dynamic-change-degree sequence corresponding to that training ultrasound image sequence, and determining a dynamic-change-degree loss based on the predicted dynamic-change-degree sequence and the training dynamic-change-degree sequence corresponding to that training ultrasound image sequence, wherein the predicted dynamic-change-degree sequence represents the predicted degree of dynamic change of the image corresponding to each training ultrasound image in that sequence, and the dynamic-change-degree loss is part of the overall loss used to adjust the model parameters of the target detection model to be trained.
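Claim 5 does not name a loss function; mean-squared error over the sequence is one plausible choice for regressing a per-frame change degree against its annotation, sketched here under that assumption:

```python
import numpy as np

def change_degree_loss(predicted, labelled):
    """MSE between the predicted and the annotated dynamic-change-degree
    sequences of one training ultrasound image sequence. The MSE form is
    an assumption; the claim only says the loss compares the two sequences."""
    predicted = np.asarray(predicted, dtype=float)
    labelled = np.asarray(labelled, dtype=float)
    assert predicted.shape == labelled.shape
    return float(np.mean((predicted - labelled) ** 2))
```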
  6. The method of claim 2, wherein the trained target detection model is further trained based on an ultrasound gain level corresponding to each training ultrasound image sequence, the ultrasound gain level corresponding to each training ultrasound image sequence representing the full-image gain degree annotated for that training ultrasound image sequence.
  7. The method of claim 6, wherein the region detection module comprises a plurality of sequentially connected trained second feature extraction modules, a plurality of trained third feature extraction modules, a trained ultrasound gain feature fusion module connected to each of the plurality of trained third feature extraction modules, a plurality of trained fourth feature fusion modules, and a plurality of trained first detection head modules; each trained third feature extraction module is connected to one trained second feature extraction module, which differs from the trained second feature extraction modules connected to the other trained third feature extraction modules; each trained fourth feature fusion module is connected to the trained ultrasound gain feature fusion module and to one trained second feature extraction module, which differs from the trained second feature extraction modules connected to the other trained fourth feature fusion modules; and each trained first detection head module is connected to one trained fourth feature fusion module, which differs from the trained fourth feature fusion modules connected to the other trained first detection head modules.
  8. The method of claim 7, wherein, before training is complete, the region detection module to be trained further comprises a second detection head module to be trained, connected to the ultrasound gain feature fusion module to be trained, and the method further comprises: determining, by the second detection head module to be trained and based on a gain training feature output by the ultrasound gain feature fusion module to be trained, a predicted gain level corresponding to a training ultrasound image sequence, wherein the gain training feature is obtained from the second fusion feature corresponding to the training ultrasound image sequence by passing it sequentially through a second feature extraction module to be trained, a third feature extraction module to be trained, and the ultrasound gain feature fusion module to be trained, and the predicted gain level represents the full-image gain degree predicted for the training ultrasound image sequence; and determining a gain-level loss based on the predicted gain level corresponding to the training ultrasound image sequence and the ultrasound gain level corresponding to the training ultrasound image sequence, wherein the gain-level loss is part of the overall loss used to adjust the model parameters of the target detection model to be trained.
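If the annotated gain level is treated as a class index, cross-entropy over the detection head's logits is a natural form for the gain-level loss, and a weighted sum is one common way to make the auxiliary losses "part of" the overall loss. Both choices (cross-entropy, summation, and the weights) are assumptions, since the claims leave them open:

```python
import numpy as np

def gain_level_loss(logits, true_level):
    """Cross-entropy between predicted full-image gain-level logits and the
    annotated gain level (an integer class index). Loss form assumed."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())  # stable softmax
    probs /= probs.sum()
    return float(-np.log(probs[true_level]))

def overall_loss(region_loss, change_loss, gain_loss,
                 w_change=1.0, w_gain=1.0):
    # The claims only state that the auxiliary losses are part of the
    # overall loss; a weighted sum with assumed weights is used here.
    return region_loss + w_change * change_loss + w_gain * gain_loss
```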
  9. The method of claim 1, wherein the trained target detection model is further trained based on a region class corresponding to each training region, the region class representing a disease class or a tissue name of the tissue of interest in that training region.
  10. An ultrasound device comprising a memory and a processor, wherein the memory is configured to store a computer program and the processor is configured to execute the computer program to implement the method of processing an ultrasound image according to any one of claims 1-9.
  11. A storage medium storing computer program instructions which, when executed, perform the method of processing an ultrasound image according to any one of claims 1-9.
  12. A computer program product comprising computer program instructions which, when executed by a processor, perform the method of processing an ultrasound image according to any one of claims 1-9.

Description

Ultrasonic image processing method, ultrasonic device and storage medium

Technical Field

The present invention relates to the field of medical imaging technology, and more particularly to a method of processing an ultrasound image, an ultrasound device, a storage medium, and a computer program product.

Background

Ultrasound examination is a non-invasive inspection technique that, by virtue of being non-invasive, real-time, and repeatable, is widely applied in clinical diagnostic scenarios. In the prior art, however, analysis of a target object's ultrasound images relies mainly on a user's visual inspection and empirical judgment, from which the examination result is determined. Because this approach depends heavily on the user's experience and expertise, subjective differences between users make the consistency and accuracy of the examination result difficult to guarantee, which in turn affects the reliability of diagnosis and the effectiveness of clinical decisions. Moreover, when a large volume of ultrasound image data must be processed, manual analysis is inefficient and cannot meet the demands of rapid clinical diagnosis. There is therefore a need for an objective, efficient, and repeatable automated analysis method that overcomes these shortcomings of the prior art.

Disclosure of Invention

The present invention has been made in view of the above problems. The invention provides a method of processing an ultrasound image, an ultrasound device, a storage medium, and a computer program product.
According to one aspect of the invention, a method of processing an ultrasound image is provided. The method comprises: acquiring a target ultrasound image sequence of a target object, wherein the target ultrasound image sequence comprises target ultrasound images of the target object acquired in sequence; and inputting the target ultrasound image sequence into a trained target detection model to obtain and display at least one target region in the last target ultrasound image in the target ultrasound image sequence, wherein the trained target detection model is obtained by training on a plurality of training ultrasound image sequences, a training dynamic-change-degree sequence corresponding to each training ultrasound image sequence of the plurality, and at least one training region in the last training ultrasound image of each training ultrasound image sequence; the training dynamic-change-degree sequence corresponding to each training ultrasound image sequence represents the annotated degree of dynamic change of the image corresponding to each of the plurality of training ultrasound images in that sequence; the target region represents an image region of the target ultrasound image in which tissue of interest is present; and the training region represents the image region of the training ultrasound image in which tissue of interest is present.
The trained target detection model comprises a plurality of first feature extraction modules, a first feature fusion module connected to each of the plurality of first feature extraction modules, a target dynamic-change-degree determination module connected to the first feature fusion module, a second feature fusion module connected to the first feature fusion module and to the target dynamic-change-degree determination module, and a region detection module connected to the second feature fusion module. Inputting the target ultrasound image sequence into the trained target detection model to obtain and display at least one target region in the last target ultrasound image in the target ultrasound image sequence comprises: for each first feature extraction module of the plurality, performing feature extraction on the target ultrasound image input to that module to obtain a target ultrasound image feature corresponding to that image; performing, by the first feature fusion module, feature concatenation on the target ultrasound image features corresponding to each target ultrasound image to obtain a first fusion feature corresponding to the target ultrasound image sequence; determining, by the target dynamic-change-degree determination module and based on the first fusion feature, a target dynamic change degree corresponding to each target ultrasound image, wherein, for each target ultrasound image, the target dynamic change degree represents the degree of dynamic change of the image corresponding to that target ultrasound image; and determining, by the second feature fusion module, a second fusion feature corresponding to the target ultrasound image sequence based on the target dynamic change degree corresponding to each target ultrasound image and the first fusion feature.