
US-12626494-B2 - Ultrasound image feature segmentation

US 12626494 B2

Abstract

By way of example, a method for training a model to segment test images, wherein the test images comprise ultrasound image data, includes: receiving, at a self-supervised learning framework, a first plurality of training images, wherein the first plurality of training images include ultrasound data corresponding to patients' livers; processing the first plurality of training images with a learning algorithm of the self-supervised learning framework, and responsively adapting a trained model; and receiving, at a supervised learning framework, the trained model and a second plurality of training images, wherein the second plurality of training images include ultrasound data corresponding to patients' livers and annotations of the livers, and responsively adapting the trained model.

Inventors

  • Abder-Rahman Ali
  • Arinc Ozturk
  • Michael Wang
  • Michael J. Washburn
  • Yelena Tsymbalenko
  • Anthony E. Samir
  • Viksit Kumar
  • Shuhang Wang
  • Theodore Pierce
  • Qian Li

Assignees

  • GE Precision Healthcare LLC
  • THE GENERAL HOSPITAL CORPORATION

Dates

Publication Date
2026-05-12
Application Date
2023-06-05

Claims (15)

  1. A method for segmenting structures in ultrasound image data, the method comprising: obtaining, using ultrasonic energy, ultrasound image data of a patient, including a liver; receiving, at a processor, the ultrasound image data; executing, by the processor, inference instructions to inference from a trained artificial intelligence model to segment, in the ultrasound image data, the liver in real-time to form a segmented liver, and to segment a poor-probe-contact region of an ultrasound probe with the patient's skin; and presenting, on a display, the ultrasound image data and information corresponding to the poor-probe-contact region with the ultrasound image data.
  2. The method of claim 1, further comprising: determining a region of interest within the segmented liver; and performing shear-wave elastography on data obtained from the region of interest.
  3. A system, comprising: an ultrasound probe and receiver configured to obtain ultrasound image data of a patient, including a liver; a processor configured to receive the ultrasound image data and to execute inference instructions to inference from a trained artificial intelligence model to segment, in the ultrasound image data, the liver in real-time to form a segmented liver and to segment, in the ultrasound image data, a poor-probe-contact region of an ultrasound probe with the patient's skin; and a display configured to present the ultrasound image data and information associated with the segmented liver and further configured to present information corresponding to the poor-probe-contact region with the ultrasound image data.
  4. The system of claim 3, wherein the processor is further configured to determine a region of interest within the segmented liver, and cause a shear-wave elastography process to be performed to obtain shear-wave elastography data from the region of interest.
  5. The method of claim 1, wherein the trained artificial intelligence model is adaptively trained by a process comprising: receiving, at a self-supervised learning framework, a first plurality of training images, wherein the first plurality of training images include ultrasound data corresponding to patients' livers; processing the first plurality of training images with a learning algorithm of the self-supervised learning framework, and responsively adapting the trained artificial intelligence model; and receiving, at a supervised learning framework, the trained artificial intelligence model and a second plurality of training images, wherein the second plurality of training images include ultrasound data corresponding to patients' livers and annotations of the patients' livers, and responsively adapting the trained artificial intelligence model.
  6. The method of claim 5, wherein the self-supervised learning framework comprises a contrastive learning framework.
  7. The method of claim 6, wherein the self-supervised learning framework employs a convolutional neural network as an encoder.
  8. The method of claim 6, wherein the self-supervised learning framework employs a projection head.
  9. The method of claim 5, wherein the self-supervised learning framework comprises a SimCLR framework.
  10. The method of claim 5, wherein the supervised learning framework comprises an ENet framework.
  11. The method of claim 5, wherein the supervised learning framework comprises an encoder including a plurality of stages and a decoder including a plurality of stages, wherein each stage includes a plurality of bottleneck modules configured to manage dimensionality.
  12. The method of claim 5, wherein the supervised learning framework comprises a maximum pooling layer, wherein the supervised learning framework further comprises a decoder including a maximum unpooling layer and a spatial convolution algorithm.
  13. The method of claim 5, wherein the supervised learning framework is configured to avoid bias terms in projections.
  14. A system, comprising: an ultrasound probe and receiver configured to obtain ultrasound image data of a patient, including a liver; a processor configured to receive the ultrasound image data and to execute inference instructions to inference from a trained artificial intelligence model to segment, in the ultrasound image data, the liver in real-time to form a segmented liver; a display configured to present the ultrasound image data and information associated with the segmented liver; and wherein the trained artificial intelligence model is trained by a process comprising: receiving, at a self-supervised learning framework, a first plurality of training images, wherein the first plurality of training images include ultrasound data corresponding to patients' livers; processing the first plurality of training images with a learning algorithm of the self-supervised learning framework, and responsively adapting the trained artificial intelligence model; and receiving, at a supervised learning framework, the trained artificial intelligence model and a second plurality of training images, wherein the second plurality of training images include ultrasound data corresponding to patients' livers and annotations of the livers, and responsively adapting the trained artificial intelligence model, wherein the processor is further configured to execute inference instructions to segment, in the ultrasound image data, a poor-probe-contact region of an ultrasound probe with a patient's skin, and wherein the display is further configured to present information corresponding to the poor-probe-contact region with the ultrasound image data.
  15. The system of claim 14, wherein the processor is further configured to determine a region of interest within the segmented liver, and cause a shear-wave elastography process to be performed to obtain shear-wave elastography data from the region of interest.
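Claims 6 and 9 recite a contrastive, SimCLR-style self-supervised framework. As an illustrative sketch only (the patent discloses no code), the core objective of SimCLR is the NT-Xent contrastive loss, which pulls the projection-head outputs of two augmented views of the same image together and pushes all other pairs apart. The function name and this NumPy formulation are assumptions, not the patent's implementation:

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss, the
    contrastive objective used by SimCLR. z_a and z_b hold the
    projection-head outputs for two augmented views of the same batch."""
    n = z_a.shape[0]
    z = np.concatenate([z_a, z_b], axis=0)             # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit rows -> cosine sims
    sim = z @ z.T / temperature                        # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    # The positive partner of sample i is its other augmented view.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

Lowering the loss requires each view to be more similar to its paired view than to every other image in the batch, which is what drives the encoder (and projection head, claim 8) to learn liver-relevant features without annotations.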

Description

CROSS REFERENCE TO RELATED APPLICATIONS

[Not Applicable]

BACKGROUND

Generally, this application relates to ultrasound imaging and shear-wave elastography. Non-alcoholic fatty liver disease (NAFLD), a cause of chronic liver disease, can be characterized or caused by the accumulation of excess fat in the liver, leading to damage and inflammation. Currently, there is an upward trend in the incidence of NAFLD in the U.S., resulting in substantial medical costs. Liver biopsy can be used to diagnose NAFLD, but it is invasive, relatively expensive, and may be subject to sampling error and interpretative variability. Due to these limitations, non-invasive alternatives have been developed, including ultrasound. As NAFLD progresses, liver stiffness increases, making stiffness a useful biomarker. Shear-wave elastography (SWE) is an ultrasound method that can measure or estimate the stiffness of liver tissue.

SUMMARY

According to embodiments, a method for training a model to segment test images, wherein the test images comprise ultrasound image data, includes: receiving, at a self-supervised learning framework, a first plurality of training images, wherein the first plurality of training images include ultrasound data corresponding to patients' livers; processing the first plurality of training images with a learning algorithm of the self-supervised learning framework, and responsively adapting a trained model; and receiving, at a supervised learning framework, the trained model and a second plurality of training images, wherein the second plurality of training images include ultrasound data corresponding to patients' livers and annotations of the livers, and responsively adapting the trained model. The self-supervised learning framework may include a contrastive learning framework. The self-supervised learning framework may employ a convolutional neural network as an encoder. The self-supervised learning framework may employ a projection head.
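The two-stage training process summarized above — self-supervised adaptation on unlabeled images, followed by supervised adaptation on annotated images — might be sketched as follows. This is a deliberately toy illustration: the linear "encoder", the reconstruction stand-in for the self-supervised objective, and both function names are assumptions, not the patent's implementation. The point is the hand-off: stage two receives the weights produced by stage one.

```python
import numpy as np

def pretrain_self_supervised(images, steps=100, lr=0.01):
    """Stage 1 (sketch): adapt encoder weights from unlabeled images.
    A toy reconstruction objective stands in for contrastive learning."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=(images.shape[1], images.shape[1]))
    for _ in range(steps):
        recon = images @ w
        grad = images.T @ (recon - images) / len(images)
        w -= lr * grad
    return w

def finetune_supervised(w, images, masks, steps=100, lr=0.01):
    """Stage 2 (sketch): receive the pretrained weights plus annotated
    images and adapt them toward the segmentation labels."""
    for _ in range(steps):
        pred = images @ w
        grad = images.T @ (pred - masks) / len(images)
        w -= lr * grad
    return w
```

In the patent's terms, the self-supervised framework "responsively adapts" the model from the first plurality of training images, and the supervised framework then receives that model together with the annotated second plurality and adapts it further.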
The self-supervised learning framework may include a SimCLR framework. The supervised learning framework may include an ENet framework. The self-supervised learning framework may include a SimCLR framework and the supervised learning framework may comprise an ENet framework. The supervised learning framework may include an encoder including a plurality of stages and a decoder including a plurality of stages, wherein each stage includes a plurality of bottleneck modules configured to manage dimensionality. The supervised learning framework may include a maximum pooling layer, wherein the supervised framework further comprises a decoder including a maximum unpooling layer and a spatial convolution algorithm. The supervised learning framework may be configured to avoid bias terms in projections.

According to embodiments, a method for segmenting structures in ultrasound image data includes: obtaining, using ultrasonic energy, ultrasound image data of a patient, including a liver; receiving, at a processor, the ultrasound image data; executing, by the processor, inference instructions to segment, in the ultrasound image data, the liver in real-time to form a segmented liver; and presenting, on a display, the ultrasound image data. The method may further include: determining a region of interest within the segmented liver; and performing shear-wave elastography on data obtained from the region of interest. The region of interest may be automatically determined. The method may further include executing, by the processor, inference instructions to segment, in the ultrasound image data, a poor-probe-contact region. The method may further include presenting, on the display, information corresponding to the poor-probe-contact region with the ultrasound image data.
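The paired maximum pooling and maximum unpooling layers mentioned above are a distinctive ENet design: the encoder records the argmax position of each pooling window so the decoder can later restore values to exactly those positions. A minimal single-channel, stride-2 NumPy sketch of that index hand-off (function names assumed; a real ENet operates on multi-channel tensors):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling that also records flat argmax indices, as ENet's
    encoder does so the decoder can later unpool to the same positions."""
    h, w = x.shape
    pooled = np.zeros((h // 2, w // 2))
    idx = np.zeros((h // 2, w // 2), dtype=int)
    for i in range(h // 2):
        for j in range(w // 2):
            win = x[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            k = int(win.argmax())  # position of the max inside the window
            pooled[i, j] = win.flat[k]
            idx[i, j] = (2 * i + k // 2) * w + (2 * j + k % 2)
    return pooled, idx

def max_unpool_2x2(pooled, idx, shape):
    """Maximum unpooling: place each pooled value back at the position
    recorded during pooling; all other positions stay zero."""
    out = np.zeros(shape)
    out.flat[idx.ravel()] = pooled.ravel()
    return out
```

Reusing the pooling indices instead of learned upsampling keeps spatial detail at low cost, which is part of why ENet suits the real-time segmentation recited in the claims.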
According to embodiments, a system includes: an ultrasound probe and receiver configured to obtain ultrasound image data of a patient, including a liver; a processor configured to receive the ultrasound image data and to execute inference instructions to segment, in the ultrasound image data, the liver in real-time to form a segmented liver; and a display configured to present the ultrasound image data and information associated with the segmented liver. The processor may be further configured to determine a region of interest within the segmented liver, and cause a shear-wave elastography process to be performed to obtain shear-wave elastography data from the region of interest. The region of interest may be automatically determined. The processor may be further configured to execute inference instructions to segment, in the ultrasound image data, a poor-probe-contact region. The display may be further configured to present information corresponding to the poor-probe-contact region with the ultrasound image data.

BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS

FIG. 1 illustrates an ultrasound system, according to embodiments. FIG. 2 illustrates an ultrasound system for performing shear-wave elastography, according to embodiments. FIG.
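One way an automatically determined region of interest could be placed inside the segmented liver for shear-wave elastography — a hypothetical helper for illustration, not the method disclosed in the patent — is to take the centroid of the binary liver mask and accept an ROI box only if it lies entirely within the mask:

```python
import numpy as np

def propose_swe_roi(liver_mask, roi_h=2, roi_w=2):
    """Hypothetical helper: center a shear-wave-elastography ROI at the
    centroid of the segmented liver mask, returning (top, left, h, w)
    only if the box fits entirely inside the segmented region."""
    ys, xs = np.nonzero(liver_mask)
    if ys.size == 0:          # empty segmentation: nothing to measure
        return None
    cy, cx = int(ys.mean()), int(xs.mean())
    top, left = cy - roi_h // 2, cx - roi_w // 2
    if top < 0 or left < 0:
        return None
    box = liver_mask[top:top + roi_h, left:left + roi_w]
    if box.shape == (roi_h, roi_w) and box.all():
        return top, left, roi_h, roi_w
    return None               # centroid box spills outside the liver
```

A production system would also need to exclude any segmented poor-probe-contact region and vessels from the candidate ROI; this sketch only shows the containment check.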