US-12620497-B2 - Disease feature recognition in diagnostic images and disease progression prediction

US 12620497 B2

Abstract

The present disclosure describes systems configured to recognize indicators of a medical condition within a diagnostic image and predict the progression of the medical condition based on the recognized indicators. The systems can include neural networks trained to extract disease features from diagnostic images and neural networks configured to model the progression of such features at future time points selectable by a user. Modeling the progression may involve factoring in various treatment options and patient-specific information. The predicted outcomes can be displayed on a user interface customized to specific representations of the predicted outcomes generated by one or more of the underlying neural networks. Representations of the predicted outcomes include synthesized future images, probabilities of clinical outcomes, and/or descriptors of disease features that may be likely to develop over time.

Inventors

  • Man M Nguyen
  • Jochen Kruecker
  • Raghavendra Srinivasa Naidu
  • Haibo Wang

Assignees

  • KONINKLIJKE PHILIPS N.V.

Dates

Publication Date
2026-05-05
Application Date
2021-12-07

Claims (14)

  1. A disease prediction system comprising: one or more processors in communication with an image acquisition device and configured to: transform, with a first neural network, at least one image of a target region within a patient, to extract a disease feature from the image that was input to the first neural network to produce a disease feature output; input the disease feature output to a second neural network, different from the first neural network, in response to a user input; and transform, with the second neural network, the disease feature output, where the second neural network is configured to output a predicted outcome indication of the disease feature at a future time point, wherein the second neural network is selected from a plurality of policy neural networks comprising a first policy neural network configured to generate a synthesized image of the disease feature and a second policy neural network configured to generate a list of disease descriptors.
  2. The disease prediction system of claim 1, wherein the user input comprises a selection of the synthesized image of the disease feature, or the list of future disease features.
  3. The disease prediction system of claim 1, wherein the user input comprises a treatment option, patient-specific information, or both.
  4. The disease prediction system of claim 1, wherein the disease feature comprises a tumor, a lesion, an abnormal vascularization, or a combination thereof.
  5. The disease prediction system of claim 1, further comprising: a graphical user interface configured to receive the user input and display the predicted outcome indication of the disease feature at the future time point, wherein the image acquisition device is configured to generate the at least one image of the target region within the patient, and wherein the image acquisition device comprises an ultrasound system, an MRI system, or a CT system.
  6. The disease prediction system of claim 1, wherein the future time point is selectable by a user and is between one week and one year from a current date.
  7. The disease prediction system of claim 1, wherein the first neural network is operatively associated with a training algorithm configured to receive an array of training inputs and known outputs, wherein the training inputs comprise a longitudinal sample of images obtained from patients having a medical condition, and the known outputs comprise images of the disease feature.
  8. The disease prediction system of claim 1, wherein the second neural network is operatively associated with a training algorithm configured to receive a second array of training inputs and known outputs, wherein the training inputs comprise the disease feature and the known outputs comprise the predicted outcome.
  9. A method of disease prediction, the method comprising: generating at least one image of a target region within a patient; applying a first neural network to the image, the first neural network configured to extract a disease feature from the image to produce a disease feature output; inputting the disease feature output to a second neural network, different from the first neural network, in response to a user input; applying the second neural network to the disease feature, the second neural network configured to generate a predicted outcome of the disease feature at a future time point; and displaying the predicted outcome generated by the second neural network, wherein the second neural network is selected from a plurality of policy neural networks comprising a first policy neural network configured to generate a synthesized image of the disease feature and a second policy neural network configured to generate a list of disease descriptors.
  10. The method of claim 9, wherein the user input comprises a selection of the synthesized image of the disease feature, or the list of disease descriptors.
  11. The method of claim 9, wherein the user input comprises a treatment option, patient-specific information, or both.
  12. The method of claim 9, wherein the disease feature comprises a tumor, a lesion, an abnormal vascularization, or a combination thereof.
  13. The method of claim 9, wherein generating the at least one image of the target region within the patient comprises acquiring ultrasound echoes generated in response to ultrasound pulses transmitted at the target region.
  14. A non-transitory computer-readable medium comprising executable instructions, which when executed, cause a processor of a disease progression prediction system to: apply a first neural network to at least one image of a target region within a patient, the first neural network configured to extract a disease feature from an image to produce a disease feature output; input the disease feature output to a second neural network, different from the first neural network, in response to a user input; and apply the second neural network to the disease feature, the second neural network configured to generate a predicted outcome of the disease feature at a future time point, wherein the second neural network is selected from a plurality of policy neural networks comprising a first policy neural network configured to generate a synthesized image of the disease feature and a second policy neural network configured to generate a list of disease descriptors.
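
Claims 1, 9, and 14 each recite the same two-stage pipeline: a first neural network extracts a disease feature from an image, and a second network, selected from a plurality of policy networks according to user input, transforms that feature into a predicted outcome at a future time point. The sketch below illustrates that control flow only. All function names, the feature representation, and the growth model are hypothetical stand-ins; the patent does not specify network architectures or any API.

```python
# Illustrative sketch of the claimed two-stage pipeline.
# Plain functions stand in for trained neural networks.

def extract_disease_feature(image):
    """Stand-in for the first neural network: reduce an image to a feature output."""
    flat = [px for row in image for px in row]
    return {"mean": sum(flat) / len(flat), "max": max(flat)}

def synthesize_future_image(feature, weeks_ahead):
    """Stand-in for the first policy network: a synthesized image of the
    disease feature at the selected future time point (toy growth model)."""
    growth = 1.0 + 0.05 * weeks_ahead  # assumed, not from the patent
    return [[feature["mean"] * growth] * 2 for _ in range(2)]

def describe_progression(feature, weeks_ahead):
    """Stand-in for the second policy network: a list of disease descriptors."""
    projected = feature["mean"] * (1.0 + 0.05 * weeks_ahead)
    return [f"projected mean intensity: {projected:.2f}",
            f"time horizon: {weeks_ahead} week(s)"]

# The user input selects which policy network receives the feature output.
POLICY_NETWORKS = {
    "synthesized_image": synthesize_future_image,
    "descriptor_list": describe_progression,
}

def predict(image, user_selection, weeks_ahead):
    feature = extract_disease_feature(image)   # first neural network
    policy = POLICY_NETWORKS[user_selection]   # selected per user input
    return policy(feature, weeks_ahead)        # second (policy) neural network

if __name__ == "__main__":
    image = [[0.1, 0.2], [0.3, 0.4]]
    print(predict(image, "descriptor_list", weeks_ahead=4))
```

In a real implementation the stand-in functions would be trained models (e.g., a feature extractor and separately trained generative/descriptor heads), but the dispatch pattern, where user input chooses which policy network consumes the shared feature output, is the structure the claims describe.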

Description

CROSS-REFERENCE TO PRIOR APPLICATIONS

This application is the U.S. National Phase application under 35 U.S.C. § 371 of International Application No. PCT/EP2021/084493, filed on Dec. 7, 2021, which claims the benefit of U.S. Provisional Patent Application No. 63/122,558, filed on Dec. 8, 2020. These applications are hereby incorporated by reference herein.

TECHNICAL FIELD

The present disclosure pertains to systems and methods for diagnosing and predicting the progression of various medical conditions. Particular implementations include systems configured to identify a disease feature in a patient image and predict the future progression of the feature, with or without treatment, using at least one neural network communicatively coupled with a graphical user interface.

BACKGROUND

Early detection and diagnosis is a critical first step for determining and quickly administering the best mode of treatment for a variety of medical conditions. For example, the likelihood of survival for a cancer patient is much greater if the disease is diagnosed while still confined to its original organ. Survival rates decline significantly thereafter, as tumors quickly grow and metastasize. Despite significant advancements in medical imaging modalities, clinically relevant features imperative for early diagnosis are often missed or underestimated by clinicians during patient examination, even when such features are captured in at least one diagnostic image. This type of error, estimated to occur at a rate of 42%, significantly impedes the accuracy and reliability of diagnostic radiology. Features not clearly present at the time of the first image acquisition typically develop and become more noticeable during follow-up imaging sessions, but at that point the prognosis may be much worse. Improved technologies are therefore needed to identify imaged disease features earlier and more accurately than preexisting systems.
SUMMARY

The present disclosure describes systems and methods for more quickly and accurately diagnosing a patient with a medical condition and predicting the future progression of the condition in response to various treatments. Systems disclosed herein can include or be communicatively coupled with at least one image acquisition system configured to image a patient. Systems can also include at least one graphical user interface configured to display an image generated by the image acquisition system. The graphical user interface can also activate and display the output of one or more neural networks configured to receive and process the image in a manner specified via user input. Input received or obtained at the graphical user interface can include patient-specific characteristics and selectable treatment options. Output displayed on the user interface can include a synthesized image of a disease feature at a user-specified future time point, a probability of one or more clinical outcomes, and/or signs of disease progression. Training of the neural networks utilized to generate these outputs may be tailored to the objectives of the user and the information specific to the patient.

In accordance with some examples, a disease prediction system, which may be ultrasound-based, may include an image acquisition device configured to generate at least one image of a target region within a patient. The system can also include one or more processors in communication with the image acquisition device. The one or more processors can be configured to apply a first neural network to the image, the first neural network configured to extract a disease feature from the image to produce a disease feature output. The processors can also be configured to input the disease feature output to a second neural network, different from the first neural network, in response to a user input.
The processors can also be configured to apply the second neural network to the disease feature, the second neural network configured to generate a predicted outcome of the disease feature at a future time point. The system can also include a graphical user interface configured to receive the user input and display the predicted outcome generated by the second neural network. In some examples, the second neural network is selected from a plurality of neural networks. In some embodiments, each of the plurality of neural networks is configured to generate a unique representation of the predicted outcome. In some examples, the unique representation of the predicted outcome comprises a synthesized image of the disease feature, a probability of at least one clinical outcome, or a list of disease descriptors. In some embodiments, the user input comprises a selection of the synthesized image of the disease feature, the probability of a clinical outcome, or the list of future disease features. In some examples, the user input comprises a treatment option, patient-specific information, or both. In some embodiments, the disease feature comprises a tumor, a