US-12620484-B2 - Machine-learning techniques for oxygen therapy prediction using medical imaging data and clinical metadata

US12620484B2

Abstract

Apparatuses, systems, and techniques to train one or more neural networks based, at least in part, on medical imaging data and clinical metadata, or to perform inference using one or more neural networks trained as such. In at least one embodiment, one or more circuits train one or more neural networks to predict a treatment for a patient suspected or confirmed to have COVID-19 based, at least in part, on medical imaging data and clinical metadata.
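The two-stage prediction described above (an image-based probability, updated by fusing it with normalized clinical metadata through an activation function) can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the function names, the choice of a sigmoid activation, and z-score normalization of metadata are all assumptions for the sketch.

```python
import math

def sigmoid(x: float) -> float:
    """Logistic activation mapping a real-valued score to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def fused_therapy_probability(image_prob, metadata, meta_means, meta_stds,
                              weights, bias):
    """Update an image-based therapy probability with clinical metadata.

    image_prob:  predicted probability from the image branch of the network.
    metadata:    raw clinical values (e.g. lab findings) for one patient.
    meta_means/meta_stds: per-feature statistics for z-score normalization.
    weights/bias: learned fusion weights (one weight per concatenated feature).
    """
    # Normalize the clinical metadata features.
    normalized = [(v - m) / s for v, m, s in zip(metadata, meta_means, meta_stds)]
    # Concatenate the image-based probability with the normalized features.
    features = [image_prob] + normalized
    # Weighted sum followed by an activation function yields the updated probability.
    score = sum(w * f for w, f in zip(weights, features)) + bias
    return sigmoid(score)
```

With all weights and the bias at zero the output is the activation's midpoint (0.5); a positive weight on the image-based probability pushes the fused probability up as the image branch becomes more confident.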

Inventors

  • Wentao Zhu
  • Daguang Xu
  • Peiying Ruan
  • Dong Yang
  • Ziyue Xu
  • Holger Reinhard Roth

Assignees

  • NVIDIA CORPORATION

Dates

Publication Date: 2026-05-05
Application Date: 2020-08-24

Claims (20)

  1. One or more processors, comprising: circuitry to: receive medical imaging data representing one or more CT scan images; generate one or more segmented images, using one or more neural networks to identify areas of interest in the medical imaging data; extract one or more features from the one or more segmented images using a first portion of the one or more neural networks; calculate a predicted probability of an effectiveness of a therapy based on the one or more features; and calculate an updated probability of an effectiveness of a therapy, using one or more activation functions of one or more second portions of the one or more neural networks, based on a concatenation of the predicted probability and one or more normalized features extracted from clinical metadata.
  2. The one or more processors of claim 1, wherein the one or more neural networks are trained by at least: determining an aggregate image-based treatment probability based on image-based treatment probabilities determined for a plurality of images; normalizing the aggregate image-based treatment probability and the clinical metadata to obtain a plurality of input features that are to be used to train at least a portion of the one or more neural networks; and training at least one portion of one or more portions of the one or more neural networks to obtain a set of weights that indicates how impactful each feature is to determining the therapy.
  3. The one or more processors of claim 2, wherein at least one portion of one or more portions of the one or more neural networks is trained using logistic regression to generate an output of effectiveness of the therapy.
  4. The one or more processors of claim 3, wherein the output is a probability that the therapy should be administered to a patient.
  5. The one or more processors of claim 2, wherein a pre-trained classification network is used to calculate the predicted probability for the segmented images.
  6. The one or more processors of claim 2, wherein the medical imaging data comprises a computed tomography (CT) scan and the plurality of images comprise a plurality of slices of the CT scan.
  7. The one or more processors of claim 1, wherein the therapy is a treatment of COVID-19.
  8. A system comprising: one or more processors to: receive medical imaging data representing one or more CT scan images; generate one or more segmented images, using one or more neural networks to identify areas of interest in the medical imaging data; extract one or more features from the one or more segmented images using a first portion of the one or more neural networks; calculate a predicted probability of an effectiveness of a therapy based on the one or more features; and calculate an updated probability of an effectiveness of a therapy, using one or more activation functions of one or more second portions of the one or more neural networks, based on a concatenation of the predicted probability and one or more normalized features extracted from clinical metadata.
  9. The system of claim 8, wherein the one or more neural networks are trained by at least: determining an image-based treatment probability of a patient based on one or more chest computed tomography (CT) images; normalizing the image-based treatment probability and the clinical metadata to obtain a plurality of input features that are to be used to train at least a portion of the one or more neural networks; and training at least one portion of one or more portions of the one or more neural networks to obtain a set of weights that indicates how impactful each feature is to determining the therapy.
  10. The system of claim 8, wherein at least a portion of the clinical metadata is collected from a patient upon admission to a health care facility.
  11. The system of claim 8, wherein the clinical metadata comprises a plurality of laboratory findings.
  12. The system of claim 11, wherein the plurality of laboratory findings include measurements of a patient's levels of lactate dehydrogenase and C-reactive protein.
  13. The system of claim 8, wherein a patient is diagnosed with a type of coronavirus-based infectious disease.
  14. A non-transitory machine-readable medium having stored thereon a set of instructions, which if performed by one or more processors, cause the one or more processors to at least: receive medical imaging data representing one or more CT scan images; generate one or more segmented images, using one or more neural networks to identify areas of interest in the medical imaging data; extract one or more features from the one or more segmented images using a first portion of the one or more neural networks; calculate a predicted probability of an effectiveness of a therapy based on the one or more features; and calculate an updated probability of an effectiveness of a therapy, using one or more activation functions of one or more second portions of the one or more neural networks, based on a concatenation of the predicted probability and one or more normalized features extracted from clinical metadata.
  15. The non-transitory machine-readable medium of claim 14, wherein the one or more neural networks are to be trained by at least: determining an image-based treatment probability based on the one or more features from the one or more segmented images; and training at least one portion of one or more portions of the one or more neural networks to obtain a set of weights that indicates how impactful the segmented images and the clinical metadata are to determining a probability of an effectiveness of the therapy.
  16. The non-transitory machine-readable medium of claim 15, wherein a deep learning framework is used to calculate the predicted probability of the effectiveness of the therapy based on the one or more features.
  17. The non-transitory machine-readable medium of claim 16, wherein the deep learning framework utilizes an EfficientNet-based convolutional neural network to extract features which are used to determine the image-based treatment probability of the segmented images.
  18. The non-transitory machine-readable medium of claim 15, wherein the one or more neural networks use a multi-modal deep learning framework to learn the set of weights.
  19. The non-transitory machine-readable medium of claim 16, wherein the segmented images and the clinical metadata are used to identify a plurality of normalized input features to the deep learning framework that share a common mean and variance.
  20. A processor comprising: one or more circuits to train one or more neural networks to: receive medical imaging data representing one or more CT scan images; generate one or more segmented images, using the one or more neural networks to identify areas of interest in the medical imaging data; extract one or more features from the one or more segmented images using a first portion of the one or more neural networks; calculate a predicted probability of an effectiveness of a therapy based on the one or more features; and calculate an updated probability of an effectiveness of a therapy, using one or more activation functions of one or more second portions of the one or more neural networks, based on a concatenation of the predicted probability and one or more normalized features extracted from clinical metadata.
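Claims 2, 6, and 19 describe aggregating per-slice treatment probabilities from a CT scan and normalizing input features so they share a common mean and variance. A minimal sketch follows; the claims do not fix the aggregation rule or the normalization scheme, so a plain mean and a standard z-score are assumed here purely for illustration.

```python
import statistics

def aggregate_slice_probabilities(slice_probs):
    """Aggregate per-slice treatment probabilities from one CT scan.

    A simple mean over the slices is assumed; other aggregations
    (e.g. max or a weighted mean) would fit the claim language equally well.
    """
    return sum(slice_probs) / len(slice_probs)

def zscore_normalize(column):
    """Rescale one feature column to zero mean and unit variance.

    Applying this to each input feature (aggregate image probability and
    each clinical-metadata field) gives features that share a common mean
    and variance, as recited in claim 19.
    """
    mean = statistics.fmean(column)
    std = statistics.pstdev(column)
    return [(v - mean) / std for v in column]
```

In a training setup, the normalization statistics would be computed once over the training cohort and reused at inference time, so that a new patient's features are scaled consistently with the features the fusion weights were trained on.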

Description

TECHNICAL FIELD

At least one embodiment pertains to machine-learning techniques for oxygen therapy prediction in patients that have, or are suspected to have, COVID-19 or various other diseases. For example, at least one embodiment pertains to one or more neural networks trained using computed tomography (CT) images and clinical metadata to predict disease progression of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) or other coronaviruses in patients.

BACKGROUND

Predicting disease progression of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) or other infectious diseases in patients is difficult. Machine learning techniques can be utilized to better predict disease progression.

BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 illustrates a computing environment in which a treatment for a patient is determined using one or more neural networks trained based, at least in part, on medical imaging data and clinical metadata, according to at least one embodiment;
FIG. 2 illustrates an example of a deep learning pipeline using medical imaging data, according to at least one embodiment;
FIG. 3 shows an illustrative example of a process to train one or more neural networks using medical imaging data and clinical metadata, in accordance with at least one embodiment;
FIG. 4 shows an illustrative example of a process to determine a treatment for a subject using one or more neural networks trained based, at least in part, on medical imaging data and clinical metadata, in accordance with at least one embodiment;
FIG. 5A illustrates inference and/or training logic, according to at least one embodiment;
FIG. 5B illustrates inference and/or training logic, according to at least one embodiment;
FIG. 6 illustrates training and deployment of a neural network, according to at least one embodiment;
FIG. 7 illustrates an example data center system, according to at least one embodiment;
FIG. 8A illustrates an example of an autonomous vehicle, according to at least one embodiment;
FIG. 8B illustrates an example of camera locations and fields of view for the autonomous vehicle of FIG. 8A, according to at least one embodiment;
FIG. 8C is a block diagram illustrating an example system architecture for the autonomous vehicle of FIG. 8A, according to at least one embodiment;
FIG. 8D is a diagram illustrating a system for communication between cloud-based server(s) and the autonomous vehicle of FIG. 8A, according to at least one embodiment;
FIG. 9 is a block diagram illustrating a computer system, according to at least one embodiment;
FIG. 10 is a block diagram illustrating a computer system, according to at least one embodiment;
FIG. 11 illustrates a computer system, according to at least one embodiment;
FIG. 12 illustrates a computer system, according to at least one embodiment;
FIG. 13A illustrates a computer system, according to at least one embodiment;
FIG. 13B illustrates a computer system, according to at least one embodiment;
FIG. 13C illustrates a computer system, according to at least one embodiment;
FIG. 13D illustrates a computer system, according to at least one embodiment;
FIGS. 13E and 13F illustrate a shared programming model, according to at least one embodiment;
FIG. 14 illustrates exemplary integrated circuits and associated graphics processors, according to at least one embodiment;
FIGS. 15A and 15B illustrate exemplary integrated circuits and associated graphics processors, according to at least one embodiment;
FIGS. 16A and 16B illustrate additional exemplary graphics processor logic, according to at least one embodiment;
FIG. 17 illustrates a computer system, according to at least one embodiment;
FIG. 18A illustrates a parallel processor, according to at least one embodiment;
FIG. 18B illustrates a partition unit, according to at least one embodiment;
FIG. 18C illustrates a processing cluster, according to at least one embodiment;
FIG. 18D illustrates a graphics multiprocessor, according to at least one embodiment;
FIG. 19 illustrates a multi-graphics processing unit (GPU) system, according to at least one embodiment;
FIG. 20 illustrates a graphics processor, according to at least one embodiment;
FIG. 21 is a block diagram illustrating a processor micro-architecture for a processor, according to at least one embodiment;
FIG. 22 illustrates a deep learning application processor, according to at least one embodiment;
FIG. 23 is a block diagram illustrating an example neuromorphic processor, according to at least one embodiment;
FIG. 24 illustrates at least portions of a graphics processor, according to one or more embodiments;
FIG. 25 illustrates at least portions of a graphics processor, according to one or more embodiments;
FIG. 26 illustrates at least portions of a graphics processor, according to one or more embodiments;
FIG. 27 is a block diagram of a graphics processing engine of a graphics processor, in accordance with at least one embodiment;
FIG. 28 is a block diagram of at least portions of a g