US-12618965-B2 - Medical ultrasound imaging optimization using a machine-learned network
Abstract
Machine learning network trained to tune settings and optimize images. In accordance with one aspect, a method is provided for image optimization with a medical ultrasound scanner. A medical ultrasound scanner images a patient using first settings. A first image from the imaging using the first settings and patient information for the patient are input to a machine-learned network. The machine-learned network outputs second settings in response to the inputting of the first image and the patient information. The medical ultrasound scanner re-images the patient using the second settings. A second image from the re-imaging is displayed.
Inventors
- Miroslav Gajdos
- Bimba Rao
Assignees
- SIEMENS MEDICAL SOLUTIONS USA, INC.
Dates
- Publication Date
- 20260505
- Application Date
- 20221010
Claims (16)
- 1. A method for image optimization with a medical ultrasound scanner, the method comprising: imaging, by a medical ultrasound scanner, a patient using first settings; inputting a first image from the imaging using the first settings and patient information for the patient to a machine-learned network, wherein the machine-learned network is trained based on training image data having ground truth labels inferred by a computer based at least in part on contextual data produced from user workflow in patient examination, wherein the contextual data is indicative of acceptance or rejection of the training image data for diagnostic use during the user workflow; wherein the computer infers a positive ground truth label in response to the contextual data being indicative of acceptance of the training image data for diagnostic use when the contextual data indicates storage or capture of the training image data in a patient medical record, and a negative ground truth label in response to the contextual data being indicative of rejection of the training image data for diagnostic use when the contextual data indicates failure to save the training image data in a patient medical record or overwriting of the training image data; outputting, by the machine-learned network, second settings in response to the inputting of the first image and the patient information for facilitating efficient patient-specific ultrasound image optimization by automatic tuning of imaging parameters specific to the patient's situation; re-imaging, by the medical ultrasound scanner, the patient using the second settings; and displaying a second image from the re-imaging for providing an improved ultrasound image suited for diagnosis.
- 2. The method of claim 1 wherein imaging comprises imaging by the medical ultrasound scanner operated by a user, wherein inputting comprises inputting user information for the user, and wherein outputting comprises outputting in response to the inputting of the user information.
- 3. The method of claim 1 wherein inputting comprises inputting the first image, the patient information and a location of the medical ultrasound scanner, and wherein outputting comprises outputting in response to the inputting of the location.
- 4. The method of claim 1 further comprising notifying a user of the medical ultrasound scanner of the second settings, wherein the user triggers the re-imaging using the second settings by a response to the notification.
- 5. The method of claim 1 wherein the contextual data comprises scanner log data.
- 6. The method of claim 1 wherein inputting comprises inputting to the machine-learned network, the machine-learned network having been trained based on images for other patients and corresponding settings labelled as negative examples when not stored for the other patients and images for the other patients and corresponding settings labelled as positive examples when stored for the other patients.
- 7. The method of claim 1 wherein the second settings comprise transmit frequency, receive frequency, scan line format, scan line density, pulse repetition frequency, overall gain, depth gain, dynamic range, focal depth, scan depth, focal position, filter kernel, spatial filter parameters, temporal filter parameters, noise thresholds, motion thresholds, color mapping, three-dimensional rendering parameters, or a combination thereof.
- 8. An imaging system comprising: an ultrasound scanner configurable based on first and second values of imaging parameters; a processor configured to determine the second values of the imaging parameters with a machine-learned network in response to input of a first image of a patient, the first values of the imaging parameters used for the first image and patient information for the patient for facilitating efficient patient-specific ultrasound image optimization by automatic tuning of imaging parameters specific to the patient's situation, wherein the machine-learned network is trained based on training image data having ground truth labels inferred by the processor based at least in part on contextual data produced from user workflow in patient examination, wherein the contextual data is indicative of acceptance or rejection of the training image data for diagnostic use during the user workflow; wherein the processor infers a positive ground truth label in response to the contextual data being indicative of acceptance of the training image data for diagnostic use when the contextual data indicates storage or capture of the training image data in a patient medical record, and a negative ground truth label in response to the contextual data being indicative of rejection of the training image data for diagnostic use when the contextual data indicates failure to save the training image data in a patient medical record or overwriting of the training image data; and a display configured to display a second image of the patient generated by the ultrasound scanner configured by the second values for providing an improved ultrasound image suited for diagnosis.
- 9. The imaging system of claim 8 wherein the processor infers the ground truth labels based on scanner log data.
- 10. The imaging system of claim 8 wherein the contextual data is indicative of rejection of the training image data for diagnostic use when the contextual data indicates a repeated imaging.
- 11. The imaging system of claim 8 wherein the contextual data is indicative of acceptance of the training image data for diagnostic use when the contextual data indicates storage of the training image data in a patient medical record.
- 12. The imaging system of claim 8 wherein the processor is configured to determine the second values in response to the input of the first image, the first values, the patient information and a location of the ultrasound scanner.
- 13. The imaging system of claim 12 wherein the patient information comprises age, gender, role, or a combination thereof.
- 14. The imaging system of claim 8 wherein the processor is configured to determine the second values in response to the input of the first image, the first values, the patient information and user information for a user of the ultrasound scanner.
- 15. The imaging system of claim 8 wherein the machine-learned network comprises an artificial neural network.
- 16. One or more non-transitory computer-readable media embodying instructions executable by a machine to perform operations for image optimization, the operations comprising: receiving, from a medical ultrasound scanner, a first image of a patient imaged using first settings; inputting the first image and patient information for the patient to a machine-learned network, wherein the machine-learned network is trained based on training image data having ground truth labels inferred by a computer based at least in part on contextual data produced from user workflow in patient examination, wherein the contextual data is indicative of acceptance or rejection of the training image data for diagnostic use during the user workflow; wherein the computer infers a positive ground truth label in response to the contextual data being indicative of acceptance of the training image data for diagnostic use when the contextual data indicates storage or capture of the training image data in a patient medical record, and a negative ground truth label in response to the contextual data being indicative of rejection of the training image data for diagnostic use when the contextual data indicates failure to save the training image data in a patient medical record or overwriting of the training image data; generating, by the machine-learned network, second settings in response to the inputting of the first image and the patient information for facilitating efficient patient-specific ultrasound image optimization by automatic tuning of imaging parameters specific to the patient's situation; triggering re-imaging, by the medical ultrasound scanner, of the patient using the second settings; and displaying a second image from the re-imaging for providing an improved ultrasound image suited for diagnosis.
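The label-inference rule recited in claims 1, 8, and 16 can be sketched as a small Python function. This is a minimal illustration only; the dict-based event schema and field names (`stored_in_record`, `captured_in_record`, `overwritten`, `save_failed`) are hypothetical and not part of the patent disclosure:

```python
def infer_ground_truth_label(event):
    """Infer a training label from contextual workflow data (hypothetical schema).

    Positive (1): the image was stored or captured in the patient medical record.
    Negative (0): the image failed to be saved, or was overwritten.
    None: the workflow context is ambiguous, so no label is inferred.
    """
    if event.get("stored_in_record") or event.get("captured_in_record"):
        return 1  # accepted for diagnostic use
    if event.get("overwritten") or event.get("save_failed"):
        return 0  # rejected for diagnostic use
    return None  # no acceptance/rejection signal in the workflow context
```

Claim 10's variant, in which repeated imaging also signals rejection, could be handled by adding a `repeated_imaging` condition to the negative branch.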
Description
CROSS-REFERENCE TO RELATED APPLICATION
This application is a division of U.S. application Ser. No. 15/984,502 filed on May 21, 2018, the contents of which are herein incorporated by reference.
BACKGROUND
The present embodiments relate to medical ultrasound imaging. An increasing and aging patient population is creating a demand for improved healthcare efficiency. This has led to a desire for ultrasound imaging workflow improvement from the standpoint of increased patient throughput, reduced examination times, less user stress from repetitive motion, and better standardization of examinations. Part of the ultrasound workflow is tuning the imaging parameters to arrive at the image best suited for diagnosis of each patient. This tuning is a time-consuming and challenging task. It is difficult to find a “one size fits all” system setting that can produce satisfactory diagnostic images across different patient types, anatomies, and pathologies. It is also difficult to find one setting that meets different user preferences across various global regions. A user having to tune the settings for every patient examination leads to increased examination time, inefficient workflow, operator fatigue, and even reduced diagnostic confidence. In current-day products, the problem is addressed by creating factory presets for different patient types and applications. These factory presets work to a certain extent but cannot cover the large variety of patient types and do not address user preferences. Some knowledge-based techniques use artificial intelligence or other methods to segment the image. The segmentation may then be used to set imaging system parameters such as frequency, focus, or depth, but the segmentation is focused on the anatomy. This approach requires expert review to create training data and addresses neither other patient variability nor user preferences.
SUMMARY
By way of introduction, the framework described below includes methods, systems, instructions, and computer-readable media for machine learning to tune settings and optimize images using a machine-learned network. In accordance with one aspect, a method is provided for image optimization with a medical ultrasound scanner. A medical ultrasound scanner images a patient using first settings. A first image from the imaging using the first settings and patient information for the patient are input to a machine-learned network. The machine-learned network outputs second settings in response to the inputting of the first image and the patient information. The medical ultrasound scanner re-images the patient using the second settings. A second image from the re-imaging is displayed.
BRIEF DESCRIPTION OF THE DRAWINGS
The components and the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. Moreover, in the figures, like reference numerals designate corresponding parts throughout the different views.
FIG. 1 is a flow chart diagram of one embodiment of a method for machine learning to tune and application of a machine-learned network to tune in medical ultrasound imaging;
FIG. 2 illustrates an example workflow for collecting training data from an on-going patient examination;
FIG. 3 illustrates an example machine learning network architecture;
FIG. 4 illustrates an example workflow for using a machine-learned network to provide settings for ultrasound imaging; and
FIG. 5 is a block diagram of one embodiment of a system for tuned ultrasound imaging.
DETAILED DESCRIPTION OF THE DRAWINGS AND PRESENTLY PREFERRED EMBODIMENTS
Patient-specific ultrasound image optimization is provided. Artificial intelligence (AI)-based algorithms enable an ultrasound scanner to “learn from experience” by analyzing data produced as part of patient imaging to determine ground truth.
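The image-then-retune loop described in the summary can be sketched as follows. The `scanner` and `network` interfaces (`scanner.image`, `network.predict`, `scanner.display`) are hypothetical stand-ins for illustration, not APIs described in the patent:

```python
def optimize_and_reimage(scanner, network, patient_info, first_settings):
    """One pass of the method: image, infer tuned settings, re-image, display.

    Sketch under assumed interfaces: `scanner.image`, `network.predict`,
    and `scanner.display` are illustrative, not from the patent disclosure.
    """
    first_image = scanner.image(first_settings)  # imaging with first settings
    # Machine-learned tuning: second settings from first image + patient info
    second_settings = network.predict(first_image, patient_info)
    second_image = scanner.image(second_settings)  # re-imaging with second settings
    scanner.display(second_image)  # display the image suited for diagnosis
    return second_image
```

Per claim 7, the second settings returned by the network could include transmit/receive frequency, gain, depth, focal position, filter parameters, and the like.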
The capabilities of AI are used to automate tuning of imaging parameters and so improve ultrasound imaging workflow. The tuning may be specific to the patient situation, location of imaging, and/or user by applying patient information, location, and/or prior user selection with the trained AI. Imaging parameters are automatically tuned for different patients, anatomies, user preferences, regional preferences, types of view, and/or pathology situations to provide an ultrasound image suited for diagnosis. Imaging may be customized to every patient as well as user, leading to a significant increase in diagnostic confidence and customer satisfaction. Examination time may be reduced, and patient throughput may be increased. To gather the training data, low- and high-quality images and corresponding settings are derived from the user workflows in examining patients. There is no need to manually label the images by an expert, as is the case with most supervised learning AI algorithms. This may result in savings in development time and effort. Artificial neural networks (ANN) are used in function approximation