US-20260127861-A1 - METHODS OF AND SYSTEMS FOR TRAINING MACHINE LEARNING PROCESSES USING MEDICAL IMAGERY

US 20260127861 A1

Abstract

A computing device configured to receive at least a radiological image, encode, using an encoding module, a low-dimensional image as a function of the radiological image, wherein the encoding module is configured to preprocess the radiological image, generate a segmented representation of the radiological image by applying edge detection to the preprocessed radiological image, generate a reduced contour set by performing contour simplification and dimensionality reduction on the segmented representation, and generate a color-coded contour map by color coding one or more contours of the reduced contour set, and input the low-dimensional image into at least a machine learning process as training data.

Inventors

  • Arjun PURANIK

Assignees

  • ANUMANA, INC.

Dates

Publication Date
2026-05-07
Application Date
2025-11-04

Claims (20)

  1. A system for encoding a low-dimensional image, comprising: at least a computing device configured to: receive at least a radiological image; encode, using an encoding module, a low-dimensional image as a function of the radiological image, wherein the encoding module is configured to: preprocess the radiological image; generate a segmented representation of the radiological image by applying edge detection to the preprocessed radiological image; generate a reduced contour set by performing contour simplification and dimensionality reduction on the segmented representation; and generate a color-coded contour map by color coding one or more contours of the reduced contour set; and input the low-dimensional image into at least a machine learning process as training data.
  2. The system of claim 1, wherein preprocessing the radiological image comprises performing: noise reduction by applying a median filter to replace an intensity value at a pixel location with a median intensity value taken from a defined neighborhood around that pixel location; and contrast enhancement including histogram equalization to improve visual salience of thin anatomical features that appear faint in the radiological image.
  3. The system of claim 1, wherein the encoding module is further configured to validate the low-dimensional image using an encoder-decoder model to reconstruct the radiological image from the low-dimensional image, wherein differences between the reconstructed radiological image and the received at least a radiological image are used to score adequacy of the low-dimensional image.
  4. The system of claim 1, wherein generating the segmented representation comprises implementing a segmentation module of the encoding module configured to perform contour detection and extraction based on detected edges of the preprocessed radiological image.
  5. The system of claim 1, wherein the machine learning process comprises a generative machine learning process.
  6. The system of claim 5, wherein the generative machine learning process comprises a generative predictive transformer configured to predict at least one predicted low-dimensional image as a function of at least one low-dimensional image.
  7. The system of claim 1, wherein the radiological image comprises at least an ultrasound image.
  8. The system of claim 1, wherein the radiological image comprises one or more of a computed tomography image, a magnetic resonance imaging image, an X-ray image, a fluoroscopy image, and a photoacoustic image.
  9. The system of claim 1, wherein contour simplification comprises modifying one or more contours in the radiological image so that each contour is represented using fewer points while preserving clinically relevant geometric structure.
  10. The system of claim 1, wherein dimensionality reduction comprises transforming data describing one or more contours from a higher-dimensional coordinate description into a lower-dimensional description that preserves salient geometric relationships.
  11. A method of encoding a low-dimensional image, the method comprising: receiving, by a computing device, at least a radiological image; encoding, by the computing device, using an encoding module, a low-dimensional image as a function of the radiological image by: preprocessing the radiological image; generating a segmented representation of the radiological image by applying edge detection to the preprocessed radiological image; generating a reduced contour set by performing contour simplification and dimensionality reduction on the segmented representation; and generating a color-coded contour map by color coding one or more contours of the reduced contour set; and inputting, by the computing device, the low-dimensional image into at least a machine learning process as training data.
  12. The method of claim 11, wherein preprocessing the radiological image comprises performing: noise reduction by applying a median filter to replace an intensity value at a pixel location with a median intensity value taken from a defined neighborhood around that pixel location; and contrast enhancement including histogram equalization to improve visual salience of thin anatomical features that appear faint in the radiological image.
  13. The method of claim 11, wherein the encoding module is further configured to validate the low-dimensional image using an encoder-decoder model to reconstruct the radiological image from the low-dimensional image, wherein differences between the reconstructed radiological image and the received at least a radiological image are used to score adequacy of the low-dimensional image.
  14. The method of claim 11, wherein generating the segmented representation comprises implementing a segmentation module of the encoding module configured to perform contour detection and extraction based on detected edges of the preprocessed radiological image.
  15. The method of claim 11, wherein the machine learning process comprises a generative machine learning process.
  16. The method of claim 15, wherein the generative machine learning process comprises a generative predictive transformer configured to predict at least one predicted low-dimensional image as a function of at least one low-dimensional image.
  17. The method of claim 11, wherein the radiological image comprises at least an ultrasound image.
  18. The method of claim 11, wherein the radiological image comprises one or more of a computed tomography image, a magnetic resonance imaging image, an X-ray image, a fluoroscopy image, and a photoacoustic image.
  19. The method of claim 11, wherein contour simplification comprises modifying one or more contours in the radiological image so that each contour is represented using fewer points while preserving clinically relevant geometric structure.
  20. The method of claim 11, wherein dimensionality reduction comprises transforming data describing one or more contours from a higher-dimensional coordinate description into a lower-dimensional description that preserves salient geometric relationships.
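The preprocessing recited in claims 2 and 12 (median filtering for noise reduction, then histogram equalization for contrast enhancement) can be sketched as below. This is an illustrative NumPy sketch, not code from the application; the function names, the 3x3 neighborhood size, and the toy image are assumptions made for demonstration.

```python
import numpy as np

def median_filter(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Replace each pixel with the median of its size x size neighborhood."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out

def equalize_histogram(image: np.ndarray) -> np.ndarray:
    """Spread 8-bit pixel intensities over the full 0-255 range."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()
    if cdf[-1] == cdf_min:          # uniform image: nothing to equalize
        return image.copy()
    # Standard histogram-equalization lookup table.
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[image]

# Toy 4x4 "radiological image" with one salt-noise pixel.
img = np.array([[10, 10, 10, 10],
                [10, 255, 10, 10],
                [10, 10, 10, 10],
                [10, 10, 10, 10]], dtype=np.uint8)
denoised = median_filter(img)
enhanced = equalize_histogram(denoised)
print(denoised[1, 1])  # the outlier is replaced by the neighborhood median: 10
```

The median filter removes the isolated bright pixel without blurring edges the way a mean filter would, which matches the claim's stated goal of preserving thin, faint anatomical features.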
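Claims 1, 4, 11, and 14 generate the segmented representation by applying edge detection to the preprocessed image. The claims do not name an edge operator; the sketch below uses Sobel gradients with a fixed threshold purely for illustration, and the kernel choice, threshold value, and test image are assumptions.

```python
import numpy as np

# 3x3 Sobel kernels for horizontal and vertical intensity gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edge_map(image: np.ndarray, threshold: float = 1.0) -> np.ndarray:
    """Binary edge map: gradient magnitude above `threshold`."""
    img = image.astype(float)
    h, w = img.shape
    mag = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            gx = (SOBEL_X * patch).sum()
            gy = (SOBEL_Y * patch).sum()
            mag[y, x] = np.hypot(gx, gy)
    return mag > threshold

# A vertical intensity step yields a vertical band of edge pixels.
step = np.zeros((5, 6))
step[:, 3:] = 10.0
edges = edge_map(step, threshold=1.0)
```

Contour extraction would then trace connected edge pixels into ordered point lists; that tracing step is omitted here for brevity.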
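Claims 9 and 19 describe contour simplification: representing each contour with fewer points while preserving its geometry. The claims do not specify an algorithm; the Ramer-Douglas-Peucker method shown below is a common choice and is used here only as an illustrative stand-in, with hypothetical function names.

```python
import math

def _point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    if a == b:
        return math.dist(p, a)
    (x0, y0), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.dist(a, b)

def simplify_contour(points, tolerance):
    """Recursively drop points closer than `tolerance` to the chord."""
    if len(points) < 3:
        return list(points)
    dists = [_point_line_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > tolerance:
        # Keep the farthest point and simplify each half around it.
        left = simplify_contour(points[:idx + 1], tolerance)
        right = simplify_contour(points[idx:], tolerance)
        return left[:-1] + right
    return [points[0], points[-1]]

# A nearly straight contour collapses to its two endpoints.
contour = [(0, 0), (1, 0.01), (2, -0.01), (3, 0)]
print(simplify_contour(contour, tolerance=0.1))  # [(0, 0), (3, 0)]
```

The tolerance controls the trade-off the claims describe: a larger tolerance gives fewer points, while a smaller one preserves more of the clinically relevant geometric structure.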
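Claims 10 and 20 describe dimensionality reduction of contour data into a lower-dimensional description that preserves geometric relationships. One standard way to realize this, shown as a sketch below, is principal component analysis on the contour coordinates; PCA is an assumption here, not a technique named by the application.

```python
import numpy as np

def reduce_contour(points: np.ndarray, n_components: int = 1) -> np.ndarray:
    """Project 2-D contour points onto their top principal components."""
    centered = points - points.mean(axis=0)
    # Eigen-decomposition of the 2x2 coordinate covariance matrix.
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns ascending eigenvalues; take the largest components.
    top = eigvecs[:, ::-1][:, :n_components]
    return centered @ top

# Points along a diagonal line: two coordinates reduce to one per point
# while relative spacing along the contour is preserved.
pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
reduced = reduce_contour(pts)
print(reduced.shape)  # (4, 1)
```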
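Claims 3 and 13 validate the low-dimensional image by reconstructing the original with an encoder-decoder model and scoring adequacy from the differences. The sketch below shows only the scoring step, using a mean-squared-error score; the learned encoder-decoder itself is out of scope, so the reconstruction here is a hand-made stand-in and the score formula is an assumption.

```python
import numpy as np

def reconstruction_score(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """MSE-based adequacy score in [0, 1]; 1.0 means a perfect reconstruction."""
    mse = float(np.mean((original.astype(float) - reconstructed.astype(float)) ** 2))
    max_mse = 255.0 ** 2  # worst case for 8-bit images
    return 1.0 - mse / max_mse

# Stand-in "reconstruction": the original with one slightly wrong pixel.
orig = np.full((4, 4), 100, dtype=np.uint8)
recon = orig.copy()
recon[0, 0] = 110
print(round(reconstruction_score(orig, recon), 4))  # 0.9999
```

A low score would indicate that the contour encoding discarded too much information to serve as a valid training representation.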

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of priority of U.S. Provisional Patent Application Ser. No. 63/715,700, filed on Nov. 4, 2024, and titled "METHODS AND SYSTEMS FOR TRAINING MACHINE LEARNING PROCESSES USING MEDICAL IMAGERY," which is incorporated by reference herein in its entirety.

FIELD OF THE INVENTION

The present invention generally relates to the field of medical imagery. In particular, the present invention is directed to methods and systems for training machine learning processes using medical imagery.

BACKGROUND

Medical radiological imagery has existed for decades, and hospitals have access to these data. Machine learning using these data would be advantageous.

SUMMARY OF THE DISCLOSURE

In one aspect, a system includes at least a computing device configured to receive at least a radiological image; encode, using an encoding module, a low-dimensional image as a function of the radiological image, wherein the encoding module is configured to preprocess the radiological image, generate a segmented representation of the radiological image by applying edge detection to the preprocessed radiological image, generate a reduced contour set by performing contour simplification and dimensionality reduction on the segmented representation, and generate a color-coded contour map by color coding one or more contours of the reduced contour set; and input the low-dimensional image into at least a machine learning process as training data.

In another aspect, a method includes using at least a computing device to receive at least a radiological image; encode, using an encoding module, a low-dimensional image as a function of the radiological image, wherein the encoding module is configured to preprocess the radiological image, generate a segmented representation of the radiological image by applying edge detection to the preprocessed radiological image, generate a reduced contour set by performing contour simplification and dimensionality reduction on the segmented representation, and generate a color-coded contour map by color coding one or more contours of the reduced contour set; and input the low-dimensional image into at least a machine learning process as training data.

These and other aspects and features of non-limiting embodiments of the present invention will become apparent to those skilled in the art upon review of the following description of specific non-limiting embodiments of the invention in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

For the purpose of illustrating the invention, the drawings show aspects of one or more embodiments of the invention. However, it should be understood that the present invention is not limited to the precise arrangements and instrumentalities shown in the drawings, wherein:

FIG. 1 is a block diagram showing a system for training of machine learning processes using medical imagery;
FIG. 2 illustrates an exemplary radiological image;
FIG. 3 illustrates an exemplary low-dimensional image;
FIG. 4 illustrates exemplary machine learning processes according to some embodiments;
FIG. 5A represents an exemplary neural network;
FIG. 5B illustrates an exemplary machine learning process;
FIG. 6A is an exemplary illustration of a graphical user interface displaying image data;
FIG. 6B is an exemplary illustration of a graphical user interface displaying image data including a target structure;
FIG. 7 is a flow diagram showing a method of training of machine learning processes using medical imagery; and
FIG. 8 is a block diagram of a computing system that can be used to implement any one or more of the methodologies disclosed herein and any one or more portions thereof.

The drawings are not necessarily to scale and may be illustrated by phantom lines, diagrammatic representations and fragmentary views. In certain instances, details that are not necessary for an understanding of the embodiments or that render other details difficult to perceive may have been omitted.

DETAILED DESCRIPTION

In some embodiments, aspects relate to contour-based encoding with limited color and complexity. In some cases, each radiological image (e.g., a TEE frame and, by extension, the CT mesh-derived frames) is represented with contours (lines or curves) on a white background. In some cases, by limiting these contours to, for example, 20 colors, and keeping the curves relatively simple, the dimensionality of the resulting image data is drastically reduced. In some versions, this reduction facilitates faster, more efficient analysis while maintaining essential structural information required for self-supervised learning (SSL).

In some embodiments, aspects relate to a low-dimensional space for SSL training. In some embodiments, low-dimensional images, e.g., the contours on a white background, define a space that is much simpler and lower-dimensional than typical medical images, yet specific enough to be a valid representation of relevant
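The limited-palette color coding described above (contours drawn on a white background using, for example, 20 colors) can be sketched as follows. This is an illustrative NumPy sketch only; the palette values, the cycling assignment, and the helper names are assumptions, not details from the application.

```python
import numpy as np

# A fixed 20-color palette; the specific RGB values are arbitrary here.
PALETTE = [((i * 97) % 256, (i * 53) % 256, (i * 151) % 256) for i in range(20)]

def render_contour_map(contours, shape):
    """Rasterize contour point lists onto a white RGB canvas, one color each."""
    canvas = np.full((*shape, 3), 255, dtype=np.uint8)  # white background
    for idx, contour in enumerate(contours):
        # Cycle through the palette when there are more contours than colors.
        color = PALETTE[idx % len(PALETTE)]
        for (y, x) in contour:
            canvas[y, x] = color
    return canvas

# Two toy contours on a 4x4 canvas.
contours = [[(0, 0), (0, 1)], [(2, 2), (2, 3)]]
img = render_contour_map(contours, (4, 4))
print(img.shape)  # (4, 4, 3)
```

Because every pixel is either white or one of at most 20 colors, the resulting map carries far less information than a full grayscale or RGB frame, which is the dimensionality reduction the passage above describes.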