
US-12620090-B2 - Analysis of fundus autofluorescence images

US12620090B2

Abstract

Systems and methods herein provide analysis of fundus autofluorescence (FAF) images. A set of FAF images are received, each FAF image in the set of FAF images containing at least a portion of an eye of a patient. Each FAF image in the set of FAF images is processed by generating a set of features based on the FAF image, applying the set of features to a deep learning model to determine a gradeability status of the FAF image, and responsive to determining the FAF image is gradable, applying the set of features to a machine-learned model to determine a quality score for the FAF image. Responsive to processing each FAF image in the set of FAF images, a highest quality gradable FAF image of the patient is determined, where the highest quality gradable FAF image is the FAF image having the highest quality score.
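The abstract describes a two-stage pipeline: extract features from each FAF image, gate on gradeability with a deep learning model, score the gradable images with a second model, and keep the highest-scoring image. A minimal sketch of that selection flow, in which the feature extractor and both models are hypothetical stand-ins (the patent does not specify their architectures):

```python
# Illustrative sketch of the two-stage FAF selection described in the abstract.
# `featurize`, `gradeability_model`, and `quality_model` are hypothetical
# stand-ins supplied by the caller, not implementations from the patent.
from dataclasses import dataclass
from typing import Callable, Optional, Sequence, Tuple


@dataclass
class FAFResult:
    image_id: str
    gradable: bool
    quality_score: Optional[float]  # None when the image is non-gradable


def select_best_faf(
    images: Sequence[Tuple[str, object]],  # (image_id, pixel_data) pairs
    featurize: Callable,                   # image -> feature vector
    gradeability_model: Callable,          # features -> bool (gradable?)
    quality_model: Callable,               # features -> float (quality score)
):
    """Return (best_gradable_result, all_results) for a patient's image set."""
    results = []
    for image_id, pixels in images:
        features = featurize(pixels)
        gradable = gradeability_model(features)
        # Only gradable images are passed to the quality model.
        score = quality_model(features) if gradable else None
        results.append(FAFResult(image_id, gradable, score))
    gradable_results = [r for r in results if r.gradable]
    best = max(gradable_results, key=lambda r: r.quality_score, default=None)
    return best, results
```

With toy stand-in models, `select_best_faf([("a", 0.3), ("b", -1.0), ("c", 0.9)], lambda p: p, lambda f: f > 0, float)` keeps images "a" and "c" as gradable and returns "c" as the highest-quality image.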

Inventors

  • Zhongdi Chu
  • Aishwarya Ramakrishnan
  • Kim Ngoc Le
  • Ketki Suresh Khapare
  • Carlos Gutierrez Candano

Assignees

  • VERANA HEALTH, INC.

Dates

Publication Date
2026-05-05
Application Date
2023-05-16

Claims (20)

  1. A system comprising: one or more processors of a machine; and a memory storing instructions that, when executed by the one or more processors, cause the machine to perform operations comprising: accessing a fundus autofluorescence (FAF) image from among a set of FAF images, the FAF image comprising a depiction of at least a portion of an eye of a patient; processing the FAF image from among the set of FAF images, wherein processing the FAF image comprises: determining a set of features based on the FAF image, applying the set of features to a deep learning model responsive to the determining the set of features based on the FAF image, wherein the deep learning model has been trained to determine a gradeability status of the FAF image based on the set of features, and applying the set of features to a machine-learned (ML) model based on the gradeability status, the ML model providing a quality score for the FAF image associated with the set of features; and responsive to processing the FAF image from among the set of FAF images, causing display of a presentation of the FAF image based on the processing of the FAF image.
  2. The system of claim 1, wherein causing the display of the presentation of the FAF image further comprises: determining the quality score exceeds a threshold value; and presenting a notification at a client device responsive to determining the quality score exceeds the threshold value.
  3. The system of claim 2, the operations further comprising: determining a ranking of the FAF image among the set of FAF images based on the quality score; and presenting the FAF image at a position among the set of FAF images based on the ranking.
  4. The system of claim 2, wherein the threshold value is a clinical trial threshold quality score associated with a clinical trial, the operations further comprising: prompting a user of the client device to include the patient in the clinical trial.
  5. The system of claim 1, the operations further comprising: determining the quality score of the FAF image is less than a threshold quality score; and prompting a user of a client device to re-capture the FAF image.
  6. The system of claim 1, wherein the gradeability status of the FAF image is one of gradable or non-gradable.
  7. The system of claim 6, wherein processing the FAF image further comprises: responsive to determining the gradeability status of the FAF image is non-gradable, removing the FAF image from the set of FAF images.
  8. The system of claim 1, wherein determining the set of features based on the FAF image comprises: determining a set of global features based on the FAF image, each global feature in the set of global features being based on one or more quality metrics of the FAF image; determining a set of patches, each patch comprising a portion of the FAF image; determining a set of local features based on the set of patches, each local feature in the set of local features being based on one or more quality metrics of a patch in the set of patches; and determining the set of features, the set of features including the set of global features and the set of local features.
  9. The system of claim 8, wherein determining the set of patches is based at least in part on identifying one or more anatomical structures in the FAF image.
  10. A method comprising: accessing a fundus autofluorescence (FAF) image from among a set of FAF images, the FAF image comprising a depiction of at least a portion of an eye of a patient; processing the FAF image from among the set of FAF images, wherein processing the FAF image comprises: determining a set of features based on the FAF image, applying the set of features to a deep learning model responsive to the determining the set of features based on the FAF image, wherein the deep learning model has been trained to determine a gradeability status of the FAF image based on the set of features, and applying the set of features to a machine-learned (ML) model based on the gradeability status, the ML model providing a quality score for the FAF image associated with the set of features; and responsive to processing the FAF image from among the set of FAF images, causing display of a presentation of the FAF image based on the processing of the FAF image.
  11. The method of claim 10, wherein causing the display of the presentation of the FAF image further comprises: determining the quality score exceeds a threshold value; and presenting a notification at a client device responsive to determining the quality score exceeds the threshold value.
  12. The method of claim 11, further comprising: determining a ranking of the FAF image among the set of FAF images based on the quality score; and presenting the FAF image at a position among the set of FAF images based on the ranking.
  13. The method of claim 11, wherein the threshold value is a clinical trial threshold quality score associated with a clinical trial, further comprising: prompting a user of the client device to include the patient in the clinical trial.
  14. The method of claim 10, further comprising: determining the quality score of the FAF image is less than a threshold quality score; and prompting a user of a client device to re-capture the FAF image.
  15. The method of claim 10, wherein the gradeability status of the FAF image is one of gradable or non-gradable.
  16. The method of claim 15, wherein processing the FAF image further comprises: responsive to determining the gradeability status of the FAF image is non-gradable, removing the FAF image from the set of FAF images.
  17. The method of claim 10, wherein determining the set of features based on the FAF image comprises: determining a set of global features based on the FAF image, each global feature in the set of global features being based on one or more quality metrics of the FAF image; determining a set of patches, each patch comprising a portion of the FAF image; determining a set of local features based on the set of patches, each local feature in the set of local features being based on one or more quality metrics of a patch in the set of patches; and determining the set of features, the set of features including the set of global features and the set of local features.
  18. The method of claim 17, wherein determining the set of patches is based at least in part on identifying one or more anatomical structures in the FAF image.
  19. A non-transitory computer-readable storage medium, the computer-readable storage medium including instructions that, when executed by a computer, cause the computer to: access a fundus autofluorescence (FAF) image from among a set of FAF images, the FAF image comprising a depiction of at least a portion of an eye of a patient; process the FAF image from among the set of FAF images, wherein processing the FAF image comprises: determine a set of features based on the FAF image, apply the set of features to a deep learning model responsive to determining the set of features based on the FAF image, wherein the deep learning model has been trained to determine a gradeability status of the FAF image based on the set of features, and apply the set of features to a machine-learned (ML) model based on the gradeability status, the ML model providing a quality score for the FAF image associated with the set of features; and responsive to processing the FAF image from among the set of FAF images, cause display of a presentation of the FAF image based on the processing of the FAF image.
  20. The non-transitory computer-readable storage medium of claim 19, wherein determining the set of features based on the FAF image comprises: determining a set of global features based on the FAF image, each global feature in the set of global features being based on one or more quality metrics of the FAF image; determining a set of patches, each patch comprising a portion of the FAF image; determining a set of local features based on the set of patches, each local feature in the set of local features being based on one or more quality metrics of a patch in the set of patches; and determining the set of features, the set of features including the set of global features and the set of local features.
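Claims 8, 17, and 20 recite combining global quality metrics computed over the whole image with local metrics computed per patch. A minimal sketch of that feature construction, assuming simple brightness, contrast, and gradient-based sharpness metrics and a uniform patch grid (the claims specify neither the metrics nor how patches are chosen, and also allow anatomy-guided patching per claims 9 and 18):

```python
# Illustrative global + per-patch feature extraction in the spirit of
# claims 8/17/20. The three quality metrics and the uniform grid are
# hypothetical choices; the claims leave both unspecified.
import numpy as np


def quality_metrics(region: np.ndarray) -> list:
    """Toy quality metrics: mean brightness, contrast, and sharpness."""
    gy, gx = np.gradient(region.astype(float))
    sharpness = float(np.mean(np.hypot(gx, gy)))  # mean gradient magnitude
    return [float(region.mean()), float(region.std()), sharpness]


def extract_features(image: np.ndarray, grid: int = 2) -> list:
    """Global metrics over the full image plus local metrics per patch."""
    features = quality_metrics(image)  # global features
    h, w = image.shape
    ph, pw = h // grid, w // grid
    for i in range(grid):
        for j in range(grid):
            patch = image[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw]
            features.extend(quality_metrics(patch))  # local features
    return features
```

For a 2x2 grid this yields 3 global plus 12 local values; the combined vector is what the claims would feed to the gradeability and quality models.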

Description

BACKGROUND

Ophthalmic images are routinely used in clinical practice and clinical research for various ocular diseases. For many severe ocular diseases, including various forms of end-stage macular degeneration such as geographic atrophy, ophthalmic images such as fundus autofluorescence (FAF) images are commonly used to diagnose and track disease progression. When interpreting ophthalmic images, it is important for the images to be of sufficient image quality to avoid incorrect interpretation and annotation, which can cause delayed or incorrect disease diagnosis, patient mismanagement in clinical practice, failed patient screening, and/or wasted resources in clinical trials and clinical research. FAF imaging is a non-invasive imaging modality that has become increasingly popular in both clinical research and clinical practice settings due to its ability to map naturally and pathologically occurring fluorophores in the posterior segment. FAF imaging is particularly useful in the diagnosis and management of retinal dystrophies, including but not limited to geographic atrophy, choroidal dystrophies, retinitis pigmentosa, and other similar ocular diseases. For example, in geographic atrophy clinical trials, FAF images are used to screen patients suitable for the clinical trial as well as to measure geographic atrophy lesion size changes as the primary efficacy endpoint. Conventionally, FAF images collected in clinical trials are interpreted manually by highly specialized graders in a central reading center. The highly specialized graders review FAF images and determine whether they are of high enough quality to be gradable. This time-consuming and resource-intensive manual process is possible in clinical trial settings, but is not scalable to real-world medical practice settings. This approach also relies heavily on subjective human judgment, making it prone to human error and bias.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced.

FIG. 1 is a diagrammatic representation of a networked environment in which the present disclosure may be deployed, according to some examples.
FIG. 2 is a diagrammatic representation of an image processing system, according to some examples.
FIG. 3A is a flowchart for a method of FAF image analysis, according to some examples.
FIG. 3B is a flowchart for a method of FAF image analysis, according to some examples.
FIG. 3C is a flowchart for a method of FAF image analysis, according to some examples.
FIG. 3D is a flowchart for a method of FAF image analysis, according to some examples.
FIG. 4 is a flowchart of a method for training a gradeability model, according to some examples.
FIG. 5A is a flowchart of a method for training a quality model, according to some examples.
FIG. 5B is a flowchart of a method for training a quality model, according to some examples.
FIG. 6 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed to cause the machine to perform any one or more of the methodologies discussed herein, according to some examples.

DETAILED DESCRIPTION

Systems and methods presented herein provide a means for FAF image quality analysis to aid in diagnosis and evaluation in clinical settings, including clinical trials. In a two-step approach, a FAF image is first processed to determine a gradeability status by a gradeability model, such as a deep learning model trained to determine a gradeability status. The gradeability model performs analysis conventionally performed by the highly specialized graders. The gradeability status can be "gradable" or "non-gradable," according to exemplary embodiments.
A non-gradable image is not analyzed further. A gradable image is further analyzed by a quality model. The quality model, such as a machine-learned model trained to determine image quality, performs additional processing to provide a quality score, which quantifies the quality of the FAF image. The FAF image and analysis results, including gradeability status and, if applicable, quality score, are provided for display in a presentation. The presentation may contain other information to aid a clinician. For example, the presentation can indicate whether the FAF image is of sufficient quality for clinical use, and if not, prompt the clinician to re-capture the image before the patient associated with the FAF image leaves the clinical setting, thereby improving efficiency of providing healthcare. Additionally or alternatively, the presentation can include whether the patient qualifies for inclusion in a particular clinical trial based on the FAF image analysis results. Moreover, in the event the FAF image is one of a set of multiple FAF images associated with the same patient on the same day, the presentation may determine the
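The presentation behaviors described above can be read as a small decision procedure over the analysis results: exclude non-gradable images, prompt re-capture below a clinical quality threshold, and flag trial eligibility above a trial threshold. A minimal sketch, in which the threshold values and action strings are hypothetical illustrations rather than values from the disclosure:

```python
# Hedged sketch of the presentation logic described in the detailed
# description. Threshold values and action strings are hypothetical.
def presentation_actions(quality_score, gradable,
                         clinical_threshold=0.5, trial_threshold=0.8):
    """Map one image's analysis results to the prompts a clinician sees."""
    actions = []
    if not gradable:
        # Non-gradable images are not analyzed further.
        actions.append("exclude: non-gradable image")
        return actions
    if quality_score < clinical_threshold:
        # Prompt re-capture before the patient leaves the clinical setting.
        actions.append("prompt: re-capture image before patient leaves")
    else:
        actions.append("display: image suitable for clinical use")
    if quality_score >= trial_threshold:
        # Image meets the clinical trial threshold quality score.
        actions.append("prompt: consider patient for clinical trial")
    return actions
```

For example, `presentation_actions(0.9, True)` yields both the clinical-use display and the trial-inclusion prompt, while `presentation_actions(0.3, True)` yields only the re-capture prompt.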