US-12616371-B2 - Real-time tracheal mapping using optical coherence tomography and artificial intelligence
Abstract
A system and method are disclosed for real-time mapping of a target hollow internal body structure such as the trachea using ultrafast swept-source optical coherence tomography (SS-OCT) and artificial intelligence (AI). The system includes an SS-OCT imaging device that captures high-resolution 3D images of the trachea at very high frame rates, and an AI module that analyzes and interprets the images in real time. The AI module can recognize patterns and features in the images and make predictions about the tracheal anatomy or disease status. The system can provide a detailed, up-to-date model of the trachea in real time and can be used for a variety of applications, including diagnosis, surgery, and monitoring.
Inventors
- Ryan Redford
Assignees
- Lazzaro Medical, Inc.
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2024-03-11
Claims (13)
- 1. A method for real-time mapping of a human or animal hollow target internal body structure using ultrafast swept-source optical coherence tomography (SS-OCT) and artificial intelligence (AI), comprising: providing a system for real-time mapping of a human or animal hollow target internal body structure, said system comprising: an ultrafast imaging device configured to capture 3D images of the hollow target internal body structure at very high frame rates; an artificial intelligence (AI) module configured to analyze and interpret the 3D images in real time; and a display configured to display the 3D images and the AI predictions in real time; capturing 3D images of the hollow target internal body structure at very high frame rates using an ultrafast OCT device, according to the equation: I(x, y, z, t) = f(O(x, y, z, t)) where I(x, y, z, t) is the 3D image of the hollow target internal body structure at position (x, y, z) and time t, O(x, y, z, t) is the interferometric signal from the ultrafast OCT device at position (x, y, z) and time t, and f( ) is a function that maps the interferometric signal to the image; analyzing and interpreting the 3D images in real time using an AI module, according to the equation: a = argmax f_θ(I) where a is the AI prediction that maximizes the probability p(a|I) of prediction a given the image I, f_θ( ) is a machine learning model with parameters θ, and argmax denotes the argument that maximizes the model output; and displaying the 3D images and the AI predictions in real time.
- 2. The method of claim 1, wherein the ultrafast imaging device is a swept-source optical coherence tomography (SS-OCT) device.
- 3. The method of claim 1, wherein the AI module is a machine learning model trained on a dataset of OCT images and corresponding hollow target internal body structure anatomy or disease labels.
- 4. The method of claim 1, further comprising storing the 3D images and the AI predictions in a database, and/or further comprising using the real-time mapping of the hollow target internal body structure for one or more of: diagnosis, surgery, and monitoring.
- 5. The method of claim 1, wherein the hollow target internal body structure is selected from the group consisting of internal body structures of the mouth and throat such as the trachea, lungs, large and small intestines, bladder, nasal and ear canals, blood vessels and arteries.
- 6. The method of claim 1, further comprising storing the 3D images and the AI predictions in a database according to the equation: D[I(x, y, z, t), p(a|I)] = (x, y, z, t, a) where D[ ] is a function that stores the image I(x, y, z, t) and the AI prediction p(a|I) in the database, and (x, y, z, t, a) is a tuple representing the position, time, and prediction.
- 7. The method of claim 1, further comprising using the real-time mapping of the hollow target internal body structure for one or more of: diagnosis, surgery, and monitoring, according to the equation: U(I, p(a|I)) = (d, s, m) where U( ) is a function that maps the image I and the AI prediction p(a|I) to the diagnostic, surgical, and monitoring outputs (d, s, m).
- 8. The method of claim 1, further comprising sparse coding-based image analysis: a = argmin Σ|I − D*a|² + λΣ|a| where a is the coefficient vector for the image I, D is the dictionary matrix, and λ is a constant, and wherein the coefficient vector optionally is obtained by minimizing the reconstruction error between the image I and its reconstruction D*a, using a sparsity-promoting regularization term λΣ|a|.
- 9. The method of claim 1, further comprising dictionary learning-based image analysis: D = argmin ΣΣ|I − D*a|² + λΣΣ|a| where D is the dictionary matrix, a is the coefficient vector for the image I, and λ is a constant, and wherein the dictionary matrix D optionally is obtained by minimizing the reconstruction error between the image I and its reconstruction D*a, using a sparsity-promoting regularization term λΣΣ|a|.
- 10. The method of claim 1, further comprising deep learning-based image analysis: f_θ(I) = g(h_θ(I)) where f_θ( ) is a deep learning model with parameters θ, I is the input image, h_θ( ) is the hidden representation of the image, and g( ) is the output layer of the model, and wherein the model optionally is trained by minimizing a loss function L(y, f_θ(I)) that measures the difference between the true label y and the model output f_θ(I), using stochastic gradient descent or another optimization algorithm.
- 11. The method of claim 1, further comprising geometric transformation-based image correction: I′ = T(I) where I is the input image, I′ is the corrected image, and T( ) is a geometric transformation function that maps the image I to the corrected image I′, wherein the transformation function T( ) optionally is a rotation, translation, scaling, or other type of transformation that is applied to the image I to correct for geometric distortion or other issues.
- 12. The method of claim 1, further comprising image denoising-based image correction: I′ = argmin Σ|I − I′|² + λΣ|∇I′| where I is the input image, I′ is the corrected image, Σ|I − I′|² is the reconstruction error between the image I and its denoised version I′, and Σ|∇I′| is a regularization term that promotes smoothness in the image I′, and wherein the corrected image I′ optionally is obtained by minimizing this energy function using an optimization algorithm such as gradient descent.
- 13. A computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform the method of claim 1.
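As an illustration of the sparse coding analysis recited in claim 8, the objective a = argmin Σ|I − D*a|² + λΣ|a| can be minimized with ISTA (iterative soft-thresholding). The claim does not name an algorithm, so ISTA is an illustrative choice, and all function and variable names below are hypothetical:

```python
import numpy as np

def ista_sparse_code(I, D, lam=0.1, n_iter=200):
    """Sketch of claim 8's objective  a = argmin ||I - D a||^2 + lam * sum|a|,
    solved with ISTA; the claim leaves the optimizer unspecified.

    I   : flattened image (or patch), shape (m,)
    D   : dictionary matrix, shape (m, k)
    lam : the sparsity constant lambda of the claim
    """
    L = 2.0 * np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the smooth term
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ a - I)       # gradient of the reconstruction error
        z = a - grad / L                     # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a
```

The same iteration, alternated with a least-squares update of D over a batch of images, gives a minimal version of the dictionary learning of claim 9.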
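The denoising energy of claim 12, I′ = argmin Σ|I − I′|² + λΣ|∇I′|, can likewise be minimized by gradient descent, as the claim itself suggests. The sketch below smooths the gradient magnitude with a small eps so the energy is differentiable; eps, the step size, and the periodic boundary handling are illustrative assumptions, not disclosed details:

```python
import numpy as np

def tv_denoise(I, lam=0.2, step=0.05, n_iter=500, eps=1e-2):
    """Sketch of claim 12: minimize sum|I - J|^2 + lam * sum|grad J|
    by gradient descent, with an eps-smoothed gradient magnitude."""
    J = I.astype(float).copy()
    for _ in range(n_iter):
        gx = np.roll(J, -1, axis=1) - J              # forward differences (periodic)
        gy = np.roll(J, -1, axis=0) - J
        mag = np.sqrt(gx**2 + gy**2 + eps)
        px, py = gx / mag, gy / mag                  # normalized gradient field
        # divergence of (px, py): the gradient of the regularization term
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        J -= step * (2.0 * (J - I) - lam * div)      # descend the energy
    return J
```

The data term anchors I′ to the measured image while the λΣ|∇I′| term suppresses high-frequency noise, which is the trade-off the claim describes.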
Description
CROSS REFERENCE TO RELATED APPLICATION
This application claims priority from U.S. Provisional Application Ser. No. 63/451,467, filed Mar. 10, 2023, the contents of which are incorporated herein in their entirety.
TECHNICAL FIELD
The present disclosure relates to medical imaging and to systems and methods for medical imaging. The disclosure has particular applicability in the field of medical imaging of target hollow internal body structures of humans and animals, such as the trachea, and will be described in connection with such utility, although other utilities are contemplated.
BACKGROUND AND SUMMARY
The trachea is a vital organ that carries air to and from the lungs. It is susceptible to a variety of diseases and disorders, and accurate, real-time mapping of the trachea can be helpful for diagnosis, treatment, and monitoring of disease progression and the effect of treatment. Computed tomography (CT) currently is the modality of choice for imaging the trachea and bronchi: it provides clear anatomical detail on cross-sectional imaging and a direct display of tracheobronchial anatomy. Magnetic resonance imaging (MRI) also has been used for imaging the trachea and bronchi. However, CT and MRI are not well suited for real-time imaging of the trachea due to their low temporal resolution. Optical coherence tomography (OCT) is an imaging technique that uses low-coherence light, typically near-infrared light, to capture micrometer-resolution two- and three-dimensional images from within optical scattering media such as biological tissue. OCT traditionally has been employed in non-invasive imaging techniques, based on optical coherence, developed to visualize vascular networks in the human retina, choroid, skin, etc. OCT employs low-coherence interferometry to measure changes in backscattered light and thereby differentiate areas of blood flow from areas of static tissue.
Swept-source ultrafast optical coherence tomography (UF-OCT) can provide high-resolution 3D images of hollow target internal body structures such as the trachea at very high frame rates. However, manual interpretation of UF-OCT images can be time-consuming and subjective, and there is a need for a more efficient and accurate method for analyzing and interpreting the images in real time. The present disclosure addresses these and other needs by providing a system and method for real-time mapping of a human or animal hollow target internal body structure such as the trachea using OCT/UF-OCT and artificial intelligence (AI). The system includes a UF-OCT imaging device that captures high-resolution 3D images of a hollow target internal body structure, e.g., the trachea, at very high frame rates, typically 1 MHz to 1000 GHz, and an AI module that analyzes and interprets the images in real time. The AI module can recognize patterns and features in the images and make predictions about the hollow internal body structure anatomy or disease status. The system can provide detailed, up-to-date 3D images of the hollow internal body structure in real time and can be used for a variety of applications, including diagnosis, surgery, and monitoring. The present disclosure in one aspect provides a system for real-time mapping of a hollow target body structure, comprising: an ultrafast imaging device configured to capture 3D images of the hollow target internal body structure at very high frame rates; an artificial intelligence (AI) module configured to analyze and interpret the 3D images in real time; and a display configured to display the 3D images and the AI predictions in real time. In one aspect the target body structure is a hollow organ such as the trachea. In another aspect the ultrafast imaging device is an ultrafast (UF) swept-source optical coherence tomography (SS-OCT) device.
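The capture-analyze pipeline described above (I = f(O), then a = argmax f_θ(I)) can be sketched in outline. The Fourier reconstruction standing in for f( ) is the conventional swept-source OCT reconstruction, and the linear classifier standing in for the trained model f_θ is a hypothetical placeholder, not the disclosed implementation:

```python
import numpy as np

def reconstruct(O):
    """I = f(O): map the interferometric signal to an image. For SS-OCT, f is
    conventionally the magnitude of the Fourier transform of each spectral
    fringe along the wavenumber axis (an illustrative choice of f)."""
    return np.abs(np.fft.fft(O, axis=-1))

def predict(I, W, b):
    """a = argmax f_theta(I): a linear classifier stands in for the trained
    model f_theta; W and b are hypothetical learned parameters."""
    logits = I.ravel() @ W + b
    return int(np.argmax(logits))
```

In a real-time loop, each fringe acquired by the swept source would pass through `reconstruct`, the resulting A-lines would be assembled into the 3D volume, and `predict` (in practice a trained network) would emit the anatomy or disease label displayed alongside the images.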
In a further aspect the AI module is a machine learning model trained on a dataset of SS-OCT images and corresponding hollow target body structure anatomy or disease labels. In yet another aspect the display is configured to display the 3D images of the hollow target body structure and the AI predictions on the same device as the ultrafast imaging device. The system also may comprise a database configured to store the 3D images of the hollow target body structure and the AI predictions. In a further aspect of the disclosure the hollow target internal body structure is selected from the group consisting of internal body structures of the mouth and throat such as the trachea, lungs, large and small intestines, bladder, nasal and ear canals, blood vessels and arteries. The present disclosure also provides a method for real-time mapping of a hollow target body structure such as the trachea, comprising: capturing 3D images of the hollow target body structure at very high frame rates using an ultrafast optical coherence tomography (UF-OCT) imaging device; analyzing and interpreting the 3D image