US-12620153-B2 - Methods and apparatus for reconstruction, adjustment and display of medical images based on displacement vector field data generated by a trained machine learning process
Abstract
Systems and methods for reconstructing medical images based on motion estimation are disclosed. Positron emission tomography (PET) measurement data, and modality measurement data from an anatomy modality, such as computed tomography (CT) data, are received from an image scanning system. A trained deep learning process is applied to the PET measurement data and the modality measurement data to generate displacement vector field (DVF) data characterizing motion between the PET measurement data and the modality measurement data. A modality image is reconstructed from the modality measurement data, and the modality image is adjusted based on the DVF data. A PET image is then reconstructed from the PET measurement data and the adjusted modality image, and the PET image is adjusted based on a computed inverse of the DVF data. The adjusted PET image and the modality image spatially match and are displayed.
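The warp and inverse-warp operations summarized above amount to resampling a volume through a dense displacement vector field and numerically inverting that field. The sketch below illustrates both steps with SciPy; the function names and the fixed-point inversion scheme are illustrative choices, not the patent's disclosed implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(image, dvf):
    """Resample a 3-D image with a dense displacement vector field.

    dvf has shape (3, D, H, W): one (dz, dy, dx) offset per voxel.
    Output voxel v is sampled from the input at position v + dvf[:, v].
    """
    grid = np.indices(image.shape).astype(float)   # (3, D, H, W) voxel grid
    coords = grid + dvf                            # displaced sample points
    return map_coordinates(image, coords, order=1, mode="nearest")

def invert_dvf(dvf, iterations=20):
    """Fixed-point approximation of the inverse field:
    solve inv(v) = -dvf(v + inv(v)) by iteration. Adequate for small,
    smooth displacements; not a general diffeomorphic inverse."""
    inv = np.zeros_like(dvf)
    for _ in range(iterations):
        inv = -np.stack([warp_image(component, inv) for component in dvf])
    return inv

# Toy example: a uniform one-voxel shift along the first axis.
img = np.arange(64, dtype=float).reshape(4, 4, 4)
dvf = np.zeros((3, 4, 4, 4))
dvf[0] = 1.0
warped = warp_image(img, dvf)     # slices shifted by one voxel
inv = invert_dvf(dvf)             # approximately -1 along axis 0
```

Warping the modality image forward with the DVF and the reconstructed PET image back with the inverse field is what makes the two displayed images spatially match.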
Inventors
- Joshua Schaefferkoetter
- James Hamill
Assignees
- SIEMENS MEDICAL SOLUTIONS USA, INC.
Dates
- Publication Date: 2026-05-05
- Application Date: 2023-11-03
Claims (20)
- 1. A computer-implemented method comprising: generating displacement vector field (DVF) data based on applying a trained machine learning process to positron emission tomography (PET) measurement data and modality measurement data, wherein the DVF data characterizes offsets between the PET measurement data and the modality measurement data; reconstructing a modality image based on the modality measurement data; adjusting the modality image based on the DVF data; reconstructing a PET image based on the PET measurement data and the adjusted modality image; adjusting the PET image based on the DVF data; and providing the adjusted PET image for display.
- 2. The computer-implemented method of claim 1, further comprising: determining an inverse of the DVF data; and adjusting the PET image based on the inverse of the DVF data.
- 3. The computer-implemented method of claim 1, further comprising: generating an attenuation map based on the adjusted modality image; and reconstructing the PET image based on the attenuation map.
- 4. The computer-implemented method of claim 1, wherein adjusting the modality image based on the DVF data comprises resampling the modality image based on the DVF data.
- 5. The computer-implemented method of claim 1, further comprising: inputting the PET measurement data to a first convolutional neural network (CNN) of the trained machine learning process to generate first feature data; inputting the modality measurement data to a second CNN of the trained machine learning process to generate second feature data; and inputting the first feature data and the second feature data to a third CNN of the trained machine learning process to generate the DVF data.
- 6. The computer-implemented method of claim 1, further comprising: inputting the PET measurement data to a first convolutional neural network (CNN) of the trained machine learning process to generate first feature data; inputting the modality measurement data to the first CNN of the trained machine learning process to generate second feature data; and inputting the first feature data and the second feature data to a second CNN of the trained machine learning process to generate the DVF data.
- 7. The computer-implemented method of claim 6, wherein the DVF data characterizes offsets between corresponding features of the first feature data and the second feature data.
- 8. The computer-implemented method of claim 1, wherein the DVF data comprises a 3-dimensional vector for each of a plurality of pixels of the modality image, the 3-dimensional vectors characterizing motion between the modality image and the PET image.
- 9. The computer-implemented method of claim 1, further comprising receiving the PET measurement data and the modality measurement data from an image scanning system.
- 10. The computer-implemented method of claim 1, further comprising providing the modality image superimposed with the adjusted PET image for display.
- 11. The computer-implemented method of claim 1, further comprising: training the machine learning process based on a first PET dataset and a first modality dataset; and validating the machine learning process based on a second PET dataset and a second modality dataset.
- 12. The computer-implemented method of claim 11, further comprising: determining a loss between output DVF data of the machine learning process and expected DVF data; and determining the machine learning process is validated when the loss is beyond a threshold.
- 13. The computer-implemented method of claim 1, wherein the modality measurement data is computed tomography (CT) measurement data.
- 14. A non-transitory, computer readable medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising: generating displacement vector field (DVF) data based on applying a trained machine learning process to positron emission tomography (PET) measurement data and modality measurement data, wherein the DVF data characterizes offsets between the PET measurement data and the modality measurement data; reconstructing a modality image based on the modality measurement data; adjusting the modality image based on the DVF data; reconstructing a PET image based on the PET measurement data and the adjusted modality image; adjusting the PET image based on the DVF data; and providing the adjusted PET image for display.
- 15. The non-transitory, computer readable medium of claim 14 storing instructions that, when executed by the at least one processor, further cause the at least one processor to perform operations comprising: determining an inverse of the DVF data; and adjusting the PET image based on the inverse of the DVF data.
- 16. The non-transitory, computer readable medium of claim 14 storing instructions that, when executed by the at least one processor, further cause the at least one processor to perform operations comprising: generating an attenuation map based on the adjusted modality image; and reconstructing the PET image based on the attenuation map.
- 17. The non-transitory, computer readable medium of claim 14 storing instructions that, when executed by the at least one processor, further cause the at least one processor to perform operations comprising: inputting the PET measurement data to a first convolutional neural network (CNN) of the trained machine learning process to generate first feature data; inputting the modality measurement data to a second CNN of the trained machine learning process to generate second feature data; and inputting the first feature data and the second feature data to a third CNN of the trained machine learning process to generate the DVF data.
- 18. A system comprising: a memory storing instructions; and at least one processor communicatively coupled to the memory and configured to execute the instructions to: generate displacement vector field (DVF) data based on applying a trained machine learning process to positron emission tomography (PET) measurement data and modality measurement data, wherein the DVF data characterizes offsets between the PET measurement data and the modality measurement data; reconstruct a modality image based on the modality measurement data; adjust the modality image based on the DVF data; reconstruct a PET image based on the PET measurement data and the adjusted modality image; adjust the PET image based on the DVF data; and provide the adjusted PET image for display.
- 19. The system of claim 18, wherein the at least one processor is configured to execute the instructions to: determine an inverse of the DVF data; and adjust the PET image based on the inverse of the DVF data.
- 20. The system of claim 18, wherein the at least one processor is configured to execute the instructions to: generate an attenuation map based on the adjusted modality image; and reconstruct the PET image based on the attenuation map.
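Claims 5 and 6 recite feeding the PET and modality measurement data through one or more convolutional networks whose feature outputs are combined by a further network that produces the DVF. Below is a minimal structural sketch of the shared-encoder variant of claim 6; the layer shapes, random (untrained) weights, and function names are all hypothetical, intended only to show how two inputs can share one encoder whose concatenated features drive a per-voxel 3-vector head.

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)

def encoder(volume, kernels):
    """Shared encoder (claim 6): one 3-D conv layer with ReLU,
    applied identically to the PET and modality volumes."""
    feats = [np.maximum(convolve(volume, k, mode="nearest"), 0.0)
             for k in kernels]
    return np.stack(feats)                        # (C, D, H, W)

def dvf_head(pet_feats, mod_feats, head_kernels):
    """Second network: map concatenated features to a per-voxel
    3-vector (dz, dy, dx) -- the displacement vector field."""
    x = np.concatenate([pet_feats, mod_feats])    # (2C, D, H, W)
    components = []
    for axis_kernels in head_kernels:             # one kernel set per axis
        components.append(sum(convolve(c, k, mode="nearest")
                              for c, k in zip(x, axis_kernels)))
    return np.stack(components)                   # (3, D, H, W)

enc_k = rng.normal(size=(4, 3, 3, 3)) * 0.1       # 4 untrained 3x3x3 kernels
head_k = rng.normal(size=(3, 8, 3, 3, 3)) * 0.1   # 3 outputs x 8 input channels

pet = rng.random((8, 8, 8))
modality = rng.random((8, 8, 8))
dvf = dvf_head(encoder(pet, enc_k), encoder(modality, enc_k), head_k)
```

In the claimed system the weights would come from training against expected DVF data (claims 11-12); here they are random, so only the data flow, not the output values, is meaningful.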
Description
FIELD

Aspects of the present disclosure relate in general to medical diagnostic systems and, more particularly, to reconstructing images from nuclear imaging systems for diagnostic and reporting purposes.

BACKGROUND

Nuclear imaging systems can employ various technologies to capture images. For example, some nuclear imaging systems employ positron emission tomography (PET) to capture images. PET is a nuclear medicine imaging technique that produces tomographic images representing the distribution of positron emitting isotopes within a body. Some nuclear imaging systems employ computed tomography (CT), for example, as a co-modality. CT is an imaging technique that uses x-rays to produce anatomical images. Some nuclear imaging systems combine images from PET and CT scanners during an image fusion process to produce images that show information from both a PET scan and a CT scan (e.g., PET/CT systems). Magnetic Resonance Imaging (MRI) is an imaging technique that uses magnetic fields and radio waves to generate anatomical and functional images.

Typically, these nuclear imaging systems capture measurement data, and process the captured measurement data using mathematical algorithms to reconstruct medical images. For PET/CT systems, the CT measurement information can be used to correct the PET measurement data for attenuation (i.e., attenuation correction of the PET image). Similarly, some nuclear imaging systems combine images from PET and MRI scanners to produce images that show information from both a PET scan and an MRI scan.

These conventional models, however, can have several drawbacks. For instance, subjects may move during the PET and CT scans, thereby causing misalignment between the PET and CT measurement data and leading to inaccurate attenuation correction. Moreover, many image formation processes employed by at least some of these systems rely on approximations to compensate for detection loss.
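The CT-based attenuation correction described above can be illustrated with a toy calculation: the correction factor for a line of response is the exponential of the integral of the linear attenuation coefficient (mu, derived from the CT image) along that line, and the measured counts are multiplied by that factor. The sketch below assumes an idealized single-axis parallel-ray geometry; real scanners integrate along many projection angles, and the function name and voxel size are hypothetical.

```python
import numpy as np

def attenuation_correct(sinogram_counts, mu_map, voxel_size_cm=0.2):
    """Toy attenuation correction for parallel rays along axis 0.

    Each detector bin's attenuation correction factor (ACF) is
    exp(integral of mu along the line of response); the measured
    counts are multiplied by the ACF. Illustrative geometry only.
    """
    line_integrals = mu_map.sum(axis=0) * voxel_size_cm   # approx. ∫ mu dl
    acf = np.exp(line_integrals)                          # correction per ray
    return sinogram_counts * acf

counts = np.full((4, 4), 100.0)             # measured counts per detector bin
mu_water = np.full((10, 4, 4), 0.096)       # water-like mu in 1/cm, 10 voxels deep
corrected = attenuation_correct(counts, mu_water)
```

Because the mu-map comes from the CT image, any PET/CT misalignment puts the wrong mu values along each line of response, which is why the motion compensation described in this disclosure matters for quantitative accuracy.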
The approximations, however, can cause inaccurate and lower quality medical images. As such, there are opportunities to address deficiencies in nuclear imaging systems.

SUMMARY

Systems and methods for inter-modality, elastic registration of images using deep learning-based processes for image alignment are disclosed.

In some embodiments, a computer-implemented method includes receiving positron emission tomography (PET) measurement data from an image scanning system. The method also includes receiving modality measurement data from the image scanning system. Further, the method includes generating displacement vector field (DVF) data based on applying a machine learning process to the PET measurement data and the modality measurement data, wherein the DVF data characterizes offsets between the PET measurement data and the modality measurement data. The method also includes reconstructing a modality image based on the modality measurement data. The method further includes adjusting the modality image based on the DVF data. The method also includes reconstructing a PET image based on the PET measurement data and the adjusted modality image. Further, the method includes adjusting the PET image based on the DVF data. The method also includes providing the PET image for display.

In some embodiments, a non-transitory computer readable medium stores instructions that, when executed by at least one processor, cause the at least one processor to perform operations including receiving PET measurement data from an image scanning system. The operations also include receiving modality measurement data from the image scanning system. Further, the operations include generating DVF data based on applying a machine learning process to the PET measurement data and the modality measurement data, wherein the DVF data characterizes offsets between the PET measurement data and the modality measurement data.
The operations also include reconstructing a modality image based on the modality measurement data. The operations further include adjusting the modality image based on the DVF data. The operations also include reconstructing a PET image based on the PET measurement data and the adjusted modality image. Further, the operations include adjusting the PET image based on the DVF data. The operations also include providing the PET image for display.

In some embodiments, a system includes a memory storing instructions, and at least one processor communicatively coupled to the memory. The at least one processor is configured to execute the instructions to perform operations. The operations include receiving PET measurement data from an image scanning system. The operations also include receiving modality measurement data from the image scanning system. Further, the operations include generating DVF data based on applying a machine learning process to the PET measurement data and the modality measurement data, wherein the DVF data characterizes offsets between the PET measurement data and the modality measurement data. The operations also