JP-7855375-B2 - Medical image processing method, medical image processing apparatus, X-ray CT apparatus, and medical image processing program
Inventors
- Tzu-Cheng Lee
- Liang Cai
- Jian Zhou
- Zhou Yu
- Masakazu Matsuura
- Takuya Nemoto
- Hiroki Taguchi
Assignees
- Canon Medical Systems Corporation
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2022-03-22
- Priority Date
- 2021-04-07
Claims (13)
- A medical image processing method comprising: performing a first CT scan on a subject with a first CT apparatus having a detector of a first pixel size, using a first imaging region of the detector, to acquire a first projection data group; reconstructing the first projection data group to obtain a first CT image having a first resolution; applying a machine learning model that improves the resolution of the first CT image, to obtain a processed CT image having a resolution higher than the first resolution; and outputting the processed CT image for display or analysis, wherein the machine learning model is obtained by machine learning using a second CT image, obtained by performing a second CT scan on a subject with a second CT apparatus having a detector of a second pixel size smaller than the first pixel size, using a second imaging region of that detector smaller than the first imaging region, and a downsampled image obtained by downsampling the second CT image to the first pixel size.
- The medical image processing method according to claim 1, wherein the processed CT image is acquired by combining, in a predetermined ratio, the first CT image and an image obtained by applying the machine learning model to the first CT image.
- The medical image processing method according to claim 2, wherein the predetermined ratio is obtained based on user input or derived from a set of imaging conditions.
- The medical image processing method according to claim 1, wherein applying the machine learning model comprises: generating a plurality of 3D partial images based on the first CT image; inputting the plurality of 3D partial images into the machine learning model to obtain a plurality of processed 3D partial images; and combining the plurality of processed 3D partial images to obtain the processed CT image.
- The medical image processing method according to claim 4, wherein the plurality of 3D partial images are generated such that at least two of them partially overlap.
- The medical image processing method according to claim 4, wherein the plurality of processed 3D partial images are combined by applying a filter to the joint between two adjacent processed 3D partial images.
- The medical image processing method according to any one of claims 1 to 6, wherein the machine learning model applies super-resolution processing to the first CT image.
- The medical image processing method according to any one of claims 1 to 7, wherein the machine learning model applies super-resolution processing and noise reduction processing to the first CT image.
- The medical image processing method according to any one of claims 1 to 8, wherein the machine learning model is trained using, as training images, the second CT image and a third CT image that is generated from either the second CT image or a second projection data group and has lower resolution and greater noise than the second CT image.
- The medical image processing method according to claim 9, wherein the machine learning model is trained using, as training images, the second CT image and a fourth CT image based on a third projection data group obtained by applying noise addition processing and further resolution reduction processing to the second projection data group.
- A medical image processing apparatus comprising: an acquisition unit that acquires a first projection data group obtained by performing a first CT scan on a subject with a first CT apparatus having a detector of a first pixel size, using a first imaging region of the detector; a processing unit that reconstructs the first projection data group to obtain a first CT image having a first resolution, and applies a machine learning model that improves the resolution of the first CT image to obtain a processed CT image having a resolution higher than the first resolution; and an output unit that outputs the processed CT image for display or analysis processing, wherein the machine learning model is obtained by machine learning using a second CT image, obtained by performing a second CT scan on a subject with a second CT apparatus having a detector of a second pixel size smaller than the first pixel size, using a second imaging region of that detector smaller than the first imaging region, and a downsampled image obtained by downsampling the second CT image to the first pixel size.
- An X-ray CT apparatus comprising the medical image processing apparatus according to claim 11.
- A medical image processing program that causes a computer to: acquire a first projection data group by performing a first CT scan on a subject with a first CT apparatus having a detector of a first pixel size, using a first imaging region of the detector; reconstruct the first projection data group to obtain a first CT image having a first resolution; apply a machine learning model that improves the resolution of the first CT image, to obtain a processed CT image having a resolution higher than the first resolution; and output the processed CT image for display or analysis, wherein the machine learning model is obtained by machine learning using a second CT image, obtained by performing a second CT scan on a subject with a second CT apparatus having a detector of a second pixel size smaller than the first pixel size, using a second imaging region of that detector smaller than the first imaging region, and a downsampled image obtained by downsampling the second CT image to the first pixel size.
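The training-data construction of claim 1 (downsampling a small-pixel UHR CT image to the larger pixel size of the first detector to form input/target pairs) can be sketched as follows. The 2× factor and the block-averaging kernel are illustrative assumptions; the claims only specify downsampling to the first pixel size.

```python
import numpy as np

def make_training_pair(uhr_image, factor=2):
    """Build a (low-res input, high-res target) pair by block-averaging
    a UHR CT slice down to the larger pixel size. The averaging kernel
    is an assumption, not prescribed by the claims."""
    h, w = uhr_image.shape
    h, w = h - h % factor, w - w % factor  # crop to a multiple of factor
    target = uhr_image[:h, :w]
    low_res = target.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return low_res, target

uhr = np.random.default_rng(0).random((512, 512)).astype(np.float32)
lr, hr = make_training_pair(uhr)  # lr: (256, 256), hr: (512, 512)
```

Each `(lr, hr)` pair would then serve as one training example for the super-resolution model.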
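Claims 2 and 3 describe blending the model output with the original image in a predetermined ratio. A minimal sketch, assuming a simple linear blend; the mapping from imaging conditions to a ratio (the lookup table below) is hypothetical, since the claims leave it open:

```python
import numpy as np

# Hypothetical mapping from an imaging protocol to a blend ratio;
# the claims do not specify how imaging conditions determine it.
RATIO_BY_PROTOCOL = {"cardiac": 0.8, "routine_chest": 0.5}

def blend(first_ct, sr_output, ratio):
    """Linearly combine the first CT image with the model output.
    ratio=1.0 keeps only the super-resolved image, 0.0 the original."""
    if not 0.0 <= ratio <= 1.0:
        raise ValueError("ratio must lie in [0, 1]")
    return ratio * sr_output + (1.0 - ratio) * first_ct

a = np.zeros((4, 4))
b = np.ones((4, 4))
mixed = blend(a, b, RATIO_BY_PROTOCOL["cardiac"])  # every voxel = 0.8
```

A user-supplied ratio would simply be passed to `blend` in place of the table lookup.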
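Claims 4 to 6 describe patch-based inference: tiling the image into overlapping 3D partial images, running each through the model, and filtering the joints when recombining. A sketch using uniform averaging over the overlaps as a simple stand-in for the claimed joint filter; patch and overlap sizes are illustrative:

```python
import numpy as np

def patchwise_apply(volume, model, patch=8, overlap=4):
    """Apply `model` to overlapping cubic sub-volumes and average the
    overlapping regions, a simple substitute for the joint filter."""
    step = patch - overlap
    out = np.zeros(volume.shape, dtype=np.float32)
    weight = np.zeros(volume.shape, dtype=np.float32)
    for z in range(0, volume.shape[0] - patch + 1, step):
        for y in range(0, volume.shape[1] - patch + 1, step):
            for x in range(0, volume.shape[2] - patch + 1, step):
                block = volume[z:z+patch, y:y+patch, x:x+patch]
                out[z:z+patch, y:y+patch, x:x+patch] += model(block)
                weight[z:z+patch, y:y+patch, x:x+patch] += 1.0
    return out / np.maximum(weight, 1.0)

vol = np.random.default_rng(1).random((16, 16, 16)).astype(np.float32)
restored = patchwise_apply(vol, lambda b: b)  # identity model as a placeholder
```

With an identity "model", the averaged overlaps reproduce the input volume, which makes the seam handling easy to verify before substituting a real network.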
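Claims 9 and 10 train on pairs where the low-quality image derives from degraded projection data (noise addition plus resolution reduction). A sketch of such a degradation step applied to a sinogram; the Gaussian noise model and detector-channel binning are assumptions, as the claims leave the exact processing unspecified:

```python
import numpy as np

def degrade_projections(sinogram, bin_factor=2, noise_sigma=0.01, seed=0):
    """Simulate a noisier, lower-resolution acquisition from UHR
    projection data: add Gaussian noise (assumed noise model), then
    bin adjacent detector channels to the coarser pixel pitch."""
    rng = np.random.default_rng(seed)
    noisy = sinogram + rng.normal(0.0, noise_sigma, sinogram.shape)
    views, channels = noisy.shape
    channels -= channels % bin_factor  # drop trailing channels if needed
    binned = noisy[:, :channels].reshape(views, channels // bin_factor, bin_factor)
    return binned.mean(axis=2)

sino = np.random.default_rng(2).random((360, 896))
low = degrade_projections(sino)  # shape (360, 448)
```

Reconstructing `low` would yield the fourth CT image of claim 10, paired with the second CT image as the training target.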
Description
This disclosure relates generally to medical image processing and diagnostic imaging, and more particularly to improving the spatial resolution of computed tomography (CT) images using deep learning models.

CT detectors have advanced significantly in both scanning range and spatial resolution, achieving a wide detection range with a small detection-element size. One advantage of wide-area CT detection systems is the expanded scanning range, which enables faster scanning and dynamic imaging of organs such as the heart and brain. By extending the scanning range per rotation, wide-area CT detection systems shorten scan times and eliminate the need for multiple data acquisitions. With such a system, it may be possible to scan the entire heart, the neonatal chest, or even the feet and ankles in a single rotation, with high uniformity along the Z-axis and a low radiation dose.

High-spatial-resolution CT systems, on the other hand, provide diagnostic images that are valuable in tasks such as tumor classification and disease diagnosis. Even where wide-range ultra-high-resolution (UHR) CT detection systems are commercially available, however, their system costs are high, and complex signal processing and image reconstruction can pose problems. Although such systems offer both a wider scanning range and higher resolution, in a commercial setting the disadvantages of cost and complexity may outweigh the advantages.

Super-resolution (SR) is a technique that improves the effective resolution of an imaging system by restoring high-resolution information from low-resolution images. SR algorithms fall into four categories: prediction-based models, edge-based models, image-statistics-based models, and example-based models.
In this field, there is a demand for deep convolutional neural network (DCNN)-based SR methods that can achieve superior image quality and faster processing speeds than conventional methods.

U.S. Patent Application Publication No. 2013/051519

Figure 1A is a diagram showing an overview of the entire process of the embodiments illustrated in this disclosure.
Figure 1B is a diagram illustrating an overview of a hardware system used in the training and inference phases of a machine learning model, based on one or more aspects of this disclosure.
Figure 2 shows a workflow for creating data to acquire and fine-tune a trained deep machine learning model (DCNN), based on one or more aspects of this disclosure.
Figure 3 is a flowchart for approximating a wide-area UHR-CT image, based on one or more aspects of this disclosure.
Figure 4 is a block diagram showing a learning framework for obtaining an optimized pre-trained DCNN model, based on one or more aspects of this disclosure.
Figure 5A shows an example of a DL network that is a feedforward artificial neural network (ANN), based on one embodiment.
Figure 5B shows an example of a DL network that is a convolutional neural network (CNN), based on one embodiment.
Figure 5C shows an example of realizing a convolution in one neuron node of a convolutional layer, based on one embodiment.
Figure 5D shows an example of realizing a 3-channel volume convolutional layer for volume image data, based on one embodiment.
Figure 6 is a flowchart showing the procedure of a second embodiment for approximating a wide-area UHR-CT image, based on one or more aspects of the present disclosure.
Figure 7 is a flowchart showing the procedure of a third embodiment for approximating a wide-area UHR-CT image, based on one or more aspects of the present disclosure.
Figure 8 is a flowchart showing the procedure of a fourth embodiment for approximating a wide-area UHR-CT image, based on one or more aspects of the present disclosure.
Figure 9 shows a fifth embodiment of a workflow for creating data to acquire and fine-tune a trained deep machine learning model (DCNN), based on one or more aspects of the present disclosure.
Figure 10 is a flowchart of a fifth embodiment for obtaining a DCNN-applied image that approximates a wide-area UHR-CT image, based on one or more aspects of the present disclosure.
Figure 11 is a schematic diagram showing an embodiment of a computer usable with one or more embodiments of at least one apparatus, system, method, and/or storage medium for generating, optimizing, and applying a model to generate a DCNN-applied image that closely resembles or approximates a wide-range UHR-CT image.
Figure 12 is a schematic diagram showing another embodiment of such a computer usable with one or more embodiments of at least one apparatus, system, method, and/or storage medium for generating, optimizing, and applying a model to generate a DCNN-applied image that closely resembles or approximates a wide-range UHR-CT image.
Figure 13 shows a method for generating a