CN-122023595-A - CT reconstruction method and system based on data rearrangement and deep learning angle extrapolation
Abstract
The invention relates to the technical field of computed tomography (CT), and in particular to a CT reconstruction method and a CT reconstruction system based on data rearrangement and deep learning angle extrapolation. The method comprises the steps of: data rearrangement, in which actual ray-source sampling points and detector pixels are placed in opposing roles to construct a virtual scan geometry and the truncated projection set is recombined into a global projection; deep learning angle extrapolation; image reconstruction; and image post-processing, in which the initial reconstructed image is input into a dual-domain iterative optimization network and end-to-end collaborative training is realized through a joint projection-domain/image-domain loss. The invention effectively solves the artifacts and structural distortion produced by traditional methods when reconstructing images under limited-angle and truncated-projection conditions, and significantly improves the imaging quality and reliability of STCT systems.
Inventors
- LI LEI
- GUAN YANYU
- HAN YU
- YAN BIN
- ZHU LINLIN
- XI XIAOQI
- WANG CHUNHUI
- TAN SIYU
- SUN YANMIN
Assignees
- Information Engineering University, PLA Cyberspace Force (中国人民解放军网络空间部队信息工程大学)
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2025-12-25
Claims (10)
- 1. A CT reconstruction method based on data rearrangement and deep learning angle extrapolation, characterized by comprising the following steps: Step 1, STCT source-translation scanning: at least three X-ray sources arranged at a specific spacing and inclination angle are controlled, with a single high-sensitivity area-array detector fixedly arranged on the opposite side; the X-ray sources translate along a linear track, and projection data are collected at a plurality of sampling positions to form a truncated projection set under a limited angle; Step 2, data rearrangement: actual ray-source sampling points and detector pixels are placed in opposing roles to construct a virtual scan geometry, and the truncated projection set is recombined into a global projection; Step 3, deep learning angle extrapolation: the rearranged limited-angle global projection is input into a physics-aware Transformer network, projection data for the missing angles are predicted, and a complete full-angle projection set is formed; Step 4, image reconstruction: filtered back-projection reconstruction is performed on the complete full-angle projection set to obtain an initial reconstructed image; and Step 5, image post-processing: the initial reconstructed image is input into a dual-domain iterative optimization network, and a final reconstructed image is iteratively output through image-domain optimization, projection-domain optimization, and a projection-domain/image-domain dual-domain consistency constraint.
- 2. The CT reconstruction method based on data rearrangement and deep learning angle extrapolation according to claim 1, wherein in step 1 the scan geometry parameters include the source spacing distance s, the detector length d, the source-to-object distance l, and the object-to-detector distance h.
- 3. The CT reconstruction method based on data rearrangement and deep learning angle extrapolation according to claim 1, wherein the data rearrangement in step 2 specifically comprises: mapping the actual ray-source sampling point sequence {S_n | n = 1, 2, …, N} to a virtual detector array and mapping the actual detector pixel sequence {D_m | m = 1, 2, …, M} to a virtual ray-source sampling point sequence, where N is the number of ray-source sampling points and M is the number of detector pixels; mapping each actual ray-source sampling point S_n to a corresponding virtual pixel and each actual detector pixel D_m to a corresponding virtual source sampling point; and recombining the original truncated projection data R(n, m) into global projection data R(M+1−m, N+1−n) according to the X-ray attenuation characteristics.
- 4. The method of claim 1, wherein the physics-aware Transformer network in step 3 employs an encoder-decoder architecture; the encoder comprises 6 Transformer layers, each composed of a multi-head self-attention mechanism with 8 attention heads and a feed-forward neural network; and the decoder likewise comprises 6 Transformer layers and employs a cross-attention mechanism and an autoregressive prediction strategy.
- 5. The CT reconstruction method based on data rearrangement and deep learning angle extrapolation of claim 4, wherein the encoder comprises the following specialized modules: a geometry-aware two-dimensional position-encoding module, which encodes the equivalent scanning angle θ of the rearranged projection data and the detector offset u into a position embedding vector P(θ, u), explicitly embedding the system's geometric parameters; a direction-selective frequency-domain enhancement module, arranged at the tail of the encoder, which performs a Fourier transform along the angular dimension of the projection data only and dynamically generates a tangential high-frequency compensation mask based on the limited-angle coverage; and a Radon-consistency-aware attention module, inserted after the third encoder layer, which extracts spatial and channel attention weights through dual-path feature extraction, generates consistency correction factors through lightweight forward-projection verification, and dynamically adjusts the final attention weights.
- 6. The CT reconstruction method based on data rearrangement and deep learning angle extrapolation according to claim 5, wherein the Radon-consistency-aware attention module works as follows: the input features are convolved through a spatial path to generate a spatial attention weight map; the input features undergo global average pooling and a fully connected operation through a channel path to generate a channel attention weight map; the spatial attention weight map is multiplied by the channel attention weight map to obtain a combined attention weight map; a local image reconstruction is performed on the current features, a forward projection is executed, the residual against the known projection region is computed, and a consistency correction factor is generated; and the combined attention weight map is modulated by the consistency correction factor, the modulated weight map is multiplied with the original features, and the enhanced features are output.
- 7. The CT reconstruction method based on data rearrangement and deep learning angle extrapolation according to claim 4, wherein in the cross-attention mechanism of the decoder, the query vectors come from the decoder state and the key and value vectors come from the encoder output; a physical projection constraint operator is embedded in the middle layers of the decoder to force the projection data generated by the network to satisfy the line-integral physical model of the Radon transform; and the network adopts an autoregressive prediction strategy to progressively generate the projection data of the missing angles, finally outputting 180° full-angle projection data.
- 8. The CT reconstruction method based on data rearrangement and deep learning angle extrapolation according to claim 1, wherein the dual-domain iterative optimization network in step 5 is a multi-scale feature-fusion network, MSF-Net, comprising: (a) a multi-scale feature-extraction module comprising three parallel convolution branches with 3×3, 5×5, and 7×7 convolution kernels for detail sharpening, streak-artifact suppression, and low-frequency distortion restoration, respectively, the outputs of the branches being adaptively fused via gating weights generated from the gradient histogram of the initial image; and (b) an adaptive regularization mechanism: when the pixel gradient magnitude exceeds a preset edge threshold, the regularization constraint is relaxed to protect image edges; otherwise, the constraint is strengthened to suppress noise.
- 9. The CT reconstruction method based on data rearrangement and deep learning angle extrapolation of claim 8, wherein steps 3-5 employ an end-to-end joint training strategy with a total loss function L_total = L_proj + L_img + L_cons, where L_proj is a projection-domain loss measuring the difference between the projection data predicted by the network at the missing angles and the real projection data at those angles; L_img is an image-domain loss comprising the difference between the reconstructed image output by MSF-Net and the real image, together with a differentiable adaptive total-variation regularization term; and L_cons is a dual-domain consistency loss expressed as L_cons = ‖A(f_opt) − P_dense‖², where P_dense is the complete projection set containing the original and extrapolated data, f_opt is the optimized reconstructed image, and A is the forward-projection operator; the physics-aware Transformer network and MSF-Net are optimized simultaneously through back-propagation.
- 10. A CT reconstruction system based on data rearrangement and deep learning angle extrapolation for performing the CT reconstruction method of any one of claims 1-9, comprising: a scanning device configured with at least three X-ray sources arranged at a specific spacing and inclination angle and a high-sensitivity area-array detector, for performing the STCT source-translation scan; a data processing unit for performing the data rearrangement, deep learning angle extrapolation, image reconstruction, and image post-processing; and a control unit connected to the scanning device and the data processing unit for coordinating them to realize "motion-pause-acquisition" cycle control.
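To make the index rearrangement of claim 3 concrete, the following is a minimal numpy sketch, under the assumption that the rebinning amounts to exchanging the source and detector indices and reversing both orders (the weighting by X-ray attenuation characteristics mentioned in the claim is omitted); the function name and toy array are illustrative only:

```python
import numpy as np

def rebin_projections(R):
    """Rearrange truncated STCT projections into the virtual geometry.

    R: array of shape (N, M) -- N source sampling points x M detector pixels.
    Returns an array of shape (M, N): the transpose swaps the roles of
    source points and detector pixels, and the two flips reverse both
    index orders (1-based index i -> size + 1 - i).
    """
    return R.T[::-1, ::-1]

# Toy example: 3 source positions x 4 detector pixels.
R = np.arange(12).reshape(3, 4)
G = rebin_projections(R)
```

After rebinning, entry G[0, 0] holds the ray that was originally indexed by the last source position and the last detector pixel, consistent with the R(M+1−m, N+1−n) reindexing.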
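The encoder layer of claim 4 can be sketched as a plain numpy multi-head self-attention with 8 heads; the random weight matrices stand in for learned parameters, and the layer normalization, feed-forward network, and residual connections of a full Transformer layer are omitted for brevity:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads=8, rng=None):
    """One self-attention layer with 8 heads, as in claim 4.

    x: (seq_len, d_model); d_model must be divisible by num_heads.
    """
    rng = rng or np.random.default_rng(0)
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    # Random stand-ins for the learned Q/K/V/output projections.
    Wq, Wk, Wv, Wo = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                      for _ in range(4)]

    def split(h):  # (seq_len, d_model) -> (num_heads, seq_len, d_head)
        return h.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(x @ Wq), split(x @ Wk), split(x @ Wv)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)  # (heads, seq, seq)
    out = softmax(scores) @ v                            # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo
```

In the rearranged-projection setting, each sequence position would correspond to one equivalent scanning angle, so attention mixes information across angles before extrapolation.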
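The workflow of the Radon-consistency-aware attention module in claim 6 can be illustrated step by step as follows; the "forward projection" here is a toy row-sum operator standing in for the real projector, and all weights are random placeholders rather than the patent's learned parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def radon_consistency_attention(feat, known_proj, rng=None):
    """Toy sketch of the attention workflow in claim 6.

    feat: feature tensor of shape (C, H, W).
    known_proj: known projection values of shape (H,).
    """
    rng = rng or np.random.default_rng(0)
    C, H, W = feat.shape

    # Spatial path: a 1x1 "convolution" across channels -> spatial weight map.
    w_spatial = sigmoid(np.tensordot(rng.standard_normal(C), feat, axes=1))  # (H, W)

    # Channel path: global average pooling + fully connected layer.
    gap = feat.mean(axis=(1, 2))                            # (C,)
    w_channel = sigmoid(rng.standard_normal((C, C)) @ gap)  # (C,)

    # Combined attention map = channel weights x spatial weights.
    attn = w_channel[:, None, None] * w_spatial[None]       # (C, H, W)

    # Lightweight forward-projection check: build a rough image (channel
    # mean), project it (toy row sums), compare against the known region.
    rough_img = feat.mean(axis=0)                           # (H, W)
    residual = rough_img.sum(axis=1) - known_proj           # (H,)
    correction = np.exp(-np.abs(residual))                  # per-row, in (0, 1]

    # Modulate attention by consistency, then re-weight the features.
    attn = attn * correction[None, :, None]
    return feat * attn
```

Rows whose toy projection disagrees with the known data receive a correction factor below 1, so their attention is damped, mirroring the claim's consistency modulation.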
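The edge-adaptive regularization behaviour described in part (b) of claim 8 can be sketched as a per-pixel weight map; the threshold and weight values below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def adaptive_tv_weights(img, edge_thresh=0.1, w_edge=0.01, w_flat=0.5):
    """Per-pixel regularization weight: weak on edges, strong in flat areas.

    gy, gx are central-difference image gradients along rows and columns.
    """
    gy, gx = np.gradient(img)
    grad_mag = np.hypot(gx, gy)
    # Relax the constraint where the gradient magnitude marks an edge
    # (to protect it), strengthen it elsewhere to suppress noise.
    return np.where(grad_mag > edge_thresh, w_edge, w_flat)
```

On an image with a vertical step edge, the weight map drops to the small edge value along the step and stays at the larger flat-region value elsewhere.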
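The joint loss of claim 9 can be sketched as follows, with a toy two-view forward projector standing in for the operator A and an L2 form assumed for each term:

```python
import numpy as np

def forward_project(f):
    """Toy stand-in for the forward-projection operator A:
    row sums (0 deg) and column sums (90 deg), stacked."""
    return np.stack([f.sum(axis=1), f.sum(axis=0)])

def total_loss(pred_proj, true_proj, f_opt, f_true, P_dense, lam_tv=1e-3):
    """L_total = L_proj + L_img + L_cons, each term in an assumed L2 form."""
    # Projection-domain loss at the missing angles.
    L_proj = np.mean((pred_proj - true_proj) ** 2)
    # Image-domain loss: data term plus a differentiable TV surrogate.
    gy, gx = np.gradient(f_opt)
    L_img = np.mean((f_opt - f_true) ** 2) + lam_tv * np.mean(np.hypot(gx, gy))
    # Dual-domain consistency: re-project the optimized image and
    # compare with the dense projection set (L_cons = ||A(f_opt) - P_dense||^2).
    L_cons = np.mean((forward_project(f_opt) - P_dense) ** 2)
    return L_proj + L_img + L_cons
```

When the prediction matches the ground truth and the re-projected image matches P_dense, all three terms vanish, which is the fixed point the joint training drives toward.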
Description
CT reconstruction method and system based on data rearrangement and deep learning angle extrapolation Technical Field The invention relates to the technical field of computed tomography (CT), and in particular to a CT reconstruction method and system based on data rearrangement and deep learning angle extrapolation. Background Traditional CT scanning is limited by the detector field of view (FOV), making it difficult to fully image large-sized objects. In STCT, the X-ray source is translated along a linear trajectory and projection data are acquired at multiple locations, which effectively expands the usable FOV. However, this approach causes each projection to be locally truncated and, due to radiation-dose or mechanical limitations, projections can often only be acquired over a limited angular range (e.g., ±60°), resulting in severe streak artifacts and missing structures. Existing methods such as rFBP can handle truncation but rely on full-angle coverage; deep learning methods can be used for angle extrapolation, but generic GAN or Transformer networks are mostly applied directly, without considering the unique geometry, projection physics model, and artifact-formation mechanism of STCT, so the extrapolation results violate Radon-transform consistency and the reconstructed images exhibit non-physical solutions or detail distortion. Therefore, there is a need for a reconstruction framework that deeply fuses CT imaging physical priors with deep learning capability, with end-to-end optimization for the compound challenge (truncation + limited angle) of STCT.
Disclosure of Invention Aiming at the problem of image artifacts under limited-angle scanning, the invention provides a CT reconstruction method and a CT reconstruction system based on data rearrangement and deep learning angle extrapolation, which combine source-translation CT (STCT), data rearrangement, physics-guided deep learning angle extrapolation, and dual-domain collaborative optimization to realize high-quality three-dimensional image reconstruction under limited-angle and truncated-projection conditions. In order to achieve the above purpose, the technical scheme adopted is as follows: the invention provides a CT reconstruction method based on data rearrangement and deep learning angle extrapolation, comprising the following steps: Step 1, STCT source-translation scanning: at least three X-ray sources arranged at a specific spacing and inclination angle are controlled, with a single high-sensitivity area-array detector fixedly arranged on the opposite side; the X-ray sources translate along a linear track, and projection data are collected at a plurality of sampling positions to form a truncated projection set under a limited angle; Step 2, data rearrangement: actual ray-source sampling points and detector pixels are placed in opposing roles to construct a virtual scan geometry, and the truncated projection set is recombined into a global projection; Step 3, deep learning angle extrapolation: the rearranged limited-angle global projection is input into a physics-aware Transformer network, projection data for the missing angles are predicted, and a complete full-angle projection set is formed; Step 4, image reconstruction: filtered back-projection reconstruction is performed on the complete full-angle projection set to obtain an initial reconstructed image; and Step 5, image post-processing: the initial reconstructed image is input into a dual-domain iterative optimization network, and a final reconstructed image is iteratively output through image-domain optimization, projection-domain optimization, and a projection-domain/image-domain dual-domain consistency constraint. According to the CT reconstruction method based on data rearrangement and deep learning angle extrapolation, in step 1 the geometric parameters of the scan include the radiation-source spacing distance s, the detector length d, the source-to-object distance l, and the object-to-detector distance h. According to the CT reconstruction method based on data rearrangement and deep learning angle extrapolation, the data rearrangement in step 2 specifically comprises: mapping the actual ray-source sampling point sequence {S_n | n = 1, 2, …, N} to a virtual detector array and mapping the actual detector pixel sequence {D_m | m = 1, 2, …, M} to a virtual ray-source sampling point sequence, where N is the number of ray-source sampling points and M is the number of detector pixels; mapping each actual ray-source sampling point S_n to a corresponding virtual pixel and each actual detector pixel D_m to a corresponding virtual source sampling point; and recombining the original truncated projection data R(n, m) into global projection data R(M+1−m, N+1−n) according to the X-ray attenuation characteristics. According to the CT reconstruction method base