
CN-122025037-A - CT image-based white matter fiber bundle reconstruction method for cerebral hemorrhage patient


Abstract

The invention belongs to the technical field of medical image data processing, and particularly relates to a CT image-based white matter fiber bundle reconstruction method for cerebral hemorrhage patients. The method comprises the steps of constructing training data from paired CT and dMRI data, building a reconstruction network from a segmentation model and a generation model, training the reconstruction network, and using the trained network to reconstruct white matter fiber bundles. The invention provides a CT image-based white matter fiber bundle reconstruction method for cerebral hemorrhage patients which can reconstruct white matter fiber bundles directly from CT images without depending on dMRI, and which provides useful guidance for the clinical diagnosis and treatment planning of cerebral hemorrhage.

Inventors

  • Zhang Fan
  • Huang Guanlin
  • Li Diejie

Assignees

  • University of Electronic Science and Technology of China

Dates

Publication Date
2026-05-12
Application Date
2026-01-14

Claims (1)

  1. A method for reconstructing white matter fiber bundles of a cerebral hemorrhage patient based on CT images, characterized by comprising the following steps:

     S1, preparing and preprocessing a dataset: acquiring paired CT and dMRI data and preprocessing the acquired paired data, specifically performing brain mask extraction and head-motion correction on the dMRI data, performing brain mask extraction on the CT data, and registering the dMRI data and the CT data to the standard MNI space using the ANTs tool;

     S2, constructing a reconstruction network, wherein the reconstruction network comprises a segmentation model and a generation model;

     the segmentation model is based on the SwinUNETR structure and takes single-channel CT data as input; a 3 x 3 convolution layer first performs preliminary feature extraction and downsampling, mapping the original single-channel CT data into a 48-channel high-dimensional feature map that provides token representations for the subsequent Transformer; the data then enter the encoder, which consists of 4 cascaded Swin Transformer blocks that extract local features through a windowed self-attention mechanism, each block internally containing a two-layer multi-layer perceptron (MLP) and a residual connection, the feature channel numbers of the 4 encoding blocks being 48, 96, 192 and 384 in sequence; in the decoding stage, the multi-scale features output by the encoder are passed to the corresponding decoding layers through skip connections to fuse high-level semantic information with low-level spatial detail; the decoder adopts a progressive upsampling strategy, gradually restoring the feature-map size through transposed convolutions while halving the number of channels at each layer;

     the generation model is based on the cGAN structure and consists of a generator and a discriminator; the generator is a three-dimensional attention U-Net that takes the CT data and the corresponding TOM mask as input, the two being concatenated along the channel dimension to form the conditional input; the encoder of the generator consists of 4 progressively downsampling convolution layers, each using a 3 x 3 convolution kernel with stride 2, the channel numbers of the 4 encoding layers being 128, 256, 512 and 1024; as the network deepens, the spatial resolution of the feature map decreases while the number of feature channels increases, progressively abstracting the image from local detail to high-level semantic information; the decoder of the generator mirrors the encoder hierarchy and restores the spatial resolution of the feature map by progressive upsampling, and an attention gating mechanism is introduced at each level of the decoder to weight the features from the corresponding encoder layer; the discriminator is a three-dimensional PatchGAN network imposing an adversarial constraint on the generated result; it takes the TOM data produced by the generator and the TOM data prepared in S1 as input, the input features first passing through a first three-dimensional convolution layer with downsampling to build an initial discriminative feature representation, then through second and third three-dimensional convolution units in sequence, each layer progressively enlarging the receptive field and strengthening channel expressiveness during downsampling, the channel numbers of the three layers being 64, 128 and 256 in sequence, so that the discriminating network models features of local spatial regions of the input volume through multi-layer cascaded convolution and downsampling;

     S3, training the constructed reconstruction network: inputting the dataset obtained in S1 into the segmentation model, outputting a TOM mask, and inputting the CT data and the TOM mask into the generation model for training to obtain an optimal parameter model, thereby obtaining a trained reconstruction network; during training of the segmentation model, a weighted composite loss function formed by the Dice loss and the Tversky loss constrains the difference between the model output and the ground-truth mask, the weights of the Dice loss and the Tversky loss being set to 0.7 and 0.3 respectively, so that they jointly guide the model toward an optimized segmentation result; the optimization objective of the generation model is a weighted combination of a reconstruction loss, a perceptual loss and an adversarial loss, wherein the reconstruction loss constrains voxel-level consistency between the generated result and the real TOM, the perceptual loss enhances the similarity of high-level structural features, and the adversarial loss improves the overall realism of the generated result;

     S4, acquiring new CT data and reconstructing white matter fiber bundles with the trained reconstruction network: inputting the CT data into the segmentation model to obtain a TOM mask, inputting the TOM mask and the CT data into the generation model to obtain a TOM image, and finally reconstructing the fiber bundles.
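The weighted Dice + Tversky composite loss in step S3 can be sketched as follows. The 0.7/0.3 weighting is taken from the claim; the Tversky alpha/beta values and the NumPy formulation are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    # Soft Dice loss: 1 - 2*|P∩T| / (|P| + |T|)
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    # Tversky index: TP / (TP + alpha*FP + beta*FN); alpha/beta here are
    # illustrative defaults -- the patent does not specify them.
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1.0 - target))
    fn = np.sum((1.0 - pred) * target)
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def composite_loss(pred, target, w_dice=0.7, w_tversky=0.3):
    # Weighted combination as specified in step S3 (0.7 Dice + 0.3 Tversky).
    return w_dice * dice_loss(pred, target) + w_tversky * tversky_loss(pred, target)
```

For a prediction identical to the target the composite loss approaches 0, and it grows as false positives and false negatives accumulate; with beta > alpha, the Tversky term penalizes missed voxels (false negatives) more heavily.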

Description

CT image-based white matter fiber bundle reconstruction method for cerebral hemorrhage patient

Technical Field

The invention belongs to the technical field of medical image data processing, and particularly relates to a CT image-based white matter fiber bundle reconstruction method for cerebral hemorrhage patients.

Background

Intracerebral hemorrhage (ICH) is a serious neurosurgical emergency with high disability and mortality rates, and its treatment is critical for preventing continued bleeding and reducing neuronal damage. In the neurosurgical management of cerebral hemorrhage, it is important to accurately identify and evaluate critical white matter fiber bundles such as the corticospinal tract (CST), which is closely related to the patient's motor function; evaluating its integrity has a decisive influence on surgical planning and the patient's postoperative functional recovery. Tractography is an advanced neuroimaging technique and is currently the only method capable of noninvasively displaying the structure and course of white matter fiber bundles in the living human brain. Tractography reconstructs the three-dimensional trajectories of white matter fiber pathways by tracking the diffusion direction of water molecules. Traditional methods estimate fiber orientation from diffusion magnetic resonance imaging (dMRI) data, using techniques such as the classical diffusion tensor model and advanced methods such as constrained spherical deconvolution (CSD), multi-fiber models, and global tractography. Because their models are complex, conventional methods typically require a significant amount of computation time.
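As context for the classical diffusion tensor model mentioned above, the local fiber direction in a voxel is conventionally taken as the principal eigenvector of the 3 x 3 diffusion tensor, with fractional anisotropy (FA) quantifying directional coherence. A minimal sketch of these standard DTI formulas (general background, not the invention's method):

```python
import numpy as np

def principal_direction_and_fa(D):
    """Principal fiber direction and fractional anisotropy of a 3x3 diffusion tensor."""
    evals, evecs = np.linalg.eigh(D)           # eigenvalues in ascending order
    l1, l2, l3 = evals[2], evals[1], evals[0]  # l1 = largest
    md = (l1 + l2 + l3) / 3.0                  # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = np.sqrt(1.5 * num / den) if den > 0 else 0.0
    return evecs[:, 2], fa                     # eigenvector of the largest eigenvalue
```

A strongly anisotropic tensor such as diag(1.7, 0.3, 0.3) yields a principal direction along the x-axis with high FA, whereas an isotropic tensor gives FA = 0 and no preferred direction.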
To overcome this problem, deep learning-based tractography methods have emerged in recent years. TractSeg, one of the most commonly used methods at present, does not rely on model fitting but instead estimates tract orientation maps (TOMs) with a neural network; a TOM provides the direction map of a specific fiber bundle, from which the white matter fiber bundle can be reconstructed. However, although this method achieves rapid and accurate fiber bundle reconstruction, its dependence on diffusion magnetic resonance data limits its application in emergency settings such as cerebral hemorrhage, given the long scanning time and high cost of diffusion MRI. Studies have explored fiber bundle reconstruction from other non-dMRI neuroimaging modalities, such as T1-weighted MRI and FLAIR MRI via recurrent neural networks (RNNs), demonstrating the feasibility of tractography from non-dMRI modalities. Reconstructing white matter fiber bundles from CT images of cerebral hemorrhage patients faces the following problems: (1) the soft-tissue contrast of CT images is low, so white matter pathways cannot be fully distinguished; (2) the mass effect of acute hemorrhage and surrounding edema in cerebral hemorrhage patients can alter the neuroanatomical structure; and (3) the paired CT-dMRI data required for deep learning training are limited. At present, no solution has been proposed for reconstructing white matter fiber bundles from CT images; existing studies on predicting the CST from CT scans of cerebral hemorrhage patients output only probability maps and cannot reconstruct specific white matter fiber bundle paths.
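A TOM of the TractSeg style stores one peak direction per voxel, and a streamline can then be reconstructed by deterministically stepping along the local peak. The following is a schematic illustration of that idea (simple nearest-voxel lookup with a fixed step size, assumed for illustration; it is not TractSeg's actual tracking code):

```python
import numpy as np

def track_streamline(tom, seed, step=0.5, max_steps=1000, min_norm=1e-3):
    """Follow per-voxel peak directions in a TOM volume of shape (X, Y, Z, 3)."""
    pos = np.asarray(seed, dtype=float)
    points = [pos.copy()]
    prev = None
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))   # nearest-voxel lookup
        if any(i < 0 or i >= s for i, s in zip(idx, tom.shape[:3])):
            break                                # left the volume
        d = tom[idx].astype(float)
        n = np.linalg.norm(d)
        if n < min_norm:
            break                                # no reliable peak in this voxel
        d /= n
        if prev is not None and np.dot(d, prev) < 0:
            d = -d                               # keep a consistent walking direction
        pos = pos + step * d
        points.append(pos.copy())
        prev = d
    return np.array(points)
```

In a synthetic TOM where every voxel points along +x, tracking from any interior seed produces a straight line that terminates at the volume boundary; real tracking additionally applies curvature and anatomical stopping criteria.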
Disclosure of Invention

In view of the above problems, the invention aims to provide a CT image-based white matter fiber bundle reconstruction method for cerebral hemorrhage patients. Addressing the lack of dMRI data in the clinical diagnosis of cerebral hemorrhage, a deep learning generation model is provided, realizing a method for reconstructing white matter fiber bundles directly from CT without dMRI data. The technical scheme of the invention is as follows: a method for reconstructing white matter fiber bundles of a cerebral hemorrhage patient based on CT images comprises the following steps: S1, preparing and preprocessing a dataset: acquiring paired CT and dMRI data and preprocessing the acquired paired data, specifically performing brain mask extraction and head-motion correction on the dMRI data, performing brain mask extraction on the CT data, and registering the dMRI data and the CT data to the standard MNI space using the ANTs tool; S2, constructing a reconstruction network, wherein the reconstruction network comprises a segmentation model and a generation model; the segmentation model is based on the SwinUNETR structure and takes single-channel CT data as input; a 3 x 3 convolution layer first performs preliminary feature extraction and downsampling, mapping the CT data of an original single channel into a high-dimensi