
CN-121998816-A - CTP image rapid registration method based on deep learning model and GPU parallel computation

CN121998816A

Abstract

The application relates to the technical field of deep learning and provides a rapid CTP image registration method based on a deep learning model and GPU parallel computation. The method comprises: adjusting the number of layers and computing units of a three-dimensional deep learning network model to balance the model's computing capacity against its computational load, obtaining an optimized three-dimensional deep learning network model; and judging through gray-variance detection whether residual noise remains in the preliminary registration image sequence and, if so, further iteratively optimizing the displacement vectors according to the dense displacement field to obtain the final registration image sequence. This scheme markedly improves both the accuracy and the speed of medical image registration and provides reliable support for clinical diagnosis and treatment planning.

Inventors

  • KE XIAOWEN
  • NIU DAOHENG
  • LIU XIA
  • LIU YAZHI

Assignees

  • 国药通用(深圳)医疗影像有限公司

Dates

Publication Date
2026-05-08
Application Date
2025-12-10

Claims (8)

  1. A rapid CTP image registration method based on a deep learning model and GPU parallel computation, characterized by comprising the following steps: acquiring a CTP image sequence, wherein the CTP image sequence comprises a reference image and a plurality of images to be registered, each image to be registered corresponding to the scan data of one time phase, so as to obtain a CTP image sequence set containing the reference image, the images to be registered, and their time-phase mapping relationship; adjusting the number of layers and the number of computing units of a three-dimensional deep learning network model to balance the model's computing capacity against its computational load, obtaining an optimized three-dimensional deep learning network model; processing the CTP image sequence set with the optimized three-dimensional deep learning network model, splicing the reference image and an image to be registered along the channel dimension as input and extracting multi-scale features, to obtain a multi-scale feature map containing difference information; performing dimension reduction and high-level feature aggregation on the multi-scale feature map through the downsampling path of the optimized model, computing image gray-level differences and position differences with a convolution contrast layer built into the network to generate a depth representation, and generating from that representation a dense displacement field containing the initial three-dimensional displacement of every voxel; determining from the dense displacement field the displacement vector of each voxel in three-dimensional space, the displacement vector guiding the voxels of the image to be registered toward positions aligned with the reference image, so as to obtain a voxel alignment rule; adjusting the inference batch-size parameter through a batch-processing mechanism and processing multiple image pairs in a single inference pass on a parallel computing architecture, realizing dual parallelism both inside the algorithm and across image pairs, and completing the preliminary alignment of the images to be registered with the reference image according to the voxel alignment rule, to obtain a preliminary registration image sequence; and judging through gray-variance detection whether residual noise remains in the preliminary registration image sequence and, if so, further iteratively optimizing the displacement vectors according to the dense displacement field, to obtain a final registration image sequence.
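The core mechanics of claim 1 — splicing the reference/moving pair along the channel dimension and warping each voxel of the moving image by a dense displacement field — can be sketched as follows. This is an illustrative NumPy/SciPy sketch, not the patented network: the function names and the `(3, D, H, W)` displacement layout are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def stack_pair(reference, moving):
    # Channel-dimension splicing: a (2, D, H, W) array feeds the 3D network
    return np.stack([reference, moving], axis=0)

def warp_volume(moving, displacement):
    # displacement: (3, D, H, W) per-voxel offsets along (z, y, x);
    # each output voxel samples the moving image at its displaced
    # position using trilinear interpolation (order=1)
    grid = np.indices(moving.shape).astype(np.float64)
    return map_coordinates(moving, grid + displacement, order=1, mode='nearest')
```

With a zero displacement field the warp is the identity, which is a handy sanity check before plugging in a predicted field.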
  2. The rapid CTP image registration method based on a deep learning model and GPU parallel computation according to claim 1, wherein acquiring the CTP image sequence, the CTP image sequence comprising a reference image and a plurality of images to be registered, each image to be registered corresponding to the scan data of one time phase, and obtaining the CTP image sequence set containing the reference image, the images to be registered, and the time-phase mapping relationship, comprises: acquiring time-phase scan data through the scanning equipment to obtain an initial image sequence comprising the reference image and the images to be registered; applying median filtering to the images to be registered as noise-suppression filtering, the median filter sorting the values in each pixel neighborhood and replacing the center pixel with the median, to obtain noise-suppressed images; applying histogram equalization to the reference image as contrast enhancement, the equalization redistributing pixel values through the cumulative distribution function; computing the cosine similarity between the pixel distributions of the enhanced reference image and each noise-suppressed image and, if the similarity is greater than or equal to a preset similarity threshold, establishing a correspondence, thereby obtaining the time-phase mapping relationship; and integrating the images of the initial sequence according to that mapping to obtain the CTP image sequence set containing the reference image, the images to be registered, and the time-phase mapping relationship.
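Claim 2's preprocessing chain — median filtering, CDF-based histogram equalization, and cosine similarity for establishing the time-phase mapping — might look like this minimal NumPy/SciPy sketch. The bin count, filter size, and helper names are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy.ndimage import median_filter

def suppress_noise(image, size=3):
    # Median filtering: sort each pixel neighborhood, keep the median
    return median_filter(image, size=size)

def equalize(image, bins=256):
    # Histogram equalization via the cumulative distribution function
    hist, edges = np.histogram(image.ravel(), bins=bins)
    cdf = hist.cumsum() / hist.sum()
    return np.interp(image.ravel(), edges[:-1], cdf).reshape(image.shape)

def cosine_similarity(a, b):
    # Similarity of two pixel distributions, used to decide the mapping
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A phase image would be paired with the reference when `cosine_similarity` meets the preset threshold, per the claim.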
  3. The rapid CTP image registration method based on a deep learning model and GPU parallel computation according to claim 1, wherein adjusting the number of layers and the number of computing units of the three-dimensional deep learning network model, balancing the model's computing capacity against its computational load, and obtaining the optimized three-dimensional deep learning network model comprises: acquiring the initial number of layers and computing units of the three-dimensional deep learning network model, adopting a convolutional neural network as the basic framework, and determining the current computing-capacity level by evaluating the combined effect of the layer count, unit count, and convolution kernel size; analyzing the distribution of computational load at that capacity level and adjusting the number of computing units to balance resource consumption, obtaining a preliminarily optimized network architecture; monitoring performance indicators of the preliminarily optimized architecture and restructuring layers according to the monitoring results, reassessing the number of integrated units whenever an indicator exceeds a preset threshold, to obtain balanced model parameters; and verifying the inference speed with the balanced model parameters to obtain the optimized three-dimensional deep learning network model.
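Claim 3 balances network capacity against computational load by tuning layer and unit counts. One hedged way to reason about that trade-off is to count parameters per 3D convolution layer; the helper names and the cubic-kernel assumption below are mine, not the patent's.

```python
def conv3d_params(in_ch, out_ch, k=3):
    # Weights (in * out * k^3) plus one bias per output channel
    return in_ch * out_ch * k ** 3 + out_ch

def model_cost(channels, k=3):
    # channels: per-layer channel counts, e.g. [2, 16, 32] for a model
    # whose input is the 2-channel spliced reference/moving pair
    return sum(conv3d_params(a, b, k) for a, b in zip(channels, channels[1:]))
```

Comparing `model_cost` for candidate architectures gives a crude proxy for the "computational load" the claim says must be balanced against capacity.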
  4. The rapid CTP image registration method based on a deep learning model and GPU parallel computation according to claim 1, wherein processing the CTP image sequence set with the optimized three-dimensional deep learning network model and extracting multi-scale features by splicing the reference image and an image to be registered along the channel dimension as input, so as to obtain a multi-scale feature map containing difference information, comprises: acquiring the CTP image sequence set, selecting a reference image and an image to be registered, and splicing them along the channel dimension to obtain a spliced input sequence; feeding the spliced input sequence to the optimized three-dimensional deep learning network model and extracting multi-scale features to obtain a preliminary multi-scale feature representation; fusing difference information into that representation and determining, by comparing pixel changes across the sequences, a multi-scale feature map containing vascular perfusion differences; analyzing the correspondence between sequences from that feature map and, where the correspondence meets a preset threshold, adjusting the feature scale to obtain an adjusted feature map; and verifying the integrity of the difference information and the consistency of the pixel distribution on the adjusted feature map to obtain the final multi-scale feature map.
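A minimal stand-in for claim 4's multi-scale representation is an average-pooling pyramid over the volume. This is purely illustrative: the patented network learns its multi-scale features through convolution, whereas this sketch only shows the coarse-to-fine structure.

```python
import numpy as np

def avg_pool2(vol):
    # Halve each spatial axis by averaging non-overlapping 2x2x2 blocks
    d, h, w = (s // 2 for s in vol.shape)
    return vol[:2 * d, :2 * h, :2 * w].reshape(d, 2, h, 2, w, 2).mean(axis=(1, 3, 5))

def multiscale(vol, levels=3):
    # Level 0 is the input resolution; each level halves the grid
    pyramid = [vol]
    for _ in range(levels - 1):
        pyramid.append(avg_pool2(pyramid[-1]))
    return pyramid
```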
  5. The rapid CTP image registration method based on a deep learning model and GPU parallel computation according to claim 1, wherein performing dimension reduction and high-level feature aggregation on the multi-scale feature map through the downsampling path of the optimized three-dimensional deep learning network model, computing image gray-level and position differences with the convolution contrast layer built into the network, generating a depth representation, and generating from it the dense displacement field containing the initial three-dimensional displacement of each voxel, comprises: acquiring downsampling-path data from the multi-scale feature map and aggregating high-level features through layer-by-layer convolution and dimension reduction, to obtain an aggregated feature representation; computing gray-level and position differences on the aggregated representation with the convolution contrast layer, whose input is the feature representation and whose output is the difference values, to obtain a difference-fused depth representation; generating the dense displacement field from that depth representation and determining the initial three-dimensional displacement of each voxel; analyzing the inter-sequence registration relation through the dense displacement field, the relation being computed by comparing voxel correspondences, and adjusting voxel displacements where the relation meets a preset threshold, to obtain a corrected displacement field; and using the corrected displacement field for perfusion analysis, judged by computing hemodynamic parameters including cerebral blood flow and cerebral blood volume, and, where the analysis results are consistent, splicing the fused images to obtain the dense displacement field containing voxel-level deformation corrections.
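Claim 5's "convolution contrast layer" outputs gray-level and position differences between feature maps. As a hedged stand-in (the real layer is learned end-to-end; this hand-written version is only an illustration), the gray difference can be taken voxel-wise and the position difference as the offset between intensity centroids:

```python
import numpy as np

def contrast_features(ref_feat, mov_feat):
    # Gray-level difference: plain voxel-wise subtraction
    gray_diff = mov_feat - ref_feat

    def centroid(v):
        # Intensity-weighted center of mass, a coarse position summary
        g = np.indices(v.shape).reshape(3, -1).astype(np.float64)
        w = v.ravel().astype(np.float64)
        return g @ w / (w.sum() + 1e-12)

    pos_diff = centroid(mov_feat) - centroid(ref_feat)
    return gray_diff, pos_diff
```

A single bright voxel shifted by one slice yields a centroid offset of one voxel along that axis, which is the kind of "position difference" signal the depth representation would encode.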
  6. The rapid CTP image registration method based on a deep learning model and GPU parallel computation according to claim 1, wherein determining from the dense displacement field the displacement vector of each voxel in three-dimensional space, the displacement vector guiding the voxels of the image to be registered toward positions aligned with the reference image, and obtaining the voxel alignment rule comprises: obtaining each voxel's displacement vector from the dense displacement field and determining the voxel's initial position in three-dimensional space; adjusting the voxel coordinates of the image to be registered by comparison against the reference-image coordinates, computing the displacement deviation from the comparison and correcting the coordinate values, to obtain an adjusted voxel movement path; acquiring multi-sequence image fusion data along the adjusted path and judging the registration relation between the fusion data and the reference image by comparing voxel positions; and generating a spatial alignment rule from that registration relation and applying it to the voxel positions to determine each voxel's aligned position, thereby obtaining the voxel alignment rule.
  7. The rapid CTP image registration method based on a deep learning model and GPU parallel computation according to claim 1, wherein adjusting the inference batch-size parameter through the batch-processing mechanism, processing multiple image pairs in a single inference pass on the parallel computing architecture, realizing dual parallelism inside the algorithm and across image pairs, and completing the preliminary alignment of the images to be registered with the reference image according to the voxel alignment rule, so as to obtain the preliminary registration image sequence, comprises: dynamically adjusting the inference batch-size parameter through the batch-processing mechanism, the adjustment ratio being determined by comparing the number of image pairs against a preset threshold, to obtain adjusted batch parameters; allocating threads within GPU thread blocks to parallelize the convolution operations under the adjusted batch parameters, and synchronizing operations across image pairs through GPU streams, to obtain a batched parallel result; realizing, from that result, the dual mechanism of intra-algorithm convolution parallelism and inter-pair batch parallelism, and determining the fused data of the multiple processed pairs; mapping each voxel position of the fused data according to the voxel alignment rule to complete the preliminary alignment of the image to be registered with the reference image, obtaining a preliminarily aligned image; and correcting that image by comparison against the reference-image pixel values to obtain the preliminary registration image sequence.
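Claim 7's dynamic batch-size adjustment compares the number of queued image pairs against a preset threshold. The rule below is a hypothetical sketch of that policy; the actual threshold, scale factor, and adjustment ratio are not disclosed in the patent.

```python
def adjust_batch_size(num_pairs, base_batch=8, threshold=32, scale=2):
    # Enlarge the inference batch when many pairs are queued, but never
    # exceed the number of pairs actually available (avoids idle units
    # on small queues and memory pressure on large ones)
    if num_pairs >= threshold:
        return min(base_batch * scale, num_pairs)
    return min(base_batch, max(num_pairs, 1))
```

On real hardware the returned batch would then be dispatched across GPU streams, one stream per sub-batch, giving the inter-pair half of the claim's dual parallelism.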
  8. The rapid CTP image registration method based on a deep learning model and GPU parallel computation according to claim 1, wherein judging through gray-variance detection whether residual noise remains in the preliminary registration image sequence and, if so, further iteratively optimizing the displacement vectors according to the dense displacement field to obtain the final registration image sequence comprises: judging the presence of noise in the preliminary registration image sequence by combining gray-variance computation with noise-type recognition, and deriving a residual-noise distribution map from the gray-value differences of each pixel; iteratively optimizing the displacement vectors against the residual-noise distribution map according to the dense displacement field, adjusting them point by point to obtain an adjusted displacement-vector group; and performing sequential image alignment at the fused-image level, obtaining the final registration image sequence from pixel-alignment correction.
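In its simplest reading, claim 8's residual-noise check thresholds the gray variance of a residual image and keeps refining while the check fails. The damping loop below only illustrates that stopping logic; the patent refines displacement vectors point by point, which this toy does not model, and the threshold and damping factor are invented for the example.

```python
import numpy as np

def gray_variance(image):
    # Variance of gray values, used as a residual-noise indicator
    return float(np.var(image))

def iterative_refine(residual, damping=0.5, threshold=1e-3, max_iter=10):
    # Keep refining while the residual's variance exceeds the threshold;
    # each damping step stands in for one displacement-vector update
    iterations = 0
    while gray_variance(residual) > threshold and iterations < max_iter:
        residual = residual * damping
        iterations += 1
    return residual, iterations
```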

Description

CTP image rapid registration method based on deep learning model and GPU parallel computation

Technical Field

The invention relates to the technical field of deep learning, and in particular to a rapid CTP image registration method based on a deep learning model and GPU parallel computing.

Background

When processing brain perfusion imaging data, commercial CTP software applies registration to the non-rigid registration problem of time-series images, but faces a distinctive technical challenge: achieving registration accuracy and computational efficiency simultaneously on high-dimensional time-series data. In particular, when processing large-scale patient data, the algorithm must retain its ability to capture fine brain-tissue deformation while avoiding the processing delays caused by limited computing resources. Specifically, commercial CTP software uses a rigid registration algorithm based on image gray levels. During a CTP scan the contrast-agent concentration changes over time, so the pixel values of a Tn image differ greatly from those of the T0 image, degrading the accuracy of gray-level registration; moreover, the patient's head does not remain still during scanning, so images within a time phase become misaligned and undergo non-rigid elastic deformation that rigid registration cannot fully correct, while traditional non-rigid registration algorithms run too long to be commercially practical. In a typical scenario, a hospital imaging department processes the brain perfusion data of hundreds of patients every day, each patient contributing tens of three-dimensional time-phase images; the data volume is huge, and the accuracy requirement on the registration result is extremely high, since sub-voxel precision is needed to help physicians identify abnormal cerebral blood-flow regions.
However, the prior art faces multiple contradictions when handling such data. On one hand, a deep learning model must capture complex deformation through multi-scale feature extraction, but increasing the number of layers greatly increases the computational load and prolongs single-inference time, making clinical real-time requirements hard to meet; on the other hand, reducing model complexity to gain speed may lose key deformation information and increase registration error, especially in brain-tissue boundary regions. In addition, although a batch-processing mechanism can raise throughput, dynamically adjusting the batch size to fit different hardware resources while avoiding both memory overflow and idle computing units remains difficult. Furthermore, when fine noise persists after registration, designing an efficient iterative optimization mechanism that corrects the displacement vectors without significantly increasing the computational burden is another open challenge. A deep-learning CNN model, acting as an automatic feature extractor, can realize end-to-end non-rigid registration and eliminate the pixel variation caused by changing contrast-agent concentration, thereby significantly improving diagnostic efficiency and accuracy in high-load clinical image-processing scenarios.
Disclosure of Invention

The invention provides a rapid CTP image registration method based on a deep learning model and GPU parallel computation, which mainly comprises the following steps: acquiring a CTP image sequence comprising a reference image and a plurality of images to be registered, each image to be registered corresponding to the scan data of one time phase, so as to obtain a CTP image sequence set containing the reference image, the images to be registered, and their time-phase mapping relationship; adjusting the number of layers and computing units of a three-dimensional deep learning network model to balance the model's computing capacity against its computational load, obtaining an optimized three-dimensional deep learning network model; processing the CTP image sequence set with the optimized model, splicing the reference image and an image to be registered along the channel dimension as input, and extracting multi-scale features to obtain a multi-scale feature map containing difference information; performing dimension reduction and high-level feature aggregation on the multi-scale feature map through the downsampling path of the optimized model, computing image gray-level and position differences with the convolution contrast layer built into the network, generating a depth representation, and generating a dense displacement field according to the depth representation