
CN-122000027-A - Multi-mode image analysis method and system for cancer diagnosis

CN 122000027 A

Abstract

The invention discloses a multi-modal image analysis method and system for cancer diagnosis, comprising the following steps: repairing an original projection sequence through a pre-trained self-supervised U-Net network to generate a repaired projection sequence; performing multi-modal image spatio-temporal alignment and metabolic kinetics modeling based on the repaired projection sequence and a preoperative image to generate a registered image and an intraoperative metabolic distribution map; generating an initial boundary map with a biologically plausible topological structure together with a corresponding confidence map and an uncertainty map; locating high-uncertainty regions by integrating the uncertainty map, and feeding them back to the repair process of the first step and the joint optimization network of the third step for targeted adjustment until the uncertainty satisfies a preset condition; and performing topological integrity verification on the initial boundary map to generate a diagnosis report containing spatially localized excision suggestions and confidence grading.

Inventors

  • LIU LIANG
  • DONG WEIJIA
  • LUO CHUAN
  • TANG XIFEI
  • DUAN JIAHUI
  • LIU TING
  • TAN DAIQI
  • CHENG HUI
  • LI BIN
  • YAN JINTAI
  • ZHANG YUANXIANG
  • YIN QING

Assignees

  • North Sichuan Medical College (川北医学院)

Dates

Publication Date
2026-05-08
Application Date
2026-01-26

Claims (8)

  1. A multi-modal image analysis method for cancer diagnosis, comprising the following steps: step one, collecting a two-dimensional projection sequence acquired intraoperatively by a gamma camera as an original projection sequence, and repairing the original projection sequence through a pre-trained self-supervised U-Net network to generate a repaired projection sequence; step two, performing multi-modal image spatio-temporal alignment and metabolic kinetics modeling based on the repaired projection sequence and a preoperative image to generate a registered image and an intraoperative metabolic distribution map; step three, processing the registered image and the voxel field reconstructed from the repaired projections using a joint optimization network embedded with differentiable topology constraints, and synchronously generating an initial boundary map with a biologically plausible topological structure together with a corresponding confidence map and an uncertainty map; step four, implementing multiple rounds of active refinement based on uncertainty propagation, locating high-uncertainty regions by integrating the uncertainty map, and feeding them back to the repair process of step one and the joint optimization network of step three for targeted adjustment until the uncertainty satisfies a preset condition; and step five, performing topological integrity verification on the initial boundary map, and generating a diagnosis report containing spatially localized excision suggestions and confidence grading by combining the adjustment results of step four with clinical information.
  2. The method of claim 1, wherein repairing the original projection sequence comprises: acquiring an original two-dimensional projection sequence collected by the intraoperative gamma camera within a preset rotation angle range; inputting the original two-dimensional projection sequence into a pre-trained self-supervised projection repair network, wherein the network adopts an encoder-decoder architecture, specifically a U-Net structure, is pre-trained on a large amount of unlabeled intraoperative projection data, and has the training objective of learning to recover complete projection data from degraded projections containing noise and angle loss; and processing the input sequence through the self-supervised projection repair network and directly outputting the repaired projection sequence.
  3. The method according to claim 2, wherein the multi-modal image spatio-temporal alignment and metabolic kinetics modeling of step two comprise: spatially aligning a preoperative CT or MRI image with an initial three-dimensional radioactivity distribution voxel field generated from the repaired projection sequence; establishing a partial differential equation model describing tracer diffusion and metabolism, and performing inversion optimization using SUV distribution parameters from the preoperative image to obtain patient-specific metabolic kinetic parameters; and performing time-domain extrapolation based on the kinetic parameters to predict and generate a metabolic distribution map at the intraoperative moment, thereby completing dynamic fusion registration of preoperative anatomical information and intraoperative functional information.
  4. The multi-modal image analysis method for cancer diagnosis according to claim 1, wherein the joint optimization network embedded with differentiable topology constraints specifically comprises: input data preparation and preprocessing, namely obtaining a corrected voxel field generated by iterative reconstruction from the repaired projection sequence together with the registered image produced by spatio-temporal alignment and metabolic kinetics modeling, and performing data standardization and channel alignment on both to form a dual-channel three-dimensional data block that a neural network can process directly; inputting the preprocessed corrected voxel field and registered image into two independent encoder paths of a three-dimensional segmentation network, wherein each encoder path consists of several cascaded convolution, normalization and nonlinear activation layers and extracts multi-scale deep features from the functional and anatomical images, and progressively merging the features of different scales from the dual paths in the network decoder through cross-path feature connections and up-sampling operations to generate a fused feature map containing multi-modal information; in the training stage, intercepting a tensor for topology analysis from the fused feature map or a specific intermediate feature map and feeding it to a differentiable persistent homology layer, which treats the tensor as a filtration function on a three-dimensional cubical complex and internally performs differentiable operations comprising constructing a boundary matrix, performing matrix reduction to compute persistence intervals, screening out topological features whose persistence exceeds a learnable threshold, and computing a topological regularization loss based on the screened features; and multi-task joint optimization and parameter learning, wherein the multi-task joint loss function comprises a standard segmentation loss computed from the segmentation probability map finally output by the network and the latent labels, the topological regularization loss produced by the differentiable persistent homology layer, and an auxiliary confidence prediction loss, the three losses are weighted and summed, and all parameters of the three-dimensional segmentation network, including the encoder-decoder convolution weights and the learnable threshold used to filter topological features in the differentiable persistent homology layer, are jointly optimized end to end through back propagation.
  5. The method according to claim 1, wherein the active refinement based on uncertainty propagation of step four comprises: applying Monte Carlo dropout in the joint optimization network and generating a voxel-level uncertainty quantification map through multiple forward inference passes and variance computation; analyzing the uncertainty quantification map to identify three-dimensional regions whose uncertainty exceeds a preset threshold and defining them as high-uncertainty regions; and feeding the coordinate information of the high-uncertainty regions back to the intraoperative gamma camera control system to guide the gamma camera to perform targeted supplementary sampling of projection angles over those regions in subsequent acquisition cycles.
  6. The method according to claim 5, wherein the high-uncertainty regions identified in step four are used to generate a spatial attention mask, the spatial attention mask is fed back to the joint optimization network, and in iterative computation the joint optimization network assigns higher weight to feature calculations in the high-uncertainty regions according to the spatial attention mask.
  7. A multi-modal medical image analysis system for cancer diagnosis, characterized by being adapted to implement the method of claim 1 and comprising: a self-supervised projection enhancement module for repairing two-dimensional projection sequences with the pre-trained self-supervised learning network; a spatio-temporal alignment and metabolism modeling module for multi-modal image registration and metabolic kinetics extrapolation; a topology-constrained joint optimization module comprising a segmentation network with a differentiable topology layer for generating boundary maps and uncertainty maps; an active refinement control module for fusing uncertainty information across the whole pipeline and making feedback control decisions that drive the imaging device to resample and that adjust algorithm parameters; and a diagnosis report generation module for verifying boundary topology integrity and synthesizing the final diagnosis report.
  8. The system of claim 7, wherein the active refinement control module specifically comprises: an uncertainty fusion unit for receiving and integrating the model uncertainty map from the topology-constrained joint optimization module and error estimation information from the other modules; a decision unit for judging, based on the fused spatial distribution of uncertainty, whether an iteration termination condition is met, and for generating a guidance signal containing the coordinates of high-uncertainty regions if iteration must continue; and an execution unit for sending the guidance signal to the imaging device to control targeted angle re-acquisition, and to the topology-constrained joint optimization module to adjust the spatial attention weights in the network.
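The five claimed steps form a closed loop: repair, align, jointly optimize, then refine wherever uncertainty is high. The control flow (not the networks themselves) can be sketched as follows; every function body here is an illustrative stand-in for the claimed neural components, and the names `analyze`, `tau` and `max_rounds` are not from the patent.

```python
def repair_projections(raw):
    # Step 1 stand-in: a real system would run the self-supervised U-Net here.
    return [max(p, 0.0) for p in raw]

def align_and_model(repaired, preop):
    # Step 2 stand-in: registration plus metabolic extrapolation.
    return {"anatomy": preop, "metabolic_map": repaired}

def joint_optimize(fused):
    # Step 3 stand-in: boundary map plus a crude uncertainty proxy
    # (distance from the 0.5 decision boundary).
    boundary = [1 if v > 0.5 else 0 for v in fused["metabolic_map"]]
    uncertainty = [0.5 - abs(v - 0.5) for v in fused["metabolic_map"]]
    return boundary, uncertainty

def analyze(raw, preop, tau=0.4, max_rounds=3):
    """Steps 4-5: iterate until the worst voxel uncertainty drops below tau."""
    for _ in range(max_rounds):
        repaired = repair_projections(raw)
        fused = align_and_model(repaired, preop)
        boundary, unc = joint_optimize(fused)
        if max(unc) <= tau:          # preset condition met
            break
        # Step 4 stand-in: "re-acquire" the most uncertain voxel.
        i = unc.index(max(unc))
        raw = list(raw)
        raw[i] = 1.0 if boundary[i] else 0.0
    return boundary, unc
```

The key design point mirrored here is that refinement feeds back into the acquisition/repair stage rather than only post-processing the segmentation.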
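Claim 2's self-supervised training needs no labels because the target is the clean projection itself: the network sees a synthetically degraded copy and must invert the degradation. A minimal sketch of pair generation, assuming illustrative `drop_prob` and `noise` values (the patent only states the degradations are "noise and angle loss"):

```python
import random

def degrade(projection, drop_prob=0.3, noise=0.05, rng=None):
    """Simulate angle loss and detector noise on a clean projection row."""
    rng = rng or random.Random(0)
    out = []
    for p in projection:
        if rng.random() < drop_prob:
            out.append(0.0)                        # missing angle
        else:
            out.append(p + rng.gauss(0.0, noise))  # additive noise
    return out

def make_training_pair(projection, rng=None):
    # Self-supervised target: the network must map degrade(x) back to x,
    # so large volumes of unlabeled intraoperative data suffice.
    return degrade(projection, rng=rng), projection
```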
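Claim 3's metabolic model is a diffusion-plus-clearance PDE whose parameters are fitted per patient and then extrapolated in time. A one-dimensional explicit finite-difference sketch of the forward model dc/dt = D d²c/dx² − k c, with illustrative D and k standing in for the patient-specific parameters obtained by inversion against preoperative SUV distributions:

```python
def extrapolate_tracer(c0, D=0.1, k=0.05, dx=1.0, dt=0.1, steps=50):
    """Time-domain extrapolation of a tracer concentration profile.

    Reflective boundaries conserve mass under diffusion, so the total
    activity decays only through the first-order clearance term k.
    """
    c = list(c0)
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme stability requires D*dt/dx^2 <= 0.5"
    for _ in range(steps):
        nxt = []
        for i in range(len(c)):
            left = c[i - 1] if i > 0 else c[i]
            right = c[i + 1] if i < len(c) - 1 else c[i]
            nxt.append(c[i] + r * (left - 2 * c[i] + right) - k * dt * c[i])
        c = nxt
    return c
```

Inversion would fit D and k so that this forward model reproduces the preoperative SUV distribution; extrapolating further in time then predicts the intraoperative metabolic map.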
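Claim 4's "micro-sustainable coherent layer" reads as a machine translation of a differentiable persistent homology layer: compute persistence intervals of a filtration, keep features whose lifetime exceeds a learnable threshold, and penalize the rest. A toy version for 0-dimensional sublevel-set persistence of a 1D function, using the union-find merge-tree algorithm (exact in dimension 0) instead of the claimed boundary-matrix reduction:

```python
def persistence_pairs_1d(f):
    """0-dimensional sublevel-set persistence pairs of a 1D function."""
    order = sorted(range(len(f)), key=lambda i: f[i])
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = f[i]
        for j in (i - 1, i + 1):
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # the younger component (larger birth value) dies here
                    old, new = (ri, rj) if birth[ri] > birth[rj] else (rj, ri)
                    pairs.append((birth[old], f[i]))
                    parent[old] = new
    # the global minimum never dies (infinite persistence); it is not paired
    return pairs

def topo_loss(f, threshold):
    # Penalize short-lived features (persistence below the learnable
    # threshold), mirroring the claimed screening of features by duration.
    return sum(d - b for b, d in persistence_pairs_1d(f) if d - b < threshold)
```

In the patent this loss is one term of the weighted multi-task objective, and the threshold itself is trained by back propagation alongside the convolution weights.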
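Claim 5's uncertainty map comes from Monte Carlo dropout: keep dropout active at inference, run several stochastic forward passes, and take the per-voxel variance. A minimal sketch, where `noisy_forward` is a hypothetical stand-in for the segmentation network with dropout enabled:

```python
import random

def mc_dropout_uncertainty(voxels, forward, n_passes=20, threshold=0.05, seed=0):
    """Per-voxel variance over repeated stochastic passes, plus the indices
    whose variance exceeds the preset threshold (high-uncertainty region)."""
    rng = random.Random(seed)
    samples = [forward(voxels, rng) for _ in range(n_passes)]
    n = len(voxels)
    mean = [sum(s[i] for s in samples) / n_passes for i in range(n)]
    var = [sum((s[i] - mean[i]) ** 2 for s in samples) / n_passes
           for i in range(n)]
    high = [i for i in range(n) if var[i] > threshold]
    return var, high

def noisy_forward(voxels, rng, p_drop=0.5):
    # Toy "network": dropout randomly zeroes activations at inference time.
    return [v if rng.random() > p_drop else 0.0 for v in voxels]
```

The coordinates in `high` are what the patent feeds back to the gamma camera control system for targeted supplementary angle sampling.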
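Claim 6's feedback path is simpler than it sounds: threshold the uncertainty map into a binary spatial attention mask and upweight the masked voxels in the next optimization round. A sketch, with `boost` as an illustrative gain (the patent only states that masked voxels receive "higher weight"):

```python
def attention_mask(uncertainty, threshold):
    # Binary spatial mask marking the high-uncertainty region.
    return [1 if u > threshold else 0 for u in uncertainty]

def reweight_features(features, mask, boost=2.0):
    # Give high-uncertainty voxels more weight in the next iteration
    # of the joint optimization network.
    return [f * (boost if m else 1.0) for f, m in zip(features, mask)]
```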

Description

Multi-mode image analysis method and system for cancer diagnosis

Technical Field

The invention relates to the technical field of cancer diagnosis, and in particular to a multi-modal image analysis method and system for cancer diagnosis.

Background

Cancer is one of the leading causes of death worldwide, and traditional single-modality images (such as CT, MRI and PET) face challenges in early accurate diagnosis, efficacy assessment and recurrence monitoring, including one-sided information, limited resolution and insufficient identification of tumor heterogeneity. By integrating the complementary information of multiple image modalities (such as structural, functional, molecular and metabolic images) and applying artificial intelligence algorithms to feature extraction, fusion modeling and intelligent analysis of multi-source image data, accurate delineation of tumor boundaries, pathological subtype classification, gene mutation prediction, microenvironment assessment and dynamic monitoring of treatment response can be achieved. This improves the sensitivity, specificity and interpretability of cancer diagnosis, breaks through single-modality limitations via multi-dimensional image information fusion, reduces missed diagnoses, and optimizes individualized treatment selection to improve patients' quality of life and prognosis. It further promotes the shift of cancer diagnosis from experience-driven to data-driven practice, accelerates the deep integration of precision medicine and intelligent healthcare, and ultimately provides reliable technical support for early screening, accurate typing, efficacy assessment and prognosis prediction in cancer, with clear clinical and social value.
In existing methods, topology analysis tools such as persistent homology are connected in series with the segmentation pipeline, parameter settings depend on empirical statistics, and the topological structure of the segmentation boundary may be unreasonable, for example exhibiting unexpected holes or fragments, while the pipeline remains rigid. A multi-modal image analysis method and system for cancer diagnosis is therefore provided.

Disclosure of Invention

The invention aims to remedy the defects of the prior art and provides a multi-modal image analysis method and system for cancer diagnosis. To achieve the above purpose, the invention adopts the following technical scheme. A multi-modal image analysis method for cancer diagnosis includes the following steps: step one, collecting a two-dimensional projection sequence acquired intraoperatively by a gamma camera as an original projection sequence, and repairing it through a pre-trained self-supervised U-Net network to generate a repaired projection sequence with a higher signal-to-noise ratio and more complete angular coverage; step two, performing multi-modal image spatio-temporal alignment and metabolic kinetics modeling based on the repaired projection sequence and a preoperative image to generate a registered image and an intraoperative metabolic distribution map; step three, processing the registered image and the voxel field reconstructed from the repaired projections using a joint optimization network embedded with differentiable topology constraints, and synchronously generating an initial boundary map with a biologically plausible topological structure together with a corresponding confidence map and an uncertainty map; step four, implementing multiple rounds of active refinement based on uncertainty propagation, locating high-uncertainty regions by integrating the uncertainty map, and feeding them back to the repair process of step one and the joint optimization network of step three for targeted adjustment until the uncertainty satisfies a preset condition; and step five, performing topological integrity verification on the initial boundary map, and generating a diagnosis report containing spatially localized excision suggestions and confidence grading by combining the adjustment results of step four with clinical information. Further, the specific steps of repairing the original projection sequence include: acquiring an original two-dimensional projection sequence collected by the intraoperative gamma camera within a preset rotation angle range; inputting it into a pre-trained self-supervised projection repair network, wherein the network adopts an encoder-decoder architecture, specifically a U-Net structure, is pre-trained on a large amount of unlabeled intraoperative projection data, and has the training objective of learning to recover complete and high-quality projection data