
CN-121999286-A - Joint brain tumor MRI classification method based on domain-aligned evidence

CN121999286A

Abstract

The invention belongs to the field of brain tumor classification, and in particular relates to a joint brain tumor MRI classification method based on domain-aligned evidence. The method comprises the steps of predefinition, integrated architecture, domain alignment, evidence-guided fusion, fusion-based uncertainty-weighted estimation, uncertainty-map down-weighting, and output of tumor sub-regions. The method establishes a feature interaction bridge through bidirectional cross attention to realize multi-scale bidirectional information complementation and generate a shared embedding; matches the statistical properties of the source domain and the target domain and applies an adversarial alignment operation to force their feature distributions to converge, effectively characterizing tumor cores and fuzzy boundaries and reducing missed detections and false positives; obtains a difference prior from the difference between the original tumor MRI image and a reconstructed tumor-free contrast image; down-weights the dual-source evidence with a pixel-level uncertainty map; and thereby provides accurate and complete localization guidance for the segmentation decoder, improving the localization accuracy and boundary integrity of the segmentation task.
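The pipeline summarized above can be outlined as Python-shaped pseudocode; this is a non-runnable sketch, and every function name in it is a hypothetical placeholder, not an identifier from the patent:

```python
def classify_and_segment(mri, source_stats):
    # S2: dual-view ViT encoder + bidirectional cross-attention bridge
    seg_feat, cls_feat = dual_view_encode(mri)
    shared = cross_attention_bridge(seg_feat, cls_feat)
    # S3: match mean/variance/covariance, then adversarial alignment
    seg_a, cls_a = domain_align(shared, source_stats)
    # S4: evidence-guided fusion inputs
    heat = classification_heat_map(cls_a)              # S41: heat map
    diff_prior = mri - reconstruct_tumor_free(mri)     # S42: difference prior
    uncertainty = mc_dropout_uncertainty(seg_a)        # S43: uncertainty map
    # S5: down-weight dual-source evidence, build continuous prior map
    prior = fuse_evidence(heat, diff_prior, uncertainty)
    # S6: threshold the prior, down-weight unreliable feature regions
    seg_a, cls_a = apply_ignore_mask(seg_a, cls_a, prior)
    # S7: U-Net decoder outputs sub-region masks plus uncertainty estimates
    return unet_decode(seg_a, cls_a, prior)
```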

Inventors

  • Jia Wang
  • Zhang Dainan
  • Liu Yuqi
  • Sun Chen
  • Yue Zehua

Assignees

  • Beijing Tiantan Hospital, Capital Medical University (首都医科大学附属北京天坛医院)

Dates

Publication Date
2026-05-08
Application Date
2026-01-23

Claims (3)

  1. A joint brain tumor MRI classification method based on domain-aligned evidence, characterized by comprising the following steps:
S1, predefinition: an MRI image dataset with sufficient labels is denoted as the source domain, and an MRI image dataset with scarce or no labels is denoted as the target domain;
S2, integrated architecture: a lightweight ViT backbone network is adopted to construct a dual-view encoder consisting of a segmentation encoder and a classification encoder, a lightweight U-Net network is used as the segmentation decoder, and a bidirectional cross-attention mechanism is adopted to establish a feature interaction bridge responsible for multi-scale bidirectional information transfer between the segmentation encoder and the classification encoder; the segmentation encoder outputs structure-representing geometric segmentation-related features, the classification encoder outputs globally discriminative classification-related features, the two feature streams complement each other, and finally a shared embedding is generated;
S3, domain alignment: the shared embedding is mapped into segmentation features and classification features; by matching the statistical properties of the source domain and the target domain, comprising mean, variance, and covariance, the shared embedding distributions corresponding to the two domains are forced to converge; according to the statistical properties, an adversarial alignment operation is applied so that the distribution of the geometric segmentation-related features in the source domain and that in the target domain become the same, yielding the segmentation features, and the distribution of the globally discriminative classification-related features in the source domain and that in the target domain become the same, yielding the classification features;
S4, evidence-guided fusion, which is used to obtain a heat map, a difference prior, and a pixel-level uncertainty map;
S5, fusion-based uncertainty-weighted estimation: the difference prior and the heat map are denoted as multi-source weakly supervised evidence; the multi-source weakly supervised evidence is first down-weighted by the pixel-level uncertainty map and then converted into a continuous prior probability map;
S6, uncertainty-map down-weighting: a threshold is set, the continuous prior probability map is screened to obtain an ignore mask, and the unreliable regions of the segmentation features and the classification features are down-weighted according to the ignore mask;
S7, the segmentation decoder receives the segmentation features and the classification features, integrates them with the continuous prior probability map, performs decoding, finally completes pixel-level tumor sub-region segmentation, and outputs the uncertainty estimate corresponding to each tumor sub-region.
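The feature interaction bridge of step S2 can be illustrated with a minimal single-head NumPy sketch. The weight shapes, the concatenate-and-project fusion, and all variable names are assumptions made for illustration, not the patent's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context, w_q, w_k, w_v):
    # queries attend to context: (tokens, dim) -> (tokens, dim)
    q, k, v = queries @ w_q, context @ w_k, context @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def bidirectional_bridge(seg_feat, cls_feat, p):
    # segmentation stream queries the classification stream, and vice versa
    seg_upd = cross_attention(seg_feat, cls_feat, *p["seg"])
    cls_upd = cross_attention(cls_feat, seg_feat, *p["cls"])
    # fuse the two updated streams into one shared embedding
    return np.concatenate([seg_upd, cls_upd], axis=-1) @ p["proj"]

rng = np.random.default_rng(0)
d = 16  # token embedding width (illustrative)
params = {
    "seg": [rng.normal(size=(d, d)) * 0.1 for _ in range(3)],
    "cls": [rng.normal(size=(d, d)) * 0.1 for _ in range(3)],
    "proj": rng.normal(size=(2 * d, d)) * 0.1,
}
shared = bidirectional_bridge(rng.normal(size=(49, d)),
                              rng.normal(size=(49, d)), params)
print(shared.shape)  # (49, 16)
```

In a trained model the two attention directions carry the "multi-scale bidirectional information transfer" of the claim; here a single scale and a single head keep the sketch compact.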
  2. The joint brain tumor MRI classification method based on domain-aligned evidence according to claim 1, characterized in that in step S4 the evidence-guided fusion specifically comprises the following steps:
S41, generating the heat map: the weighted gradients of the domain-aligned classification features with respect to the high-level convolutional feature maps of the classification encoder are computed to obtain the heat map, in which high-response regions are key regions strongly related to the classification decision; the heat map is normalized and taken as the initial localization evidence for tumor segmentation under weak supervision;
S42, generating the difference prior, which reflects the structural difference between the tumor image and the tumor-free image and converts the initial localization evidence into a learnable prior: a tumor-free control MRI image is reconstructed from the original tumor MRI image, and the difference between the original tumor MRI image and the control MRI image is taken as the difference prior;
S43, obtaining the pixel-level uncertainty map: based on the segmentation features, Monte Carlo dropout is adopted, with the Dropout layers of the last two convolutional layers of the segmentation decoder kept active, to repeatedly perform stochastic forward passes; the variance of the tumor probability output at each pixel over these passes is computed, normalized, and arranged at the resolution of the original MRI image to obtain the pixel-level uncertainty map.
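Step S43 can be sketched as follows. Here the stochastic forward passes are emulated by applying inverted-dropout masks to a fixed logit map, purely as an assumption-laden illustration of Monte Carlo dropout variance estimation; a real implementation would rerun the decoder with its Dropout layers active:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_uncertainty(logit_map, n_passes=20, p_drop=0.5):
    """Per-pixel uncertainty via Monte Carlo dropout (illustrative sketch)."""
    probs = []
    for _ in range(n_passes):
        # emulate one stochastic forward pass with inverted dropout
        mask = rng.random(logit_map.shape) > p_drop
        dropped = logit_map * mask / (1.0 - p_drop)
        probs.append(1.0 / (1.0 + np.exp(-dropped)))  # sigmoid -> tumor prob
    # per-pixel variance over passes, normalized to [0, 1]
    var = np.var(np.stack(probs), axis=0)
    return (var - var.min()) / (var.max() - var.min() + 1e-8)

u = mc_dropout_uncertainty(rng.normal(size=(8, 8)))
print(u.shape)  # (8, 8)
```

The normalized variance map plays the role of the claim's pixel-level uncertainty map: pixels whose predicted tumor probability fluctuates across passes receive high values and are later down-weighted.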
  3. The joint brain tumor MRI classification method based on domain-aligned evidence according to claim 2, characterized in that in step S5 the fusion-based uncertainty-weighted estimation specifically comprises the following steps:
S51, taking the intersection of the high-response region of the heat map and the high-response region of the difference prior;
S52, taking the union: for the non-overlapping part of the union of the two high-response regions, the Euclidean distance from each pixel to the intersection set is computed and used as a weight for filtering, thereby obtaining the continuous prior probability map.
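Steps S51-S52 can be illustrated with a small NumPy sketch. The 0.5 response threshold and the Gaussian form of the distance weighting are my assumptions; the claim only specifies that the Euclidean distance to the intersection set serves as the weight:

```python
import numpy as np

def continuous_prior(heat, diff, thresh=0.5, sigma=2.0):
    """Fuse heat-map and difference-prior evidence (illustrative sketch)."""
    hi_heat = heat > thresh
    hi_diff = diff > thresh
    inter = hi_heat & hi_diff            # S51: agreement of both sources
    union_only = (hi_heat | hi_diff) & ~inter
    prior = np.zeros(heat.shape)
    prior[inter] = 1.0                   # full confidence where both agree
    if inter.any():
        iy, ix = np.nonzero(inter)
        for y, x in zip(*np.nonzero(union_only)):
            # S52: Euclidean distance to nearest intersection pixel as weight
            d = np.sqrt((iy - y) ** 2 + (ix - x) ** 2).min()
            prior[y, x] = np.exp(-d ** 2 / (2 * sigma ** 2))
    return prior

heat = np.zeros((6, 6)); heat[1:4, 1:4] = 1.0
diff = np.zeros((6, 6)); diff[2:5, 2:5] = 1.0
p = continuous_prior(heat, diff)
print(p[2, 2], round(p[1, 1], 3))  # 1.0 0.779
```

Pixels supported by only one evidence source thus decay smoothly with distance from the jointly supported core, giving the decoder a continuous rather than binary prior.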

Description

Joint brain tumor MRI classification method based on domain-aligned evidence

Technical Field

The invention relates to the field of brain tumor classification, and in particular to a joint brain tumor MRI classification method based on domain-aligned evidence.

Background

A joint brain tumor MRI classification method based on domain-aligned evidence is a method for realizing brain tumor classification and segmentation under few-sample, weakly supervised conditions. Among existing approximate schemes, CN119741312B discloses a method and system for segmenting MRI medical images based on deep learning. That scheme addresses the technical problems of low tumor recognition accuracy and insufficient classification precision caused by heavy interference from non-interest regions such as background noise and subcutaneous fat in breast MRI images, together with blurred lesion details and unclear edges. It adopts the technical means of standardized preprocessing, suspected-lesion localization and detail enhancement, three-channel feature enhancement, deep learning recognition, and multi-level SVM classification: non-interest regions are first removed by dynamic threshold segmentation and morphological operations; a saliency map is generated from gray-value gradients to localize the abnormal side; the image to be preprocessed is reconstructed by Gaussian blur decomposition and dynamic enhancement; a three-channel input is then generated by noise reduction, contrast enhancement, and edge enhancement and fed into a deep learning model containing a residual module and an SE channel attention mechanism; multi-dimensional features are finally extracted to realize fine preprocessing, efficient tumor recognition, and accurate classification of breast MRI images, improving detection accuracy and robustness and providing reliable diagnostic support for clinical use. However, that scheme still suffers from threshold-level drift under poor contrast and blur, and from poor detection of fuzzy regions. As another example, CN118262105B discloses a multi-modal tumor image segmentation method and device based on fully supervised contrastive learning. That scheme addresses the characteristics of unclear anatomical boundaries, irregular shapes, and complex modal information in multi-modal tumor images, and the technical problems that existing segmentation models optimize only the network architecture while neglecting feature-space mining, leading to low segmentation accuracy for hard-to-segment regions and insufficient voxel classification accuracy. It adopts the technical means of combining fully supervised contrastive learning, anchor sampling, and local positive/negative sample selection: a PCM module is introduced into the decoder; erroneous and correct voxels are determined by comparing sample image predictions with ground-truth labels; hard-to-segment anchors are sampled based on segmentation-window confidence; positive and negative sample voxels with optimal confidence are selected in the anchor neighborhood; and the model is trained with a combination of InfoNCE contrastive loss, cross-entropy loss, and position loss to realize accurate segmentation of tumor regions in multi-modal tumor images, significantly improving segmentation accuracy in hard-to-segment regions. However, such traditional segmentation models still lack targeted weakly supervised evidence fusion, handling of unreliable regions, and prior-based guidance.
Disclosure of the Invention

Aiming at the technical problems that cross-device data in brain tumor MRI segmentation exhibit domain shift and that weakly supervised scenarios lack pixel-level annotation, so that model generalization is poor, segmentation boundaries are fuzzy, and small lesions are prone to missed detection, the invention adopts a lightweight ViT backbone network to construct a dual-view encoder and establishes a feature interaction bridge through bidirectional cross attention to realize multi-scale bidirectional information complementation and generate a shared embedding. A domain alignment operation is then carried out: the statistical properties of the source domain and the target domain are matched, and an adversarial alignment operation forces the feature distributions to converge. Next, a heat map, a difference prior, and a pixel-level uncertainty map are generated through evidence-guided fusion, and the multi-source weakly supervised evidence is converted into a continuous prior probability map through uncertainty-weighted estimation, with unreliable regions down-weighted. Finally, a lightweight U-Net segmentation decoder fuses the features with the prior map, realizing improved cross-domain robustness and weakly supervised precision, effectively characterizing tumor cores and fuzzy boundaries, and reducing missed detections and false positives.
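The statistical-property matching of the alignment step (mean, variance, covariance) can be sketched as a CORAL-style moment-matching loss. This is an illustrative assumption about how the matching could be scored; the patent's adversarial alignment operation itself is not implemented here:

```python
import numpy as np

def moment_alignment_loss(src, tgt):
    """Penalize mean and covariance gaps between source- and target-domain
    shared embeddings, pushing the two feature distributions to converge
    (CORAL-style sketch; src, tgt have shape (samples, features))."""
    mean_gap = np.sum((src.mean(axis=0) - tgt.mean(axis=0)) ** 2)
    cov = lambda x: np.cov(x, rowvar=False)  # (features, features)
    d = src.shape[1]
    cov_gap = np.sum((cov(src) - cov(tgt)) ** 2) / (4 * d ** 2)
    return mean_gap + cov_gap

rng = np.random.default_rng(1)
a = rng.normal(size=(100, 8))
loss_same = moment_alignment_loss(a, a.copy())   # identical distributions
loss_diff = moment_alignment_loss(a, a + 3.0)    # target shifted by 3
print(loss_same, loss_diff)  # 0.0 72.0
```

A shift in the target embeddings leaves the covariance untouched but inflates the mean term, so the loss isolates exactly the first- and second-order statistics named in step S3.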