CN-122025041-A - Method and device for visualizing heterogeneous pathology of glioma based on multi-modal image
Abstract
The invention relates to the field of medical image processing and pathological diagnosis, and aims to solve the problems of low accuracy of multi-modal image fusion for brain glioma, weak correlation between heterogeneity features and pathology, and poor readability of visualization results. The method comprises the steps of multi-modal image preprocessing and registration, region-adaptive multi-modal fusion, quantification of pathology-associated heterogeneity features, multi-dimensional pathological visualization, and result verification and optimization, wherein images are aligned through mutual-information-based rigid registration, features are strengthened by a region-differentiated attention fusion algorithm, features are quantified with an improved LSSVM and associated with pathological gold standards, and a three-dimensional visualization model is constructed to intuitively present heterogeneity information. The apparatus includes a memory and a processor and is operable to perform the method. The method improves heterogeneity evaluation accuracy and clinical readability, assists in optimizing treatment schemes, and is suitable for precise diagnosis and treatment scenarios of brain glioma.
Inventors
- Sun Hangzhe
- HAO LU
- JIANG QIAOYU
- XU CHUNMING
- ZHANG YAN
Assignees
- Wenzhou Medical University (温州医科大学)
Dates
- Publication Date
- 20260512
- Application Date
- 20260202
Claims (8)
- 1. A method for visualizing the heterogeneous pathology of brain glioma based on multi-modal images, characterized by comprising the following steps: Step 1, multi-modal image preprocessing and registration, namely acquiring multi-modal MRI images of a brain glioma patient, the multi-modal MRI images comprising T1-enhanced, T2-FLAIR, DWI and PWI images; registering all modal images to the spatial coordinate system of the T1-enhanced image by adopting a rigid registration algorithm based on mutual information to obtain registered multi-modal images, thereby ensuring accurate alignment of the spatial positions of the different modal images; Step 2, improved region-adaptive multi-modal image fusion, namely receiving the registered multi-modal images, first dividing a tumor core region, an edema region, an invasion region and a normal brain tissue region, then designing differentiated fusion strategies for the different regions, and adopting an improved attention-weighted fusion algorithm to realize accurate fusion of multi-modal features; Step 3, quantification of pathology-associated brain glioma heterogeneity features, namely receiving the fused enhanced image, extracting, in combination with a preset pathology-index association model, deep heterogeneity features related to cell proliferation, angiogenesis and invasion capacity, realizing accurate quantification of the features by adopting an improved feature calibration and quantification algorithm, and establishing a mapping relation between the quantified features and pathological gold-standard indexes; Step 4, multi-dimensional pathological visual presentation, namely constructing a three-dimensional visualization model of spatial distribution, feature intensity and pathological association based on the quantified heterogeneity features, and realizing multi-dimensional visual presentation of the heterogeneity information by adopting layered pseudo-color rendering, feature heat-map overlay and pathological-index association labeling; and Step 5, verification and optimization of the visualization result, namely verifying the accuracy of the visualization result in combination with pathological section data of the patient, and, if the correlation between the quantified features and the pathological indexes is lower than a preset threshold, reversely optimizing the fusion algorithm parameters and the feature quantification model until the result meets clinical requirements.
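The five claimed steps form a verify-and-retune loop: steps 2-4 are re-executed until the step-5 correlation check passes. A minimal Python sketch of that control flow is given below; every stage function is a hypothetical placeholder (the real algorithms are specified in claims 2-6), and the toy validator simply improves each round to illustrate the retry loop.

```python
# Toy end-to-end sketch of the five claimed steps; all stage functions are
# hypothetical placeholders, not the patented algorithms themselves.
def register(images):              # step 1: MI-based rigid registration
    return images

def fuse(images):                  # step 2: region-adaptive attention fusion
    return images

def quantify(fused):               # step 3: LSSVM-calibrated quantification
    return {"features": fused}

def visualize(features):           # step 4: 3-D pseudo-color / heat-map rendering
    return {"view": features}

class Validator:                   # step 5: correlation against pathology slides
    def __init__(self):
        self.round = 0

    def correlation(self, features):
        self.round += 1            # toy model: each re-tuning round improves r
        return 0.6 + 0.1 * self.round

def run_pipeline(images, threshold=0.85, max_rounds=5):
    registered = register(images)
    validator = Validator()
    for _ in range(max_rounds):
        fused = fuse(registered)
        features = quantify(fused)
        view = visualize(features)
        r = validator.correlation(features)
        if r >= threshold:         # step 5: stop once clinically acceptable
            break
        # otherwise: reversely optimize fusion parameters and the
        # feature quantification model, then repeat steps 2-4
    return view, r

view, r = run_pipeline({"T1": None, "T2-FLAIR": None, "DWI": None, "PWI": None})
```

The loop terminates either when the correlation reaches the preset threshold or after a bounded number of re-tuning rounds, matching the "until the result meets clinical requirements" condition of step 5.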
- 2. The method for visualizing heterogeneous pathology of brain glioma based on multi-modal images according to claim 1, wherein the rigid registration algorithm based on mutual information in step 1 registers all modal images to the spatial coordinate system of the T1-enhanced image, and the registered multi-modal images are obtained through the following specific implementation steps: Step 1.1, setting the registration reference and targets, namely explicitly taking the T1-enhanced image as the registration reference, i.e. the fixed image, and taking the T2-FLAIR, DWI and PWI images as the registration targets, i.e. the floating images, so as to provide a stable spatial reference for subsequent tumor region division; Step 1.2, initial spatial alignment, namely performing preliminary rigid coarse alignment on each floating image to eliminate large-scale spatial offset, wherein brain anatomical landmark points are marked manually or automatically, and each floating image is aligned with the corresponding landmark points of the reference image through rigid transformation, i.e. translation and rotation operations only, so that image scaling and deformation are unchanged, consistent with the stability of the rigid anatomical structure of the brain; the preliminarily aligned floating images are thus obtained, reducing the iteration count of the subsequent fine registration; Step 1.3, construction and optimization of the mutual-information-based registration function, namely adopting mutual information as the registration similarity measure and constructing the registration objective function MI(R, F) = H(R) + H(F) − H(R, F), wherein R is the reference image, F is a floating image, H(R) and H(F) are the information entropies of R and F respectively, and H(R, F) is the joint entropy of R and F; the objective function is maximized by adopting the Powell optimization algorithm; Step 1.4, registration result verification and post-processing, namely verifying the registration accuracy by the root mean square error (RMSE) of corresponding anatomical landmarks and the overlap ratio of the suspected tumor region between the registered floating image and the reference image, the RMSE being required to be below a preset threshold and the overlap ratio above a preset threshold; if the accuracy requirements are not met, readjusting the initial alignment landmark points and repeating the optimization process of steps 1.1-1.3; after the registration is qualified, performing interpolation on each modal image by adopting a cubic interpolation algorithm so that the resolution of the registered images is consistent with that of the reference image, and finally outputting the spatially aligned multi-modal image data set, thereby providing high-quality spatially matched data for subsequent accurate tumor region division and region-adaptive fusion.
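The mutual-information measure of step 1.3 can be computed from a joint intensity histogram. A minimal NumPy sketch follows (the bin count and test images are illustrative; a full registration would wrap this measure in a Powell search over rigid transform parameters, which is omitted here):

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """MI(R, F) = H(R) + H(F) - H(R, F), estimated from a joint histogram."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint intensity distribution
    px = pxy.sum(axis=1)               # marginal of the fixed image
    py = pxy.sum(axis=0)               # marginal of the moving image
    # Entropies in nats, skipping empty bins to avoid log(0).
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return hx + hy - hxy

# Demo: MI is high for perfectly aligned (identical) images and near zero
# for an image paired with a shuffled copy of itself.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shuffled = rng.permutation(img.ravel()).reshape(64, 64)
mi_same = mutual_information(img, img)
mi_rand = mutual_information(img, shuffled)
```

Maximizing this quantity over translation and rotation parameters, as the claim specifies with the Powell algorithm, drives the floating image into alignment with the fixed image.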
- 3. The method for visualizing heterogeneous pathology of brain glioma based on multi-modal images according to claim 1, wherein the improved region-adaptive multi-modal image fusion of step 2 is implemented as follows: Step 2.1, preliminary tumor region division, namely preliminarily dividing the tumor core region, the edema region, the invasion region and the normal brain tissue region by adopting an improved Otsu threshold segmentation algorithm based on the registered T1-enhanced and T2-FLAIR images; Step 2.2, initialization of region-specific modal weights, namely initializing a modal weight reference value for each region according to the pathological characteristics of the different regions and the sensitivity differences of the modal images: a. tumor core region: the T1-enhanced image is sensitive to blood-brain barrier destruction and the tumor proliferation core, the PWI image is sensitive to tumor angiogenesis, and the DWI image is sensitive to cell density, with the T1-enhanced, PWI, DWI and T2-FLAIR images each assigned a preset weight reference value; b. edema region: the T2-FLAIR image is sensitive to the degree of edema and the DWI image is sensitive to tumor cells invading the edema region, with the T2-FLAIR, DWI, T1-enhanced and PWI images each assigned a preset weight reference value; c. invasion region: the DWI image is sensitive to the diffusion motion of invading cells, with the DWI, T2-FLAIR, T1-enhanced and PWI images each assigned a preset weight reference value; d. normal brain tissue region: the weights of the modal images are equal, with a reference value of 0.25; Step 2.3, implementation of the improved attention-weighted fusion algorithm, namely introducing a spatial attention mechanism and a channel attention mechanism to dynamically adjust the initialized weights and realize accurate fusion of the multi-modal images: feature extraction, namely extracting deep feature maps F_i from the different regions of each modal image by adopting a 3D convolutional neural network (3D-CNN), wherein i corresponds to the T1-enhanced, PWI, DWI and T2-FLAIR images respectively; attention weight calculation, namely constructing a spatial attention module to calculate the pixel-wise spatial attention A_s of each image, constructing a channel attention module to calculate the attention A_c of each modal feature channel, and dynamically adjusting the weights to w_i', wherein w_i is the initial reference weight and w_i' is the adjusted dynamic weight satisfying Σ_i w_i' = 1; feature fusion, namely fusing the deep features of all modalities by weighted summation F = Σ_i w_i' · F_i, performing a deconvolution operation on the fused feature map and restoring the original image resolution to obtain the region-adaptive fusion-enhanced image; Step 2.4, enhancement and verification of the fused image, namely performing adaptive histogram equalization on the fused image to enhance the edge features of the heterogeneous regions, and verifying the fusion accuracy by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM); the fusion is qualified if both PSNR and SSIM reach their preset thresholds, otherwise the convolution kernel parameters of the 3D-CNN and the weight coefficients of the attention modules are adjusted and the fusion process is re-executed until the accuracy requirements are met.
- 4. The method for visualizing heterogeneous pathology of brain glioma based on multi-modal images according to claim 3, wherein the preliminary tumor region division of step 2.1 is implemented as follows: Step 2.11, image preprocessing and enhancement, namely performing targeted preprocessing on the registered T1-enhanced, T2-FLAIR and DWI images respectively to improve segmentation accuracy: adaptive histogram equalization is applied to the T1-enhanced image to enhance the gray contrast between the tumor enhancement region and normal brain tissue, and Gaussian filtering is applied to the T2-FLAIR image to remove noise caused by cerebrospinal fluid fluctuation while preserving the high-signal characteristics of the edema region; Step 2.12, improved Otsu threshold calculation, namely adopting a double-threshold segmentation strategy to adapt to the characteristics of the different tumor regions: for the T1-enhanced image, two thresholds T_1 and T_2 are calculated by the improved Otsu algorithm, wherein T_1 is the threshold distinguishing normal brain tissue from tumor regions, T_2 is the threshold distinguishing the tumor core region from non-core regions, and voxels with gray values greater than T_2 are determined to be the tumor core region; for the T2-FLAIR image, a threshold is calculated by the improved Otsu algorithm, and voxels with gray values greater than this threshold are determined as the edema region candidate; Step 2.13, accurate extraction of the invasion region, namely calculating the apparent diffusion coefficient (ADC) map from the preprocessed DWI image, the ADC value being calculated by the formula ADC = −(1/b) · ln(S_b / S_0), wherein S_0 is the signal intensity without diffusion weighting, S_b is the signal intensity after applying the diffusion weighting, and b is the diffusion sensitivity factor; a judgment threshold range of ADC values is set for the invasion region, the voxels within this range are extracted from the DWI image as the invasion region candidate, and the part overlapping with the determined tumor core region is removed; region deduplication and correction, namely performing deduplication correction on the tumor core region, the edema region candidate and the invasion region candidate by adopting a region growing algorithm: the tumor core region is kept whole, the parts of the edema region candidate overlapping with the tumor core region and the invasion region are removed to obtain a pure edema region, and the parts of the invasion region candidate that overlap with the edema region but meet the ADC threshold requirement are retained as the invading-cell area within the edema region, thereby ensuring clear, non-redundant boundaries between the regions; Step 2.14, definition and morphological optimization of the normal brain tissue region, namely defining the region of the T1-enhanced image not covered by the tumor core region, the edema region or the invasion region as the normal brain tissue region; and performing morphological filtering optimization on all divided regions, namely a dilation operation of 3 × 3 voxels followed by an erosion operation of the same size, to finally obtain a precise tumor region division result with smooth boundaries and complete regions, providing an accurate region mask for the subsequent initialization of region-specific modal weights.
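The two computational primitives of steps 2.12-2.13 — the classical Otsu threshold and the ADC map — can be sketched in NumPy as follows (the bin count, b-value and demo data are illustrative; the claim's "improved" Otsu variant and double-threshold extension are not reproduced here):

```python
import numpy as np

def adc_map(s0, sb, b=1000.0):
    """ADC = -(1/b) * ln(S_b / S_0); b in s/mm^2 (1000 is a common DWI choice).

    Inputs are clipped away from zero so the logarithm stays defined.
    """
    ratio = np.clip(sb / np.maximum(s0, 1e-8), 1e-8, None)
    return -np.log(ratio) / b

def otsu_threshold(img, bins=256):
    """Classical Otsu: pick the gray level maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    hist = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                       # class-0 (background) mass
    w1 = 1.0 - w0                              # class-1 (foreground) mass
    cum_mean = np.cumsum(hist * centers)
    mu_t = cum_mean[-1]
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)   # class means, guarded
    mu1 = (mu_t - cum_mean) / np.where(w1 > 0, w1, 1)
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

# Demos: S_b = S_0 * exp(-1) with b = 1000 gives ADC = 0.001 exactly, and a
# bimodal image yields a threshold between its two modes.
adc = adc_map(np.array([1.0]), np.array([np.exp(-1.0)]))
bimodal = np.concatenate([np.full(100, 0.1), np.full(100, 0.9)])
t = otsu_threshold(bimodal)
```

In the claimed double-threshold variant, the same between-class-variance criterion is applied twice on the T1-enhanced histogram to separate first tumor from normal tissue and then core from non-core tumor.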
- 5. The multi-modal image-based brain glioma heterogeneity pathology visualization method according to claim 1, wherein the pathology-associated brain glioma heterogeneity feature quantification of step 3 is implemented as follows: Step 3.1, construction of a pathology-associated feature system, namely constructing, in combination with the pathological mechanism of glioma, a pathology-associated feature system comprising 3 classes of features, i.e. 3 image feature vectors: cell proliferation-related features extracted from the fused image, including the ADC value gradient and a proliferation index; angiogenesis-related features extracted from the fused image, including the distribution characteristics of relative cerebral blood volume (rCBV), relative cerebral blood flow (rCBF) and vascular permeability-surface product (PS); and invasion-related features extracted from the fused image, including the gray gradient, texture complexity and invasion distance of the tumor edge; Step 3.2, training the pathology association model, namely collecting fused image data of a large number of brain glioma patients and the corresponding pathological gold-standard data, the pathological gold-standard data comprising the Ki-67 index corresponding to cell proliferation, the CD34 microvascular density corresponding to angiogenesis, and the invasion capacity corresponding to pathological section observations of the invasion range, and constructing a training data set, wherein the pathological sections are marked by pathologists and the quantified value of each pathological index is determined; model construction, namely adopting an improved random forest algorithm to construct an association model between the image features and the pathological indexes, the model taking the image feature vectors extracted in step 3.1 as input and outputting the corresponding pathological index prediction values; model optimization, namely introducing the adaptive particle swarm optimization algorithm (APSO) to optimize the number and depth of the decision trees of the random forest, and optimizing the association model parameters by 5-fold cross validation so that the mean square error between the predicted values of the association model and the pathological gold-standard values satisfies a preset requirement; Step 3.3, improved LSSVM feature calibration and quantification, namely calibrating and quantifying the extracted image features by adopting an improved least squares support vector machine (LSSVM) to improve the pathological matching accuracy of the quantification result; Step 3.4, establishment and verification of the feature-pathology mapping relation, namely establishing a mapping table between the quantified feature values and the actual values of the pathological indexes, determining the pathological significance corresponding to the different quantified values, and verifying the correlation between the quantified features and the pathological gold standard by Pearson correlation analysis; if the correlation coefficient reaches the preset threshold the quantification is qualified, otherwise the training data are supplemented, the parameters of the association model and the LSSVM model are adjusted, and the quantification process of step 3.3 is re-executed until the correlation requirement is met.
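The association-model training and Pearson verification of steps 3.2 and 3.4 can be sketched with scikit-learn and SciPy on synthetic data. This is a minimal illustration: the features and the surrogate pathological index are randomly generated stand-ins, and fixed hyperparameters stand in for the APSO tuning of tree count and depth:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-ins: 5 image features per patient, and a surrogate pathological
# index (e.g. a Ki-67-like score) driven by the first two features plus noise.
rng = np.random.default_rng(42)
X = rng.random((200, 5))
y = 2.0 * X[:, 0] + X[:, 1] + 0.05 * rng.standard_normal(200)

# Random forest association model; n_estimators/max_depth are fixed here in
# place of the APSO search described in step 3.2.
model = RandomForestRegressor(n_estimators=100, max_depth=6, random_state=0)
model.fit(X[:140], y[:140])
pred = model.predict(X[140:])

# Step 3.4-style check: Pearson correlation between predictions and the
# held-out "gold standard"; a low value would trigger re-training.
r, _ = pearsonr(pred, y[140:])
```

The qualification test of step 3.4 corresponds to comparing `r` against a preset threshold and re-entering the training loop when it falls short.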
- 6. The method for visualizing heterogeneous pathology of brain glioma based on multi-modal images according to claim 1, wherein the improved LSSVM feature calibration and quantification of step 3.3 is implemented as follows: Step 3.31, feature preprocessing and standardization optimization, namely performing outlier rejection and dimension screening before feature normalization to ensure the validity of the input features: outliers in the image features are first rejected according to a preset rejection criterion, the features are then ranked by their importance in the pathology association model and the core features carrying 80% of the cumulative weight are retained, and finally Z-score standardization is performed to normalize the image features to a unified scale with mean 0 and variance 1, eliminating the dimensional differences between features and ensuring fairness of model training; Step 3.32, construction and parameter initialization of the improved LSSVM model: hybrid kernel design, namely adopting a linear combination of a radial basis kernel and a polynomial kernel as the kernel function of the improved LSSVM, the hybrid kernel being expressed as K(x, y) = λ · exp(−‖x − y‖² / (2σ²)) + (1 − λ) · (xᵀy + c)^d, wherein the first term is the radial basis kernel and the second term is the polynomial kernel, λ is the kernel weight coefficient, σ is the width parameter of the radial basis kernel, c is the offset of the polynomial kernel, and d is the degree of the polynomial kernel; parameter initialization, namely initializing the initial values of the core parameters λ, σ, c and d based on the statistical characteristics of the medical image features and the pathological data, and simultaneously setting an initial range for the model penalty parameter γ; Step 3.33, training and parameter optimization of the improved LSSVM model, namely constructing the training data set with the pathological gold-standard data marked in step 3.2 as labels, and optimizing the core parameters of the improved LSSVM by adopting the adaptive particle swarm optimization algorithm (APSO): the fitness function targets minimization of the mean square error between the predicted values of the improved LSSVM model and the pathological gold-standard values, i.e. MSE = (1/n) · Σ_{i=1}^{n} (y_i − ŷ_i)², wherein n is the number of training samples, y_i is the actual pathological gold-standard value of the i-th sample, and ŷ_i is the model prediction for the i-th sample; APSO parameter search, namely setting the particle swarm size to 30 and the maximum number of iterations to 50, with the inertia weight initialized at 0.9 and decreasing linearly to 0.4 over the iterations together with preset learning factors c_1 and c_2, and obtaining the optimal parameter combination through iterative particle swarm search so that the model prediction accuracy meets the preset requirement; model training, namely substituting the optimal parameter combination into the improved LSSVM model, solving the Lagrange multipliers and bias term of the model by the least squares method, and completing the training of the improved LSSVM model to obtain a stable feature calibration and quantification model.
Step 3.34, accurate feature quantification and preliminary result verification: feature quantification, namely inputting the core features standardized in step 3.31 into the trained improved LSSVM model, which outputs the calibrated quantified feature values, eliminating the systematic deviation between the image features and the pathological indexes so that the quantified values map directly to the relative intensity of the pathological indexes; preliminary verification, namely taking 30% of the training set as a verification set and calculating the Pearson correlation coefficient between the quantified feature values and the actual pathological gold-standard values; if the coefficient reaches the preset threshold, the model is qualified and the subsequent feature-pathology mapping step is entered, otherwise the search range and iteration count of the APSO are adjusted and parameter optimization and model training are re-executed until the preliminary verification requirement is met; quantification grading, namely grading the quantified feature values into relative grades corresponding to the pathological indexes based on the clinical standards of pathological diagnosis, providing grade labels for the subsequent multi-dimensional visual presentation.
- 7. An electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, implements the multi-modal image-based brain glioma heterogeneity pathology visualization method according to any one of claims 1-6.
- 8. A computer readable storage medium, having stored thereon a computer program which, when executed by a processor, implements the multi-modal image-based brain glioma heterogeneity pathology visualization method according to any one of claims 1-6.
Description
Method and device for visualizing heterogeneous pathology of glioma based on multi-modal image Technical Field The invention relates to the field of medical image processing and pathological diagnosis, in particular to a method and a device for visualizing heterogeneous pathology of brain glioma based on multi-modal images. Background Gliomas are the most common malignant tumors of the central nervous system and are characterized by a high degree of heterogeneity within the tumor, including differences in cell morphology, proliferative activity, invasive capacity, angiogenesis, and molecular expression, which directly affect treatment regimen selection, efficacy assessment, and prognosis. Current multi-modal image-based methods for assessing and visualizing brain glioma heterogeneity suffer from the following specific technical problems, which severely restrict the accuracy of clinical diagnosis and treatment: existing methods mostly adopt simple pixel-level or feature-level fusion strategies that neither account for the complementarity of different modal images (such as T1-enhanced, T2-FLAIR, DWI and PWI MRI) in reflecting glioma heterogeneity characteristics, nor perform differentiated fusion across regions with distinct image characteristics, such as the tumor core region, the edema region and the invasion region; as a result, the fused images cannot accurately represent the core heterogeneity information, and feature confusion or loss occurs. For example, T1-enhanced images are sensitive to regions of blood-brain barrier disruption and DWI is sensitive to tumor cell density, but existing fusion methods do not highlight such targeted features, degrading the accuracy of heterogeneous region division.
Heterogeneity feature quantification is insufficiently associated with pathology: the features extracted by existing methods are mostly apparent image features (such as gray mean and texture entropy); deep features directly related to pathological mechanisms (such as diffusion coefficient gradients related to cell proliferation and perfusion parameter distributions related to angiogenesis) are not deeply mined, and the quantification process is not calibrated against pathological gold standards (such as the immunohistochemical Ki-67 index and CD34 microvascular density), so the quantification results are disconnected from actual pathological heterogeneity and cannot provide accurate pathology-level references for the clinic. Existing visualization methods mostly adopt simple pseudo-color labeling, which can only display the spatial distribution of heterogeneous regions and cannot intuitively show the intensity levels of different heterogeneity features, their correlations, or their correspondence with pathological indexes; it is therefore difficult for clinicians to quickly judge the malignancy, invasion range and pathological type of the tumor from the visualization result, which hinders treatment decision-making. The prior art has not formed a complete technical chain of "accurate multi-modal fusion - pathology-associated feature quantification - multi-dimensional visual presentation". Therefore, designing a multi-modal image-based brain glioma heterogeneity pathology visualization method that solves the problems of low fusion accuracy, weak feature-pathology correlation and poor visualization readability in the prior art has become a key technical breakthrough point for improving the precision of brain glioma diagnosis and treatment.
Disclosure of Invention The invention aims to provide a method and a device for visualizing heterogeneous pathology of glioma based on multi-modal images, so as to solve the problems in the prior art. To achieve this aim, the invention provides the following technical scheme: a method for visualizing the heterogeneous pathology of brain glioma based on multi-modal images, comprising the following steps: Step 1, preprocessing and registering multi-modal images, namely acquiring multi-modal MRI images of a brain glioma patient, the multi-modal MRI images comprising T1-enhanced, T2-FLAIR, DWI and PWI images, preprocessing the acquired modal images (denoising, gray-level normalization and skull stripping), and registering all modal images to the spatial coordinate system of the T1-enhanced image by adopting a rigid registration algorithm based on mutual information to obtain registered multi-modal images, so as to ensure accurate alignment of the spatial positions of the different modal images; Step 2, improved region-adaptive multi-modal image fusion, namely receiving the registered multi-modal images, first dividing a tumor core region, an edema region, an invasion region and a normal brain tissue region, then designing differentiated fusion strategies for the different regions, and adopting an improved attention-weighted fusion algorithm to realize the ac