CN-121982394-A - Glioma image segmentation method and glioma image segmentation system based on multi-feature fusion

CN121982394A

Abstract

The invention provides a glioma image segmentation method and system based on multi-feature fusion, in the technical field of medical image processing. The method comprises: preprocessing multi-modal image data to obtain spatially aligned multi-modal image data; from the aligned data, respectively extracting texture and metabolic features of the preoperative images, gray-gradient and morphological features of the intraoperative cone-beam CT images, and microscopic texture features of the intraoperative optical coherence tomography images; fusing the morphological features of the intraoperative cone-beam CT images with the microscopic texture features of the intraoperative optical coherence tomography images to construct a multi-modal spatial feature field; determining a spatial evaluation origin, a main evaluation axis, and an auxiliary evaluation axis based on the multi-modal spatial feature field; defining a dynamic spatial evaluation sector based on the spatial evaluation origin; and laying out a series of spatial feature observation points within the dynamic spatial evaluation sector. The method quantifies feature reliability, thereby achieving adaptive and precise feature fusion and improving the segmentation accuracy of glioma sub-regions.

Inventors

  • Li Yixuan
  • Wang Ying
  • Yue Juanqing
  • Zhang Yingying
  • Jia Mengxian

Assignees

  • Hangzhou First People's Hospital (Hangzhou First People's Hospital Affiliated to Westlake University)

Dates

Publication Date
2026-05-05
Application Date
2026-01-21

Claims (8)

  1. A glioma image segmentation method based on multi-feature fusion, characterized by comprising the following steps: acquiring multi-modal image data comprising preoperative multi-sequence magnetic resonance images, intraoperative cone-beam CT images, and intraoperative optical coherence tomography images; preprocessing the multi-modal image data to obtain spatially aligned multi-modal image data; based on the spatially aligned multi-modal image data, respectively extracting texture and metabolic features of the preoperative images, gray-gradient and morphological features of the intraoperative cone-beam CT images, and microscopic texture features of the intraoperative optical coherence tomography images; fusing the morphological features of the intraoperative cone-beam CT images with the microscopic texture features of the intraoperative optical coherence tomography images to construct a multi-modal spatial feature field; based on the multi-modal spatial feature field, determining a spatial evaluation origin, a main evaluation axis, and an auxiliary evaluation axis, and defining a dynamic spatial evaluation sector according to the spatial evaluation origin, the main evaluation axis, and the auxiliary evaluation axis; laying out a series of spatial feature observation points in the dynamic spatial evaluation sector, generating a spatio-temporal trajectory cluster reflecting feature evolution according to the temporal association of the intraoperative images, calculating the local curvature consistency and spatial vergence index of the trajectory cluster, and generating a spatial consistency correction factor for quantifying the local reliability of the intraoperative image features; performing adaptive-weight feature fusion on the texture and metabolic features, the gray-gradient and morphological features, and the microscopic texture features according to the spatial consistency correction factor to generate a dynamic fusion feature map; and inputting the dynamic fusion feature map into a segmentation network, and segmenting to obtain glioma sub-region results comprising an enhancing tumor region, a necrotic core region, and a peritumoral edema region.
  2. The glioma image segmentation method based on multi-feature fusion according to claim 1, wherein preprocessing the multi-modal image data to obtain spatially aligned multi-modal image data comprises: receiving the multi-modal image data, and performing format unification and noise filtering on the intraoperative cone-beam CT images and optical coherence tomography images to obtain intraoperative image data in a unified format; taking the preoperative multi-sequence magnetic resonance images as the spatial reference, rigidly registering the intraoperative image data in the unified format to obtain a coarse spatial registration result; based on the coarse spatial registration result, applying non-rigid local deformation correction to the intraoperative cone-beam CT images and optical coherence tomography images to establish an accurate spatial correspondence; according to the established spatial correspondence, resampling the preoperative multi-sequence magnetic resonance images, the intraoperative cone-beam CT images, and the optical coherence tomography images to the same spatial coordinate system and resolution to generate geometrically aligned image data; and performing intensity normalization and bias field correction on the geometrically aligned image data to generate the final spatially aligned multi-modal image data.
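The intensity normalization and resampling steps of claim 2 can be sketched as follows. This is a minimal illustrative sketch, not the patent's actual implementation: the function names `normalize_intensity` and `resample_to_shape` are hypothetical, and z-score normalization plus trilinear resampling are assumed as one plausible realization of "intensity normalization" and "resampling to the same resolution".

```python
import numpy as np
from scipy.ndimage import zoom

def normalize_intensity(volume, mask=None):
    """Z-score normalization over the (optionally masked) voxels."""
    voxels = volume[mask] if mask is not None else volume
    mu, sigma = float(voxels.mean()), float(voxels.std())
    return (volume - mu) / (sigma + 1e-8)

def resample_to_shape(volume, target_shape):
    """Trilinear resampling of a volume onto a target grid shape."""
    factors = [t / s for t, s in zip(target_shape, volume.shape)]
    return zoom(volume, factors, order=1)  # order=1: trilinear interpolation
```

Bias field correction (e.g. N4) and deformable registration would sit in front of these steps in a real pipeline; they are omitted here for brevity.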
  3. The glioma image segmentation method based on multi-feature fusion according to claim 2, wherein respectively extracting texture and metabolic features of the preoperative images, gray-gradient and morphological features of the intraoperative cone-beam CT images, and microscopic texture features of the intraoperative optical coherence tomography images from the spatially aligned multi-modal image data to construct a unified cross-modal feature space comprises: performing high-order texture analysis and perfusion parameter mapping on the preoperative multi-sequence magnetic resonance images in the final spatially aligned multi-modal image data to generate a texture feature map and a metabolic feature map of the preoperative images; performing three-dimensional gradient operations and local structure tensor analysis on the intraoperative cone-beam CT images in the final spatially aligned multi-modal image data to generate a gray-gradient feature map and a morphological feature map of the intraoperative cone-beam CT images; performing multi-scale filtering and local pattern statistics on the intraoperative optical coherence tomography images in the final spatially aligned multi-modal image data to generate a microscopic texture feature map of the intraoperative optical coherence tomography images; and fusing the texture feature map, the metabolic feature map, the gray-gradient feature map, the morphological feature map, and the microscopic texture feature map to construct the unified cross-modal feature space.
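Two of the per-modality feature maps in claim 3 have simple standard forms: a three-dimensional gray-gradient magnitude map and a local-variance micro-texture map. The sketch below assumes these specific definitions (the patent does not fix them); the function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def gradient_magnitude(vol):
    """Voxel-wise 3-D gray-gradient magnitude (one candidate CBCT feature)."""
    gz, gy, gx = np.gradient(vol.astype(np.float64))
    return np.sqrt(gx**2 + gy**2 + gz**2)

def local_texture(vol, size=3):
    """Local standard deviation in a size^3 window, a simple
    micro-texture descriptor for the OCT volume."""
    vol = vol.astype(np.float64)
    mean = uniform_filter(vol, size)
    sq_mean = uniform_filter(vol**2, size)
    # Clamp tiny negative values caused by floating-point cancellation.
    return np.sqrt(np.maximum(sq_mean - mean**2, 0.0))
```

Structure tensor analysis and multi-scale (e.g. Gabor or wavelet) filtering, as named in the claim, would extend these maps with orientation and scale information.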
  4. The glioma image segmentation method based on multi-feature fusion according to claim 3, wherein fusing the morphological features of the intraoperative cone-beam CT images with the microscopic texture features of the intraoperative optical coherence tomography images to construct the multi-modal spatial feature field, determining the spatial evaluation origin, the main evaluation axis, and the auxiliary evaluation axis based on the multi-modal spatial feature field, and defining the dynamic spatial evaluation sector based on them comprises: spatially fusing the morphological feature map of the intraoperative cone-beam CT images with the microscopic texture feature map of the intraoperative optical coherence tomography images to generate a multi-modal feature fusion map; extracting, from the multi-modal feature fusion map, the voxel coordinates and feature vectors whose feature values exceed a preset threshold, and constructing the multi-modal spatial feature field; performing density peak clustering analysis on the multi-modal spatial feature field to determine the center of the most densely distributed feature region as the spatial evaluation origin; within a preset radius R centered on the spatial evaluation origin, calculating the principal distribution directions of the multi-modal spatial feature field by principal component analysis, taking the first principal component direction as the main evaluation axis, and taking the second principal component direction, which is orthogonal to the main evaluation axis and captures the largest remaining variance, as the auxiliary evaluation axis; and defining the dynamic spatial evaluation sector in three-dimensional space with the spatial evaluation origin as the vertex and the main and auxiliary evaluation axes as the boundary directions.
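The origin-and-axes construction in claim 4 can be sketched with a simplified density estimate (neighbour count within a cutoff, a reduction of full density peak clustering) followed by PCA. This is an assumption-laden illustration, not the patent's method; `evaluation_frame` is a hypothetical name.

```python
import numpy as np

def evaluation_frame(points, radius=None):
    """Density-peak-style origin plus PCA evaluation axes.

    points: (N, 3) voxel coordinates whose feature value exceeded
    the preset threshold.
    """
    # Local density: number of neighbours within a cutoff distance.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    cutoff = radius if radius is not None else float(np.median(d))
    density = (d < cutoff).sum(axis=1)
    origin = points[np.argmax(density)]          # densest point = origin

    # PCA about the origin: first PC = main axis, second PC = auxiliary axis.
    centered = points - origin
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    main_axis = eigvecs[:, -1]
    aux_axis = eigvecs[:, -2]                    # orthogonal to main axis
    return origin, main_axis, aux_axis
```

A full density peak implementation would also compute each point's distance to its nearest higher-density neighbour before picking the cluster centre; the restriction to the preset radius R is likewise omitted here.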
  5. The glioma image segmentation method based on multi-feature fusion according to claim 4, wherein laying out a series of spatial feature observation points in the dynamic spatial evaluation sector, generating a spatio-temporal trajectory cluster reflecting feature evolution according to the temporal association of the intraoperative images, calculating the local curvature consistency and spatial vergence index of the trajectory cluster, and generating a spatial consistency correction factor for quantifying the local reliability of the intraoperative image features comprises: distributing a plurality of spatial feature observation points within the dynamic spatial evaluation sector according to a preset rule along its radial and circumferential directions; acquiring the multi-temporal intraoperative image features corresponding to the spatial feature observation points, and connecting the feature positions of the same observation point at different times in temporal order to generate the spatio-temporal trajectory cluster, which contains spatial position information and feature value information and reflects the temporal evolution pattern of the features; performing discrete curvature calculation on each trajectory in the spatio-temporal trajectory cluster, and evaluating the curvature consistency of all trajectories at corresponding time points to obtain a local curvature consistency index; calculating the spatial dispersion of all observation point positions of the trajectory cluster at each time step to obtain the spatial vergence index; and fusing the local curvature consistency index and the spatial vergence index to generate the spatial consistency correction factor for quantifying the local reliability of the intraoperative image features.
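One way to realize claim 5's indices is angle-based discrete curvature per trajectory, cross-trajectory curvature dispersion as the consistency index, and per-time-step positional spread as the vergence index, combined multiplicatively into a factor in (0, 1]. All of these concrete formulas are assumptions; the patent only names the quantities, and the function names are hypothetical.

```python
import numpy as np

def discrete_curvature(traj):
    """Turning angle at each interior point of one (T, 3) trajectory."""
    v1 = traj[1:-1] - traj[:-2]
    v2 = traj[2:] - traj[1:-1]
    cos = (v1 * v2).sum(-1) / (
        np.linalg.norm(v1, axis=-1) * np.linalg.norm(v2, axis=-1) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def consistency_factor(trajs):
    """trajs: (n_points, n_times, 3) spatio-temporal trajectory cluster.
    Returns a scalar in (0, 1]; higher means more reliable features."""
    curv = np.stack([discrete_curvature(t) for t in trajs])
    # Consistency: low curvature spread across trajectories at each time.
    curvature_consistency = 1.0 / (1.0 + curv.std(axis=0).mean())
    # Vergence: spatial spread of all observation points at each time step.
    spread = trajs.std(axis=0).mean()
    vergence = 1.0 / (1.0 + spread)
    return float(curvature_consistency * vergence)
```

Straight, parallel trajectories (features evolving coherently) yield a factor near its maximum; erratic or diverging trajectories push it toward zero.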
  6. The glioma image segmentation method based on multi-feature fusion according to claim 5, wherein performing adaptive-weight feature fusion on the texture and metabolic features, the gray-gradient and morphological features, and the microscopic texture features according to the spatial consistency correction factor to generate the dynamic fusion feature map comprises: respectively calculating, from the spatial consistency correction factor, the dynamic weight coefficients of the texture and metabolic features, the gray-gradient and morphological features, and the microscopic texture features at each spatial position; performing voxel-by-voxel weighted fusion of the texture and metabolic feature maps, the gray-gradient and morphological feature maps, and the microscopic texture feature map using the dynamic weight coefficients to generate a preliminary fusion feature map; and performing cross-channel feature interaction and spatial continuity optimization on the preliminary fusion feature map to generate the final dynamic fusion feature map.
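The voxel-by-voxel weighted fusion of claim 6 can be sketched with a softmax over per-modality reliability maps, so that unreliable intraoperative features are down-weighted wherever the correction factor is low. The softmax weighting is an assumed realization of the "dynamic weight coefficients", and `fuse_features` is a hypothetical name.

```python
import numpy as np

def fuse_features(feature_maps, reliabilities):
    """Voxel-wise softmax-weighted fusion of per-modality feature maps.

    feature_maps: list of arrays with identical shape (one per modality).
    reliabilities: list of arrays of the same shape, e.g. the spatial
    consistency correction factor broadcast per modality.
    """
    stack = np.stack(feature_maps)                     # (M, ...)
    rel = np.stack(reliabilities)
    w = np.exp(rel - rel.max(axis=0, keepdims=True))   # stable softmax
    w /= w.sum(axis=0, keepdims=True)                  # weights sum to 1
    return (w * stack).sum(axis=0)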
  7. The glioma image segmentation method based on multi-feature fusion according to claim 6, wherein inputting the dynamic fusion feature map into a segmentation network to obtain glioma sub-region results comprising the enhancing tumor region, the necrotic core region, and the peritumoral edema region comprises: inputting the final dynamic fusion feature map into a segmentation network with an encoder-decoder structure; processing the dynamic fusion feature map through the multi-level feature extraction and upsampling paths of the segmentation network to generate an initial tumor sub-region segmentation probability map; and performing morphological post-processing and region connectivity analysis on the initial tumor sub-region segmentation probability map to output the final glioma sub-region segmentation result comprising the enhancing tumor region, the necrotic core region, and the peritumoral edema region.
  8. A glioma image segmentation system based on multi-feature fusion, which implements the method according to any one of claims 1 to 7, comprising: a data acquisition module for acquiring multi-modal image data comprising preoperative multi-sequence magnetic resonance images, intraoperative cone-beam CT images, and optical coherence tomography images; a data preprocessing module for preprocessing the multi-modal image data to obtain spatially aligned multi-modal image data; a feature extraction module for respectively extracting texture and metabolic features of the preoperative images, gray-gradient and morphological features of the intraoperative cone-beam CT images, and microscopic texture features of the intraoperative optical coherence tomography images from the spatially aligned multi-modal image data; a construction and definition module for fusing the morphological features of the intraoperative cone-beam CT images with the microscopic texture features of the intraoperative optical coherence tomography images to construct a multi-modal spatial feature field, determining a spatial evaluation origin, a main evaluation axis, and an auxiliary evaluation axis based on the field, and defining a dynamic spatial evaluation sector; a layout and calculation module for laying out a series of spatial feature observation points in the dynamic spatial evaluation sector, generating a spatio-temporal trajectory cluster reflecting feature evolution, calculating its local curvature consistency and spatial vergence index, and generating a spatial consistency correction factor; an adaptive fusion module for performing adaptive-weight feature fusion on the texture and metabolic features, the gray-gradient and morphological features, and the microscopic texture features according to the spatial consistency correction factor to generate a dynamic fusion feature map; and a final execution module for inputting the dynamic fusion feature map into a segmentation network to obtain glioma sub-region results comprising the enhancing tumor region, the necrotic core region, and the peritumoral edema region.

Description

Glioma image segmentation method and glioma image segmentation system based on multi-feature fusion

Technical Field

The invention relates to the technical field of medical image processing, and in particular to a glioma image segmentation method and system based on multi-feature fusion.

Background

Accurate delineation of the enhancing tumor region, the necrotic core region, and the peritumoral edema region of a glioma is crucial for surgical planning, intraoperative navigation, and prognosis evaluation. The boundary between tumor and normal brain tissue is blurred, and the pathological characteristics of the sub-regions differ markedly, so segmentation accuracy must be improved by relying on multi-modal image data. Currently, the fused application of preoperative multi-sequence magnetic resonance images, intraoperative cone-beam CT images, and optical coherence tomography images has become the dominant technical direction. However, existing fusion segmentation methods share a technical defect: the local reliability of intraoperative image features is not quantified, and cross-modal feature fusion is performed directly with fixed weights or a simple adaptive strategy.
These defects cause a series of problems. Intraoperative images are easily affected by the operation itself, by equipment noise, and by tissue deformation, so feature reliability differs markedly across spatial positions; for example, intraoperative image features at the tumor margin are easily biased by tissue movement. Fixed-weight fusion lets such unreliable features contribute on equal terms with the stable preoperative features, blurring the sub-region boundaries and particularly degrading the discrimination between the enhancing tumor region and the peritumoral edema region. Moreover, intraoperative images are temporally correlated and the tumor features evolve dynamically as surgery progresses, yet the prior art does not verify local reliability through the feature evolution pattern in the spatio-temporal dimension, so the fused features cannot accurately reflect the true pathological state of the tumor, ultimately causing sub-region segmentation errors. These problems severely restrict the application of multi-modal image fusion to precise glioma segmentation and make it difficult to meet the stringent clinical requirements on sub-region segmentation accuracy.

Disclosure of the Invention

The technical problem to be solved by the invention is to provide a glioma image segmentation method and a glioma image segmentation system based on multi-feature fusion that quantify feature reliability, thereby achieving adaptive and precise feature fusion and improving the segmentation accuracy of glioma sub-regions.
In order to solve the above technical problem, the technical scheme of the invention is as follows. In a first aspect, a glioma image segmentation method based on multi-feature fusion comprises: acquiring multi-modal image data comprising preoperative multi-sequence magnetic resonance images, intraoperative cone-beam CT images, and optical coherence tomography images; preprocessing the multi-modal image data to obtain spatially aligned multi-modal image data; based on the spatially aligned multi-modal image data, respectively extracting texture and metabolic features of the preoperative images, gray-gradient and morphological features of the intraoperative cone-beam CT images, and microscopic texture features of the intraoperative optical coherence tomography images; fusing the morphological features of the intraoperative cone-beam CT images with the microscopic texture features of the intraoperative optical coherence tomography images to construct a multi-modal spatial feature field; based on the multi-modal spatial feature field, determining a spatial evaluation origin, a main evaluation axis, and an auxiliary evaluation axis, and defining a dynamic spatial evaluation sector according to them; laying out a series of spatial feature observation points in the dynamic spatial evaluation sector, generating a spatio-temporal trajectory cluster reflecting feature evolution according to the temporal association of the intraoperative images, calculating the local curvature consistency and spatial vergence index of the trajectory cluster, and generating a spatial consistency correction factor for quantifying the local reliability of the intraoperative image features; performing adaptive-weight feature fusion on the texture and metabolic features, the gray-gradient and morphological features, and the microscopic texture features according to the spatial consistency correction factor to generate a dynamic fusion feature map; and inputting the dynamic fusion feature map into a segmentation network, and segmenting to obtain glioma sub-region results comprising the enhancing tumor region, the necrotic core region, and the peritumoral edema region. Wherein the cross-modal feature space is a high-dimensional feature set formed