CN-122023788-A - Perfusion image lesion area identification method and device, storage medium and electronic device

CN122023788A

Abstract

The application discloses a perfusion image lesion area identification method and device, a storage medium, and an electronic device, relating to the technical field of image processing. The method comprises: preprocessing a perfusion image to be identified to obtain first images corresponding to different modal parameters; performing downsampling feature extraction on the first images with the encoding network of an image recognition model to obtain first feature images (at deep network layers) and second feature images (at shallow network layers); performing feature fusion on the first feature images with the model's feature fusion network to obtain third feature images at the deep layers of the encoding network; stitching the second or third feature images belonging to the same encoder layer to obtain a stitched image for each network layer of the encoding network; and performing upsampling feature fusion on the stitched images with the model's decoding network to obtain the lesion area recognition result for the perfusion image. The method can improve the accuracy of lesion area identification.

Inventors

  • QI SHOULIANG
  • ZANG PEIZHUO
  • JU RONGHUI
  • ZHAO HUIHE
  • CHEN SHANNAN
  • LIU LINGKAI
  • ZHOU BO
  • LI HONGYI

Assignees

  • Northeastern University (东北大学)
  • Liaoning Provincial People's Hospital (辽宁省人民医院)

Dates

Publication Date
2026-05-12
Application Date
2025-12-18

Claims (10)

  1. A perfusion image lesion area identification method, comprising: preprocessing a perfusion image to be identified to obtain first images respectively corresponding to different modal parameters; performing downsampling feature extraction on the first images with the encoding network of a pre-trained image recognition model to obtain, for each first image, a first feature image at a deep network layer of the encoding network and a second feature image at a shallow network layer of the encoding network; performing feature fusion on the first feature images with the feature fusion network of the image recognition model to obtain a third feature image at the deep network layer of the encoding network for each first image; stitching the second feature images or the third feature images belonging to the same network layer of the encoding network to obtain a stitched image for each network layer of the encoding network; and performing upsampling feature fusion on the stitched images with the decoding network of the image recognition model to obtain a lesion region recognition result for the perfusion image. (Hedged illustrative sketches of claims 2-7 follow the claims section.)
  2. The method of claim 1, wherein performing downsampling feature extraction on the first images with the encoding network of the pre-trained image recognition model to obtain the first feature image at a deep network layer and the second feature image at a shallow network layer of the encoding network specifically comprises: for the first image of each modal parameter, performing independent downsampling feature extraction layer by layer from the first shallow network layer of the encoding network, the output of each network layer serving as the input of the next, until the bottom deep network layer of the encoding network performs downsampling feature extraction on the output of the preceding deep network layer, thereby obtaining the first feature image at the deep network layer and the second feature images at the shallow network layers of the encoding network; wherein each independent downsampling feature extraction comprises feature extraction, feature transformation, quasi-normalization, nonlinear mapping, and max pooling.
  3. The method of claim 1, wherein performing feature fusion on the first feature images with the feature fusion network of the image recognition model to obtain the third feature image at the deep network layer of the encoding network specifically comprises: pairing the first feature images of the same deep network layer based on the correlation of their modal parameters to obtain a plurality of first-feature-image pairs; performing spatial fusion on each first-feature-image pair with the spatial attention module of the feature fusion network to obtain a spatial fusion feature image; performing channel fusion on the spatial fusion feature image with the channel attention module of the feature fusion network to obtain a channel fusion feature image; and superposing the channel fusion feature image onto each first feature image of the same pair to obtain the third feature image corresponding to each first feature image.
  4. The method of claim 3, wherein performing spatial fusion on each first-feature-image pair with the spatial attention module of the feature fusion network to obtain the spatial fusion feature image specifically comprises: extracting boundary information from each first feature image of the pair to obtain a boundary feature image for each first feature image; performing feature interaction and weighting on the boundary feature images to obtain a feature relation matrix; and performing image fusion on the first feature images based on the feature relation matrix to obtain the spatial fusion feature image.
  5. The method of claim 3, wherein performing channel fusion on the spatial fusion feature image with the channel attention module of the feature fusion network to obtain the channel fusion feature image specifically comprises: performing dot-product fusion on the first feature images of the same pair to obtain a dot-product fusion image; computing a channel autocorrelation matrix from the dot-product fusion image and its transpose; multiplying the autocorrelation matrix by its transpose to obtain a channel attention weight matrix; and multiplying the channel attention weight matrix by the dot-product fusion image to obtain the channel fusion feature image.
  6. The method of claim 1, wherein performing upsampling feature fusion on the stitched images with the decoding network of the image recognition model to obtain the lesion region recognition result of the perfusion image specifically comprises: for the bottom network layer and the sub-bottom network layer of the decoding network, performing image reconstruction on the stitched image with the decoding spatial-and-channel attention network of the decoding network to obtain an initial reconstructed image; and, starting from the sub-bottom network layer of the decoding network, performing upsampling feature fusion from bottom to top based on the initial reconstructed image and the stitched images, layer by layer feeding the reconstructed image produced by the preceding network layer's feature fusion together with the current network layer's stitched image into the decoding spatial-and-channel attention network for image reconstruction, until the stitched image of the first shallow network layer and the feature fusion result of the preceding network layer are reconstructed by the decoding spatial-and-channel attention network, the resulting reconstructed image being determined as the lesion region image.
  7. The method of claim 6, wherein performing image reconstruction on the stitched image with the decoding spatial-and-channel attention network of the decoding network for the bottom network layer and the sub-bottom network layer to obtain the initial reconstructed image specifically comprises: stitching the stitched images along the channel dimension to obtain a first stitched image; extracting channel attention features from the first stitched image to obtain a channel-enhanced image; computing spatial weights from the channel-enhanced image; and multiplying the spatial weights by the channel-enhanced image to obtain the initial reconstructed image.
  8. A perfusion image lesion area identification device, comprising: a preprocessing module for preprocessing a perfusion image to be identified to obtain first images respectively corresponding to different modal parameters; a downsampling module for performing downsampling feature extraction on the first images with the encoding network of a pre-trained image recognition model to obtain, for each first image, a first feature image at a deep network layer of the encoding network and a second feature image at a shallow network layer of the encoding network; a feature fusion module for performing feature fusion on the first feature images with the feature fusion network of the image recognition model to obtain a third feature image at the deep network layer of the encoding network for each first image; a stitching module for stitching the second feature images or the third feature images belonging to the same network layer of the encoding network to obtain a stitched image for each network layer of the encoding network; and an upsampling module for performing upsampling feature fusion on the stitched images with the decoding network of the image recognition model to obtain a lesion region recognition result for the perfusion image.
  9. A storage medium storing a computer program which, when executed by a processor, performs the steps of the perfusion image lesion area identification method according to any one of claims 1-7.
  10. An electronic device comprising at least a memory and a processor, the memory storing a computer program, and the processor, when executing the computer program on the memory, implementing the steps of the perfusion image lesion area identification method according to any one of claims 1-7.
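The following sketches illustrate claims 2 through 7 in PyTorch. They are hedged reconstructions, not the patented implementation: the claims fix the order of operations but not layer counts, kernel sizes, channel widths, or normalization choices, so every such detail below is an assumption. First, one independent downsampling stage per claim 2; "quasi-normalization" is not defined in the translated claims, and batch normalization is assumed purely for illustration.

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """One encoder stage per claim 2: feature extraction -> feature
    transformation -> (assumed) normalization -> nonlinear mapping -> max pooling."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.extract = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)    # feature extraction
        self.transform = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1) # feature transformation
        self.norm = nn.BatchNorm2d(out_ch)  # assumed stand-in for "quasi-normalization"
        self.act = nn.ReLU(inplace=True)    # nonlinear mapping
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(self.act(self.norm(self.transform(self.extract(x)))))

# Applied layer by layer, independently per modal parameter: the shallow
# layers' outputs are the "second feature images"; the bottom layer's output
# is the deep-layer "first feature image".
x = torch.randn(1, 1, 128, 128)  # one first image of one modality
second_feature_images = []
for blk in (DownBlock(1, 16), DownBlock(16, 32), DownBlock(32, 64)):
    x = blk(x)
    second_feature_images.append(x)
first_feature_image = second_feature_images.pop()  # (1, 64, 16, 16)
```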
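Claim 3's pairing and residual superposition can be sketched with the two attention modules passed in as callables (concrete versions follow under claims 4 and 5). The pairing of CBF with CBV and TTP with TMax is only an assumed example of pairing by "correlation of the modal parameters", and the lambdas are placeholders that demonstrate the plumbing.

```python
import torch
from typing import Callable, Dict, List, Tuple

def fuse_pairs(deep_feats: Dict[str, torch.Tensor],
               pairs: List[Tuple[str, str]],
               spatial_fuse: Callable,
               channel_fuse: Callable) -> Dict[str, torch.Tensor]:
    """deep_feats: modality -> deep-layer first feature image (B, C, H, W)."""
    third = {}
    for a, b in pairs:
        s = spatial_fuse(deep_feats[a], deep_feats[b])  # spatial attention module (claim 4)
        c = channel_fuse(s)                             # channel attention module (claim 5)
        third[a] = deep_feats[a] + c  # superpose onto each member of the pair ->
        third[b] = deep_feats[b] + c  # one third feature image per first feature image
    return third

feats = {m: torch.randn(1, 8, 16, 16) for m in ("CBF", "CBV", "TTP", "TMax")}
pairs = [("CBF", "CBV"), ("TTP", "TMax")]  # assumed correlation-based pairing
third = fuse_pairs(feats, pairs,
                   spatial_fuse=lambda a, b: (a + b) / 2,  # placeholder
                   channel_fuse=lambda s: s)               # placeholder
```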
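For claim 4's spatial attention module, the boundary extractor (a fixed Laplacian kernel) and the softmax-normalized relation matrix are assumptions; the claim names the steps but not the operators.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Fixed Laplacian filter, assumed here as the boundary extractor.
        lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
        self.register_buffer("kernel", lap.expand(channels, 1, 3, 3).clone())
        self.channels = channels

    def boundary(self, x: torch.Tensor) -> torch.Tensor:
        # Boundary feature image of one first feature image.
        return F.conv2d(x, self.kernel, padding=1, groups=self.channels)

    def forward(self, xa: torch.Tensor, xb: torch.Tensor) -> torch.Tensor:
        """xa, xb: one first-feature-image pair, each (B, C, H, W)."""
        ba, bb = self.boundary(xa), self.boundary(xb)
        B, C, H, W = ba.shape
        qa, qb = ba.flatten(2), bb.flatten(2)            # (B, C, HW)
        # Feature interaction and weighting: pixel-wise relation matrix.
        rel = torch.softmax(qa.transpose(1, 2) @ qb, dim=-1)  # (B, HW, HW)
        # Fuse the first feature images under the relation matrix.
        fused = (xa.flatten(2) @ rel + xb.flatten(2) @ rel.transpose(1, 2)) / 2
        return fused.view(B, C, H, W)                    # spatial fusion feature image

sf = SpatialFusion(channels=8)
out = sf(torch.randn(1, 8, 16, 16), torch.randn(1, 8, 16, 16))  # (1, 8, 16, 16)
```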
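Claim 5's channel attention module follows the claim's matrix products literally; the element-wise form of the dot-product fusion and the softmax normalization are assumptions.

```python
import torch

def channel_fusion(xa: torch.Tensor, xb: torch.Tensor) -> torch.Tensor:
    """xa, xb: one first-feature-image pair, each (B, C, H, W)."""
    B, C, H, W = xa.shape
    # Dot-product fusion of the pair (element-wise product assumed).
    dot = (xa * xb).flatten(2)                          # (B, C, HW)
    # Channel autocorrelation matrix from the fusion image and its transpose.
    auto = dot @ dot.transpose(1, 2)                    # (B, C, C)
    # Channel attention weights from the autocorrelation matrix and its transpose.
    weights = torch.softmax(auto @ auto.transpose(1, 2), dim=-1)  # (B, C, C)
    # Re-weight the dot-product fusion image channel by channel.
    return (weights @ dot).view(B, C, H, W)             # channel fusion feature image

fused = channel_fusion(torch.randn(2, 8, 32, 32), torch.randn(2, 8, 32, 32))
```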
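Claim 6's bottom-up decoding loop, sketched with bilinear upsampling and a generic reconstruct callable (a concrete reconstruction block follows under claim 7). The claim's exact pairing of the bottom and sub-bottom layers for the initial reconstruction is ambiguous in translation; here the initial reconstruction is taken from the bottom stitched image alone.

```python
import torch
import torch.nn.functional as F
from typing import Callable, List

def decode(stitched: List[torch.Tensor],
           reconstruct: Callable[[torch.Tensor], torch.Tensor]) -> torch.Tensor:
    """stitched: stitched images ordered shallow -> deep, one per encoder layer."""
    # Initial reconstruction from the bottom (deepest) stitched image.
    recon = reconstruct(stitched[-1])
    # From the sub-bottom layer upward: upsample the previous reconstruction,
    # fuse it with the current layer's stitched image, reconstruct again.
    for skip in reversed(stitched[:-1]):
        up = F.interpolate(recon, size=skip.shape[-2:], mode="bilinear",
                           align_corners=False)
        recon = reconstruct(torch.cat([skip, up], dim=1))
    return recon  # lesion-region image at the first (shallowest) layer

sizes = [(1, 8, 64, 64), (1, 8, 32, 32), (1, 8, 16, 16)]
stitched = [torch.randn(*s) for s in sizes]
mask = decode(stitched, reconstruct=lambda x: x[:, :8])  # placeholder reconstruction
```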
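Claim 7's "decoding spatial-and-channel attention" reconstruction, with squeeze-and-excitation-style channel attention and a 1x1 convolution for the spatial weight map assumed for illustration.

```python
import torch
import torch.nn as nn
from typing import List

class ReconstructBlock(nn.Module):
    def __init__(self, in_ch: int, reduction: int = 4):
        super().__init__()
        self.channel_gate = nn.Sequential(   # channel attention features
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, in_ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch // reduction, in_ch, 1), nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(   # spatial weight map
            nn.Conv2d(in_ch, 1, kernel_size=1), nn.Sigmoid(),
        )

    def forward(self, stitched: List[torch.Tensor]) -> torch.Tensor:
        x = torch.cat(stitched, dim=1)       # stitch along the channel dimension
        enhanced = x * self.channel_gate(x)  # channel-enhanced image
        weight = self.spatial_gate(enhanced) # spatial weight (B, 1, H, W)
        return enhanced * weight             # initial reconstructed image

block = ReconstructBlock(in_ch=32)
out = block([torch.randn(1, 16, 16, 16), torch.randn(1, 16, 16, 16)])  # (1, 32, 16, 16)
```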

Description

Perfusion image lesion area identification method and device, storage medium and electronic device

Technical Field

The present invention relates to the field of image processing technologies, and in particular to a perfusion image lesion area identification method and apparatus, a storage medium, and an electronic device.

Background

Acute ischemic stroke (AIS) is a major cause of death. AIS is a neurological dysfunction usually caused by a thrombus or embolus occluding a blood vessel that supplies a specific brain region. Vascular occlusion typically causes permanent and irreversible damage to the ischemic core, while damage to the penumbra is typically reversible. Timely localization and quantification of the ischemic core and penumbra is therefore of clinical significance for AIS treatment, neuroprotection, and saving lives.

Magnetic resonance imaging (MRI) and computed tomography (CT) are the most important imaging modalities for evaluating AIS. In particular, CT perfusion imaging (CTP) and perfusion weighted imaging (PWI) can assess cerebral hemodynamics. Although PWI is superior to CTP in tissue contrast, its long acquisition time limits its use in acute cases; CTP is therefore widely used to evaluate AIS owing to its advantages in cost, speed, and availability. To obtain CTP images, a contrast agent is injected intravenously and the amount of contrast agent passing through the blood vessels is detected. The temporal intensity variation of individual voxels is analyzed to derive perfusion parameters, including time to peak (TTP), mean transit time (MTT), cerebral blood flow (CBF), cerebral blood volume (CBV), and time to maximum (TMax). These perfusion maps are widely used to delineate the core and identify the penumbra (a simplified numerical sketch of these per-voxel parameters follows this background). Other MRI sequences, including T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and diffusion weighted imaging (DWI), are also used to evaluate AIS. In particular, T1-weighted images give better contrast for tissue morphology, while T2-weighted images are more sensitive to cerebrospinal fluid and lesions. DWI can detect white matter fiber tracts and has become a common imaging modality in stroke centers, with unique advantages in identifying acute infarcts. A comprehensive picture of the pathological process requires temporal parameters (TTP, MTT, TMax), blood parameters (CBF, CBV), anatomical parameters (T1WI, T2WI), and diffusion parameters (DWI).

Currently, most AIS lesion segmentation methods are based on a single modality. However, multiple imaging modalities can capture pathology with complementary information, and this information must be sufficiently integrated to improve segmentation performance. In previous studies, Kumar et al. designed a classifier-segmentation network that further improves model accuracy by removing redundant parts of the segmentation module's input. Similarly, Yousaf et al. used a densely connected method to detect and classify two brain diseases simultaneously. However, both methods treat the multi-modality data as a single unified input without fully exploiting the unique and complementary information of modalities such as DWI and the perfusion parameter maps, which limits the generalization and performance of the models; as a result, perfusion image lesion area identification is not accurate.
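As a simplified numerical illustration of how the per-voxel perfusion parameters above are read off a contrast time-intensity curve: real CTP pipelines deconvolve the curves against an arterial input function before computing CBF and MTT, so the direct estimates below (and the sampling interval dt) are didactic simplifications only.

```python
import numpy as np

def perfusion_maps(ctp: np.ndarray, dt: float = 1.0):
    """ctp: (T, H, W) contrast time-intensity curves, one per voxel."""
    ttp = ctp.argmax(axis=0) * dt         # time to peak (TTP)
    cbv = ctp.sum(axis=0) * dt            # ~ area under the curve (CBV proxy)
    cbf = ctp.max(axis=0)                 # crude stand-in for deconvolved peak flow
    mtt = cbv / np.clip(cbf, 1e-6, None)  # central volume principle: MTT = CBV / CBF
    return ttp, cbv, mtt

ttp, cbv, mtt = perfusion_maps(np.random.rand(40, 128, 128), dt=1.5)
```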
Disclosure of Invention

In view of the above, the present invention provides a method, a device, a storage medium, and an electronic device for identifying the lesion area of a perfusion image, mainly aiming to solve the problem that perfusion image lesion areas are currently not identified accurately.

To solve the above problem, the present application provides a perfusion image lesion area identification method, comprising: preprocessing a perfusion image to be identified to obtain first images respectively corresponding to different modal parameters; performing downsampling feature extraction on the first images with the encoding network of a pre-trained image recognition model to obtain, for each first image, a first feature image at a deep network layer of the encoding network and a second feature image at a shallow network layer of the encoding network; performing feature fusion on the first feature images with the feature fusion network of the image recognition model to obtain a third feature image at the deep network layer of the encoding network for each first image; stitching the second feature images or the third feature images belonging to the same network layer of the encoding network to obtain a stitched image for each network layer of the encoding network; and performing upsampling feature fusion on the stitched images with the decoding network of the image recognition model to obtain a lesion region recognition result for the perfusion image.