CN-121837059-B - H & E image enhancement method and system based on biological mechanism and frequency attention

CN121837059B

Abstract

The invention discloses an H & E image enhancement method and system based on a biological mechanism and frequency attention. The method first obtains a virtual staining dataset, preprocesses the images it contains to generate input image data, and partitions the dataset accordingly. Second, it constructs an H & E image virtual staining model based on biological tone perception and frequency-separation attention, and performs feature extraction on the input image data to obtain a virtually stained H & E image, completing the enhancement. Finally, it constructs a composite loss function combining an adversarial loss, a cycle consistency loss, and an identity loss, performs parameter optimization training on the H & E image virtual staining model, and carries out test evaluation. The invention effectively improves the color consistency and stability of the virtual staining result while preserving tissue-structure information, and retains the fine details of pathological tissue while suppressing background artifact interference.

Inventors

  • YANG JIE

Assignees

  • Hangzhou Dianzi University

Dates

Publication Date
2026-05-08
Application Date
2026-03-11

Claims (5)

  1. An H & E image enhancement method based on a biological mechanism and frequency attention, characterized by comprising the following steps:

     Step 1: obtain a virtual staining dataset, preprocess the images it contains to generate input image data, and partition the dataset accordingly; specifically, the virtual staining dataset is obtained, the IHC and H & E images it contains are preprocessed, tissue-region extraction is performed on each whole-slide image, and background regions are removed.

     Step 2: construct an H & E image virtual staining model based on biological tone perception and frequency-separation attention, perform feature extraction on the input image data to obtain a virtually stained H & E image, and complete the enhancement. The virtual staining model is built through cooperative adversarial training of a generator and a discriminator: the generator realizes the virtual staining mapping between IHC and H & E images, the discriminator distinguishes generated images from real stained images, and the two jointly optimize the staining effect. The generator adopts an overall encoder-residual connection-decoder architecture, in which the encoder performs feature encoding and representation learning on the preprocessed IHC and H & E images, the decoder reconstructs the image representation of the target staining color gamut from the feature space, and feature information is transferred between encoder and decoder through residual connections. A biological tone perception mechanism and a frequency-separation attention mechanism are introduced into the generator to model and regulate the color information and structural information of the pathological image in a targeted manner. The biological tone perception mechanism uses an encoder with biological tone perception to encode the input image into encoding features, and introduces in the feature-extraction stage a color-perception modulation structure that simulates an inhibition-facilitation receptive-field mechanism, enhancing color discrimination while preserving the consistency of tissue structure; the frequency-separation attention mechanism reconstructs the encoding features stage by stage with a decoder based on frequency-separation attention to generate the virtual staining image.

     Step 3: construct a composite loss function combining an adversarial loss, a cycle consistency loss, and an identity loss, perform parameter optimization training on the H & E image virtual staining model, and carry out test evaluation.
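The tissue-region extraction in Step 1 is not specified beyond "background regions are removed". A minimal sketch of one common approach (saturation thresholding of patches; the function name, thresholds, and HSV heuristic are illustrative assumptions, not the patent's procedure):

```python
import numpy as np

def is_tissue_patch(rgb_patch, sat_threshold=0.05, min_tissue_fraction=0.2):
    """Heuristic background filter for H & E / IHC patches.

    Tissue pixels tend to be saturated (pink/purple/brown), while glass
    background is near-white and unsaturated. rgb_patch: float array in
    [0, 1] of shape (H, W, 3).
    """
    r, g, b = rgb_patch[..., 0], rgb_patch[..., 1], rgb_patch[..., 2]
    mx = np.maximum(np.maximum(r, g), b)
    mn = np.minimum(np.minimum(r, g), b)
    # HSV-style saturation; guard against division by zero on black pixels
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-8), 0.0)
    tissue_fraction = float(np.mean(sat > sat_threshold))
    return tissue_fraction >= min_tissue_fraction

# A near-white "background" patch is rejected, a colored one is kept
background = np.full((64, 64, 3), 0.95)
tissue = np.zeros((64, 64, 3)); tissue[..., 0] = 0.8; tissue[..., 2] = 0.6
print(is_tissue_patch(background), is_tissue_patch(tissue))  # False True
```

In practice such a filter is run over tiles of the whole-slide image, and only tiles passing the tissue test enter the training set.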
  2. The H & E image enhancement method based on biological mechanism and frequency attention according to claim 1, wherein in the biological tone perception mechanism, the encoder with biological tone perception first extracts preliminary features from the preprocessed IHC and H & E images through the input layer, then introduces an inhibition-facilitation receptive-field modulation module simulating the receptive properties of the biological visual system, and adaptively adjusts the preliminary features, specifically as follows:

     First, the input image is sequentially processed by reflection padding, a convolution operation, instance normalization, and nonlinear activation to obtain a depth feature map F. A difference-of-Gaussians function DoG_σ(x, y) models the response of the inhibition-facilitation receptive field, where (x, y) denotes the coordinate position and σ the scale, which determines the size of the inhibition-facilitation receptive field relative to the classical receptive field; the side-zone inhibition and end-zone facilitation structure of the receptive field is represented by butterfly-shaped receptive fields at different orientations.

     Within the inhibition range of the receptive field, positions where the response DoG_σ(x, y) is non-negative retain their value and all other positions are set to zero, yielding the inhibition weight w⁻_σ(x, y) at position (x, y); convolving the depth feature map F with the inhibition weight w⁻_σ gives the inhibition amount S_σ of the inhibition-facilitation receptive field. Within the facilitation range, positions where the response is non-negative and lie inside the i-th butterfly receptive-field region B_i (the region at the i-th orientation) retain their value and all other positions are set to zero, yielding the facilitation weight w⁺_{σ,i}(x, y); convolving F with w⁺_{σ,i} gives the facilitation amount at one orientation, and taking the maximum over the facilitation results of all orientations gives the facilitation amount T_σ of the inhibition-facilitation receptive field.

     The difference between the facilitation amount T_σ and the inhibition amount S_σ gives the influence Δ_σ = T_σ − ρ·S_σ of the inhibition-facilitation receptive field on the classical receptive field, where ρ represents the ratio of the numbers of cells in the visual cortex exerting inhibitory and facilitatory effects. Superposing Δ_σ over the different scales σ gives the total influence Δ of the inhibition-facilitation receptive field on the classical receptive field, and the feature response after modulation is R = F + β·Δ, where β represents the coupling strength between the inhibition-facilitation receptive field and the classical receptive field. The modulated feature response R is sent to a feature fusion layer and passed through convolution, instance normalization, and a nonlinear activation function to obtain the encoder output.
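The difference-of-Gaussians weighting described above can be sketched numerically. The kernel size, the surround-to-center ratio k, and the L1 normalization below are illustrative assumptions rather than the patent's exact formulation:

```python
import numpy as np

def dog_response(size, sigma, k=4.0):
    """Difference-of-Gaussians response on a (size x size) grid.

    DoG(x, y) = G_{k*sigma}(x, y) - G_{sigma}(x, y): positive in an
    annulus outside the classical receptive field, negative at the
    center. k is the assumed size ratio of the non-classical surround.
    """
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    g = lambda s: np.exp(-r2 / (2 * s * s)) / (2 * np.pi * s * s)
    return g(k * sigma) - g(sigma)

def inhibition_weights(size, sigma):
    """Half-wave rectify the DoG response (keep non-negative values,
    zero elsewhere) and L1-normalize it into a convolution kernel for
    computing the inhibition amount."""
    dog = dog_response(size, sigma)
    w = np.maximum(dog, 0.0)          # keep non-negative responses only
    return w / max(np.sum(w), 1e-12)  # L1 normalization

w = inhibition_weights(15, 1.5)
# The inhibition kernel is non-negative, sums to 1, and vanishes at the
# center, where the classical receptive field dominates
print(w.min() >= 0, abs(w.sum() - 1) < 1e-9, w[7, 7] == 0)
```

The facilitation weights would be built the same way, but additionally masked by an orientation-dependent butterfly region before normalization.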
  3. The H & E image enhancement method based on biological mechanism and frequency attention according to claim 2, wherein the decoder based on the frequency-separation attention mechanism constructs a dual-path decoding structure based on the idea of frequency separation, comprising a high-frequency decoding path and a low-frequency decoding path; in the high-frequency decoding path, structural information is continuously extracted by taking residuals between the input features and progressive up-sampling results, and in the low-frequency decoding path, the continuity and consistency of the overall color distribution of the pathological image are maintained by stable transfer of the input features and channel-level response adjustment, specifically as follows:

     In the high-frequency decoding path, the decoder up-samples the features in a progressive reconstruction manner and uses the difference between the progressive reconstruction result and the input features to generate high-frequency residual information: by repeatedly subtracting the result of progressive up-sampling followed by down-sampling from the original input, high-frequency structural information is extracted and refined layer by layer. A channel-spatial joint attention module CSAM computes channel-attention weights over the channels of the high-frequency feature map and spatial-attention weights over its spatial positions, then adaptively fuses the residual information with these two weight parameters to generate the high-frequency information at each stage; finally, the stage-wise high-frequency information is concatenated, passed through a deconvolution operation, and through an activation function to obtain the output of the high-frequency path.

     In the low-frequency color decoding path, the decoder stably transfers the features related to the overall staining distribution, and a channel attention module CAM computes the responses of the different channels of the feature map for recalibration and reweighting to generate the low-frequency information; finally, the stage-wise low-frequency information is concatenated, passed through a deconvolution operation, and through an activation function to obtain the output of the low-frequency path.

     The high-frequency and low-frequency outputs are added element-wise, reflection-padded, and convolved, and the size and channel number of the output feature are finally adjusted through a hyperbolic tangent activation, yielding an image result of the same size as the input image; H & E image enhancement is thus completed.
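The high/low-frequency split underlying the dual-path decoder can be sketched as a one-level Laplacian-pyramid-style decomposition; the pooling choices below are illustrative assumptions, not the patent's layers:

```python
import numpy as np

def downsample2(x):
    """2x2 average-pool downsampling of a 2D array."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(x):
    """Nearest-neighbor 2x upsampling of a 2D array."""
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def frequency_split(x):
    """Split a feature map into a smooth low-frequency component and a
    high-frequency residual obtained by subtracting the down/up-sampled
    reconstruction from the original input."""
    low = upsample2(downsample2(x))
    high = x - low          # residual keeps edges / fine structure
    return low, high

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
low, high = frequency_split(x)
# The split is exact: low + high reconstructs the input
print(np.allclose(low + high, x))  # True
```

In the claimed decoder this subtraction is applied layer by layer, with the CSAM/CAM attention modules reweighting the high- and low-frequency components before they are recombined.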
  4. The H & E image enhancement method based on biological mechanism and frequency attention according to claim 3, wherein the adversarial loss constrains the virtual staining images output by the generator to approach real staining images in overall distribution, improving the authenticity and naturalness of the generated result; the cycle consistency loss constrains the bidirectional mapping between different staining domains, maintaining the consistency of tissue-structure information during the virtual staining process; and the identity loss suppresses unnecessary color-mapping changes, enhancing the stability of the model in terms of color preservation.
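The composite objective can be sketched as follows; the least-squares adversarial form and the weights lam_cyc / lam_id are common CycleGAN-style defaults assumed here, not values stated in the patent:

```python
import numpy as np

def composite_loss(d_fake, real_ihc, cycled_ihc, real_he, identity_he,
                   lam_cyc=10.0, lam_id=5.0):
    """Composite generator objective for IHC -> H & E virtual staining.

    d_fake:      discriminator scores on generated H & E images
    cycled_ihc:  G_he2ihc(G_ihc2he(real_ihc)), for cycle consistency
    identity_he: G_ihc2he(real_he), for the identity term
    """
    adv = np.mean((d_fake - 1.0) ** 2)            # LSGAN generator loss
    cyc = np.mean(np.abs(cycled_ihc - real_ihc))  # L1 cycle consistency
    idt = np.mean(np.abs(identity_he - real_he))  # L1 identity loss
    return adv + lam_cyc * cyc + lam_id * idt

# A generator that fools the discriminator (scores == 1) and
# reconstructs both domains perfectly incurs zero loss
x = np.zeros((4, 8, 8, 3))
print(composite_loss(np.ones((4, 1)), x, x, x, x))  # 0.0
```

Any imperfect reconstruction or discriminator score below 1 raises the loss, which is what drives the adversarial training described in Step 3.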
  5. An H & E image enhancement system based on biological mechanism and frequency attention for implementing the H & E image enhancement method according to any one of claims 1 to 4, characterized in that it comprises the following modules: an image data processing module, used to acquire a virtual staining dataset, preprocess the images it contains, generate input image data, and partition the dataset accordingly; an H & E image virtual staining module, used to construct an H & E image virtual staining model based on biological tone perception and frequency-separation attention, extract features from the input image data, obtain a virtually stained H & E image, and complete the enhancement; and a loss training module, used to construct a composite loss function combining an adversarial loss, a cycle consistency loss, and an identity loss, perform parameter optimization training on the H & E image virtual staining model, and carry out test evaluation.

Description

H & E image enhancement method and system based on biological mechanism and frequency attention

Technical Field

The invention belongs to the field of medical image processing and computer vision, and particularly relates to a hematoxylin-eosin (H & E) staining image enhancement method and system based on biological tone perception and frequency-separation attention.

Background

Pathological section images play an important role in clinical diagnosis and disease research. Hematoxylin-eosin staining clearly shows tissue structure and cell morphology and is one of the most common staining modes in routine pathological examination. Immunohistochemical (IHC) staining shows the expression of specific molecules or proteins and is strongly complementary to H & E staining in pathological analysis. To improve the efficiency of pathological analysis and reduce the time and cost of the actual staining process, virtual staining techniques that realize conversion between different staining modalities by computational methods have gradually appeared in recent years and have become an important research direction in medical image processing. With the development of deep learning and generative adversarial networks, virtual staining methods for pathology images based on generative models have attracted wide attention; they can realize mapping between IHC images and H & E images without strictly paired samples. However, existing virtual staining methods focus on overall style transfer or pixel-distribution matching and treat the different frequency components of pathological images insufficiently, so key structural features such as nuclear boundaries and gland contours are easily damaged during generation.
Meanwhile, during patch-based training and reconstruction, color-distribution differences between image patches may be amplified, causing color drift or discontinuous tones between adjacent regions of the virtual staining result and degrading overall visual consistency and practical effect. In addition, some existing methods do not sufficiently incorporate the biological characteristics of pathological images into the virtual staining process and give insufficient consideration to the biological tone perception mechanism of staining, making it difficult to achieve both accurate tissue-structure preservation and faithful color expression under complex tissue structures and diverse staining conditions. Therefore, how to effectively perceive and regulate biological tone information during virtual staining and perform differentiated modeling of different frequency components, so as to improve the structural stability and color consistency of the virtual staining result, remains a technical problem to be solved in the art.

Disclosure of Invention

The invention aims to provide an H & E image enhancement method and system based on a biological mechanism and frequency attention, which introduce a perception mechanism for the biological tone information of pathological staining into the generative modeling process and perform differentiated modeling of the different frequency components of the image, so as to effectively preserve tissue-structure features and improve the consistency and stability of color expression while realizing cross-staining-modality mapping, thereby solving the problems of structure blurring, color drift, and insufficient overall visual consistency common in existing virtual staining methods, and providing a more stable and reliable technical means for pathological image analysis and auxiliary diagnosis.
To achieve the above object, in one aspect the invention provides an H & E image enhancement method based on biological mechanism and frequency attention, comprising the following steps: Step 1, acquire IHC and H & E datasets for virtual staining, preprocess the images they contain, generate input image data, and partition the datasets accordingly. Preferably, the H & E virtual staining dataset comprises a self-collected pathology image dataset and the public pathology image dataset BCI, and the IHC and H & E images it contains are preprocessed to generate image data meeting the model input requirements. Step 2, construct an H & E image virtual staining model based on biological tone perception and frequency-separation attention, and extract features from the input image data to obtain a virtually stained H & E image, completing the enhancement. Preferably, the virtual staining model is constructed through cooperative adversarial training of a generator and a discriminator, wherein the generator realizes the virtual staining mapping between the IHC image and the H & E image, and the discriminator discriminates the authenticity of th