CN-120876314-B - Low-light medical image enhancement method, system, equipment and medium based on brightness guidance and detail recovery
Abstract
The application belongs to the technical fields of image processing and computer-aided medical diagnosis, and discloses a low-light medical image enhancement method, system, device, and medium based on brightness guidance and detail restoration. The method processes an input low-light image with an initial brightness adjuster to obtain an initial brightness map and its corresponding image features; the initial brightness map and the corresponding image features are then fed into a dynamic-brightness-guided detail restorer, where a dynamic brightness refinement module optimizes the initial brightness map while a dual-stage frequency-domain-aware attention unit restores image structure, texture, and detail information, and an enhanced image is output; during training, the enhanced image and a paired reference image are constrained by a multi-domain consistency loss. The application achieves more natural and more effective enhancement of medical images, and provides high-quality image input for intelligent analysis and clinical interpretation of medical images.
Inventors
- YUE GUANGHUI
- LIU GUANQING
- LI WENTAO
- QIU JIANJUN
- ZHOU TIANWEI
Assignees
- Shenzhen University (深圳大学)
Dates
- Publication Date: 2026-05-08
- Application Date: 2025-07-16
Claims (9)
- 1. A low-light medical image enhancement method based on brightness guidance and detail restoration, the method comprising: processing an input low-light image through an initial brightness adjuster to obtain an initial brightness map and its corresponding image features; inputting the initial brightness map and the corresponding image features into a dynamic-brightness-guided detail restorer, optimizing the initial brightness map through a dynamic brightness refinement module, restoring image structure, texture, and detail information with a dual-stage frequency-domain-aware attention unit, and outputting an enhanced image; and combining the initial brightness adjuster and the dynamic-brightness-guided detail restorer into an image enhancement model, training the image enhancement model, and performing low-light medical image enhancement with the trained model, wherein during training the enhanced image and a paired reference image are constrained by a multi-domain consistency loss; wherein the dynamic brightness refinement module is integrated into each stage of the encoder and the decoder, the inputs of each stage differ, and optimizing the initial brightness map with the dynamic brightness refinement module comprises: taking the luminance map of layer i-1 and its corresponding image features F(i-1) as the input of the dynamic brightness refinement module; under the adaptive guidance of the layer-(i-1) luminance map, extracting updated image features F(i) through the dual-stage frequency-domain-aware attention unit; calculating the difference between F(i) and F(i-1) to generate a difference map; applying an average pooling operation to the difference map to extract the regional brightness change; in the encoder, using the regional brightness change directly to guide feature enhancement; in the decoder, fusing it with the skip-connected luminance map to obtain an enhanced luminance map, which is added to the layer-(i-1) luminance map to generate a refined luminance map.
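The per-stage refinement steps above can be sketched in a minimal NumPy form. Everything here is an illustrative assumption rather than the patented implementation: single-channel 2-D shapes, nearest-neighbour upsampling of the pooled change, fusion by simple addition, and all function and argument names are hypothetical.

```python
import numpy as np

def avg_pool2d(x, k):
    """Average-pool an (H, W) map with a k x k window (H, W divisible by k)."""
    h, w = x.shape
    return x.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def refine_luminance(feat_prev, feat_curr, lum_prev, skip_lum=None, k=2):
    """One dynamic-brightness-refinement step (hypothetical shapes and fusion).

    feat_prev, feat_curr: (H, W) feature maps from stages i-1 and i
    lum_prev:             (H, W) luminance map of stage i-1
    skip_lum:             skip-connected luminance map (decoder stages only)
    """
    diff = feat_curr - feat_prev                  # difference map
    region_change = avg_pool2d(diff, k)           # regional brightness change
    # upsample the pooled change back to full resolution (nearest neighbour)
    guide = np.kron(region_change, np.ones((k, k)))
    if skip_lum is None:                          # encoder: guide directly
        enhanced = guide
    else:                                         # decoder: fuse with skip luminance
        enhanced = guide + skip_lum
    return lum_prev + enhanced                    # refined luminance map
```

The encoder and decoder branches differ only in whether the skip-connected luminance map participates in the fusion, mirroring the two cases in the claim.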
- 2. The method of claim 1, wherein the dual-stage frequency-domain-aware attention unit comprises a spatial guidance enhancement module and a frequency-aware enhancement module, with a frequency-aware feed-forward network disposed between the two modules and another frequency-aware feed-forward network following the frequency-aware enhancement module; restoring image structure, texture, and detail information with the dual-stage frequency-domain-aware attention unit comprises: taking the image features and the luminance map as input to the spatial guidance enhancement module, and performing convolution to generate a first query tensor, a first key tensor, and a first value tensor; multiplying the first value tensor element-wise by the luminance map to generate a brightness-aware feature, and obtaining the first feature output by the spatial guidance enhancement module through convolutional projection and a residual connection; taking the first feature as input to the frequency-aware feed-forward network, encoding it by convolution to obtain a second feature, and splitting the second feature into two parallel branches, a first branch and a second branch; computing a third feature G from the first branch through a squeeze-excitation mechanism in the frequency domain, concatenating the third feature G with the first branch, and fusing them by a 1×1 convolution to obtain a fourth feature; preserving, in the second branch, complementary spatial details not explicitly encoded in the frequency-enhancement path, and applying gated modulation between the fourth feature and the second branch to obtain a fifth feature Z; taking the fifth feature Z as input to the frequency-aware enhancement module, and generating a second query tensor, a second key tensor, and a second value tensor by convolution; reshaping and partitioning the second query tensor and the second key tensor into blocks, converting each block to the frequency domain to compute frequency-domain attention, then restoring spatial alignment through the inverse FFT to obtain adjusted second query and second key tensors; modulating them with the luminance map; and obtaining the output features of the frequency-aware enhancement module based on the modulated second query tensor, second key tensor, and second value tensor.
- 3. The method of claim 2, wherein in the spatial guidance enhancement module, the first query tensor, the first key tensor, and the first value tensor are generated according to formulas (1) and (2), whose operations comprise a 3×3 depth-wise convolution, a 1×1 convolution, and a Split operation; the brightness-aware feature is obtained by element-wise multiplication of the first value tensor and the luminance map, and the first feature output by the spatial guidance enhancement module is obtained through convolutional projection and a residual connection according to formulas (3) and (4), where As denotes the attention weight, softmax denotes the normalized exponential function, T denotes the transpose, and a scaling factor is applied to the attention logits.
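A hedged sketch of the luminance-modulated attention described in claim 3, with identity mappings standing in for the convolutional Q/K/V projections of formulas (1)-(2); the projections, flattened shapes, and scaling factor `alpha` are assumptions, not the patent's definitions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_guidance_attention(feat, lum, alpha=1.0):
    """Luminance-modulated attention with a residual connection.

    feat: (N, C) flattened image features; lum: (N, 1) flattened luminance map.
    Identity mappings replace the 1x1 + 3x3 depth-wise convolutions here.
    """
    q, k, v = feat, feat, feat                   # conv-based Q/K/V omitted
    v_lum = v * lum                              # brightness-aware value (element-wise)
    attn = softmax(q @ k.T / alpha, axis=-1)     # attention weights, scaled logits
    out = attn @ v_lum
    return out + feat                            # residual connection
```

The element-wise product `v * lum` is the step the claim singles out: brightness directly reweights the value tensor before attention is applied.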
- 4. The method of claim 2, wherein the first branch computes the third feature G through the squeeze-excitation mechanism in the frequency domain according to formula (5), where SE denotes the squeeze-excitation module and the Fourier transform maps the branch to the frequency domain.
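The frequency-domain squeeze-excitation of formula (5) might be sketched as follows. The FC-ReLU-FC-sigmoid excitation, the use of amplitude spectra, and the weight shapes `w1`, `w2` are all assumptions here, since the source names only the SE module and the Fourier transform.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def freq_squeeze_excitation(x, w1, w2):
    """Squeeze-excitation applied to frequency-domain amplitudes.

    x:      (C, H, W) feature branch
    w1, w2: (C, C) hypothetical excitation weights (reduction ratio omitted)
    """
    amp = np.abs(np.fft.fft2(x, axes=(-2, -1)))   # frequency-domain representation
    s = amp.mean(axis=(1, 2))                     # squeeze: global average per channel
    e = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))     # excitation: FC-ReLU-FC-sigmoid
    return amp * e[:, None, None]                 # recalibrated third feature G
```

With zero excitation weights the gate defaults to 0.5, so each channel's amplitude spectrum is simply halved, which makes the behaviour easy to check.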
- 5. The method of claim 2, wherein the modulation of the second key tensor is performed according to formula (6), in which E denotes the modulated key tensor, the inverse FFT, the Fourier transform, and element-wise multiplication are applied, and the frequency-domain key feature blocks are those obtained after the second key tensor is reshaped and partitioned in the frequency-aware enhancement module.
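One plausible reading of formula (6) — transform the key block with the FFT, modulate it element-wise, and return via the inverse FFT — can be sketched as below. Whether the luminance enters in the spatial or the frequency domain, and the block shapes, are assumptions; the source gives only the operator names.

```python
import numpy as np

def modulate_key(key_block, lum_block):
    """Luminance modulation of one key block via the frequency domain.

    Assumed form of formula (6): E = IFFT( lum ⊙ FFT(key) ).
    key_block, lum_block: (H, W) arrays of matching shape.
    """
    k_freq = np.fft.fft2(key_block)           # frequency-domain key feature block
    modulated = lum_block * k_freq            # element-wise modulation
    return np.real(np.fft.ifft2(modulated))   # restore spatial alignment
```

A unit luminance map leaves the key block unchanged (FFT followed by IFFT is the identity), which serves as a sanity check on the round trip.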
- 6. The method of any one of claims 1 to 5, wherein the multi-domain consistency loss comprises a spatial consistency loss and a frequency-domain consistency loss; the spatial consistency loss is given by formula (7), where L_SF is the spatial consistency loss, t indexes the pictures over the total number of pictures, and the enhanced image and the reference image of the t-th picture are compared; the frequency-domain consistency loss comprises an amplitude loss, a phase loss, and a spectral structural similarity loss, given by formulas (8)-(10) respectively, where L_Amp, L_Pha, and L_Spe-SSIM denote the amplitude loss, the phase loss, and the spectral structural similarity loss, l, h, and s each index a channel of the image over the number of channels, the amplitude components of the enhanced image and the reference image in the l-th channel, the phase components in the h-th channel, and the amplitude components in the s-th channel are compared, and SSIM denotes the structural similarity function.
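The multi-domain consistency loss of formulas (7)-(9) admits a simple single-image sketch. The L1 norms are assumed (the source does not give the norms), per-channel indexing is collapsed to one channel, and the spectral-SSIM term of formula (10) is omitted for brevity.

```python
import numpy as np

def multi_domain_loss(enhanced, reference):
    """Spatial + frequency-domain consistency terms for one (H, W) image pair.

    Assumed form: L1 differences in the spatial domain and between the
    amplitude and phase spectra of the two images.
    """
    l_sf = np.abs(enhanced - reference).mean()             # spatial consistency
    f_e = np.fft.fft2(enhanced)
    f_r = np.fft.fft2(reference)
    l_amp = np.abs(np.abs(f_e) - np.abs(f_r)).mean()       # amplitude loss
    l_pha = np.abs(np.angle(f_e) - np.angle(f_r)).mean()   # phase loss
    return l_sf + l_amp + l_pha
```

Identical images yield zero loss, and any spatial discrepancy contributes through both the spatial and the frequency terms, which is the cross-domain constraint the claim describes.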
- 7. A low-light medical image enhancement system based on brightness guidance and detail restoration, the system comprising: a primary processing module configured to process an input low-light image through an initial brightness adjuster to obtain an initial brightness map and its corresponding image features; an image enhancement module configured to input the initial brightness map and the corresponding image features into a dynamic-brightness-guided detail restorer, optimize the initial brightness map through a dynamic brightness refinement module, restore image structure, texture, and detail information with a dual-stage frequency-domain-aware attention unit, and output an enhanced image; and a model training module configured to combine the initial brightness adjuster and the dynamic-brightness-guided detail restorer into an image enhancement model, train the image enhancement model, and perform low-light medical image enhancement with the trained model, wherein during training the enhanced image and a paired reference image are constrained by a multi-domain consistency loss; wherein the dynamic brightness refinement module is integrated into each stage of the encoder and the decoder, the inputs of each stage differ, and the image enhancement module is further configured to: take the luminance map of layer i-1 and its corresponding image features F(i-1) as the input of the dynamic brightness refinement module; under the adaptive guidance of the layer-(i-1) luminance map, extract updated image features F(i) through the dual-stage frequency-domain-aware attention unit; calculate the difference between F(i) and F(i-1) to generate a difference map; apply an average pooling operation to the difference map to extract the regional brightness change; in the encoder, use the regional brightness change directly to guide feature enhancement; in the decoder, fuse it with the skip-connected luminance map to obtain an enhanced luminance map, which is added to the layer-(i-1) luminance map to generate a refined luminance map.
- 8. An electronic device, comprising: a memory for storing a computer program; and a processor for executing the computer program to implement the method of any one of claims 1 to 6.
- 9. A non-transitory computer readable storage medium storing instructions which, when executed by a processor, perform the method of any one of claims 1 to 6.
Description
Low-light medical image enhancement method, system, equipment and medium based on brightness guidance and detail recovery

Technical Field

The application relates to the technical fields of image processing and computer-aided medical diagnosis, and in particular to a low-light medical image enhancement method, system, device, and medium based on brightness guidance and detail restoration.

Background

Medical images play an important role in disease identification, diagnosis, and clinical decisions. However, due to limitations of device performance, operator error, and patient factors, images are often acquired in low-illumination environments, resulting in insufficient brightness, uneven illumination, and low contrast, which severely weakens the expression of key diagnostic information and affects the accuracy and reliability of clinical judgment. There is therefore a need for an efficient and practical low-light medical image enhancement method that can improve brightness and structural contrast, maintain the authenticity of anatomical structures, restore color fidelity, and provide high-quality image input for intelligent analysis and clinical interpretation.

Currently, some low-light image enhancement methods use static illumination priors to identify underexposed areas, such as inverse luminance maps or hand-designed strategies like RGB channel mean estimation. Although such methods are computationally simple and somewhat intuitive, in practice they ignore image structure information and are susceptible to noise interference, leading to inaccurate illumination estimation and limited enhancement.
To improve the quality of low-light images, much recent research has introduced attention mechanisms that adaptively focus on information-rich areas, thereby enhancing local contrast and highlighting critical structures. However, most existing methods model only in the spatial domain and fail to fully mine the complementary information in the frequency domain. Neglecting frequency characteristics impairs the model's local and global modeling ability and reduces enhancement quality. Some studies have begun to introduce frequency-domain modeling to compensate for the shortcomings of spatial modeling, but problems remain, such as the separate processing of spatial and frequency features and the lack of a unified modeling framework with cross-domain consistency, which limit further application in medical scenes.

Disclosure of Invention

To solve these problems, the application provides a low-light medical image enhancement method, system, device, and medium based on brightness guidance and detail restoration, addressing the insufficient brightness and blurred details of medical images in low-light environments.
According to a first aspect of the present application, there is provided a low-light medical image enhancement method based on brightness guidance and detail restoration, the method comprising: processing an input low-light image through an initial brightness adjuster to obtain an initial brightness map and its corresponding image features; inputting the initial brightness map and the corresponding image features into a dynamic-brightness-guided detail restorer, optimizing the initial brightness map through a dynamic brightness refinement module, restoring image structure, texture, and detail information with a dual-stage frequency-domain-aware attention unit, and outputting an enhanced image; and combining the initial brightness adjuster and the dynamic-brightness-guided detail restorer into an image enhancement model, training the image enhancement model, and performing low-light medical image enhancement with the trained model, wherein during training the enhanced image and a paired reference image are constrained by a multi-domain consistency loss.
Further, the dynamic brightness refinement module is integrated into each stage of the encoder and the decoder, and the inputs of each stage differ. Optimizing the initial brightness map with the dynamic brightness refinement module comprises: taking the luminance map of layer i-1 and the corresponding image features F(i-1) as the input of the dynamic brightness refinement module; under the adaptive guidance of the layer-(i-1) luminance map, extracting updated image features F(i) through the dual-stage frequency-domain-aware attention unit; calculating the difference between F(i) and F(i-1) to generate a difference map; applying an average pooling operation to the difference map to extract the regional brightness change; in the encoder, using the regional brightness change directly to guide feature enhancement; in the decoder, fusing it with the skip-connected luminance map to obtain an enhanced luminance map, which is added to the layer-(i-1) luminance map to generate a refined luminance map.