CN-122023214-A - Low-illumination image enhancement method and system combining brightness adversarial generation and color shift correction

CN122023214A

Abstract

The invention provides a low-illumination image enhancement method and system combining brightness adversarial generation and color shift correction, comprising the following steps: 1) collecting low-illumination images and corresponding normal-light images to construct a dataset, and converting each input image into YCbCr space to obtain a luminance component Y_L and a chrominance component C_L; 2) constructing a training set from the luminance components Y_L of the low-illumination images; 3) inputting the chrominance component C_L into a color shift estimation module to predict the color shift, and performing adaptive fusion through a color fusion module to correct the color cast under low illumination; 4) reconstructing the enhanced luminance component with the corrected chrominance component, then converting back to RGB space to obtain the final enhanced image; 5) applying loss-function constraints and training optimization. The method achieves significant gains in brightness improvement, preservation of structural detail, and color consistency.
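The decoupling in step 1 rests on an RGB-to-YCbCr conversion that the abstract takes as given. Below is a minimal sketch, assuming full-range ITU-R BT.601 coefficients and float images in [0, 1]; the patent does not specify the conversion matrix, so these constants and function names are assumptions for illustration.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an RGB image (floats in [0, 1], shape HxWx3) into a luminance
    plane Y and a chrominance array (Cb, Cr), using full-range ITU-R
    BT.601 coefficients (an assumption; the patent does not fix these)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return y, np.stack([cb, cr], axis=-1)

def ycbcr_to_rgb(y, c):
    """Inverse transform: recombine an (enhanced) Y plane and (corrected)
    Cb/Cr planes into an RGB image, clipped back to [0, 1]."""
    cb, cr = c[..., 0] - 0.5, c[..., 1] - 0.5
    r = y + 1.402 * cr
    g = y - 0.344136 * cb - 0.714136 * cr
    b = y + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

A gray pixel maps to Cb = Cr = 0.5, so the chrominance planes carry only the color cast that the later modules correct.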

Inventors

  • WU JING
  • DONG ZEXI
  • LIAO YIPENG
  • HUANG XUEQIN
  • LIN XIANGPENG
  • LIN YICHEN

Assignees

  • 泉州经贸职业技术学院 (Quanzhou Vocational College of Economics and Business)
  • 福州大学 (Fuzhou University)

Dates

Publication Date
2026-05-12
Application Date
2026-01-31

Claims (8)

  1. A low-illumination image enhancement method combining brightness adversarial generation and color shift correction, comprising the steps of: Step 1, collecting low-illumination images and corresponding normal-light images to construct a dataset, and converting each input image into YCbCr space to obtain a luminance component Y_L and a chrominance component C_L; Step 2, constructing a training set from the luminance components Y_L of the low-illumination images, training a generative adversarial network (GAN) based on a self-regularization-guided U-Net, introducing a global discriminator, a local discriminator and a self-feature-preserving loss, and inputting the luminance component of the image to be enhanced into the trained GAN model to generate the enhanced luminance component; Step 3, inputting the chrominance component C_L into a color shift estimation module to predict the color shift, and performing adaptive fusion through a color fusion module to correct the color cast under low illumination; Step 4, reconstructing the enhanced luminance component with the corrected chrominance component and converting back to RGB space to obtain the final enhanced image; and Step 5, applying loss-function constraints and training optimization: the Y-component brightness enhancement network is trained with adversarial and feature-preserving losses, the CbCr-component color correction network is trained with color-consistency and fusion-smoothness losses, and after the two networks are trained independently they are used jointly to realize joint optimization of brightness enhancement and color correction.
  2. The low-illumination image enhancement method combining brightness adversarial generation and color shift correction according to claim 1, wherein step 2 comprises: constructing a Y-component enhancement model Y-ENLIGHTENGAN based on a generative adversarial network (GAN), taking the low-illumination luminance component Y_L as input and the corresponding normal-exposure luminance component as the supervisory signal for model training.
  3. The low-illumination image enhancement method combining brightness adversarial generation and color shift correction according to claim 2, wherein the Y-ENLIGHTENGAN constructed in step 2 comprises a self-regularized generator, a global discriminator, a local discriminator and a self-feature-preserving loss; the self-regularized generator takes a U-Net as its backbone and adds self-regularized attention for regularization; the global discriminator, designed on the relativistic discriminator structure, guides the generator toward realistic output so that the image has a natural-light appearance; the local discriminator randomly crops 5 small patches from the image during training and learns to judge their authenticity; and model training adopts the self-feature-preserving loss to model and constrain the feature-space distance between the two images.
  4. The low-illumination image enhancement method combining brightness adversarial generation and color shift correction according to claim 1, wherein step 3 specifically comprises: denoting the chrominance component of the low-light image as C_L = (Cb_L, Cr_L) (1) and the normal-exposure chrominance component as C_N = (Cb_N, Cr_N) (2); and estimating the color shift ΔC of the low-light image to correct it toward a color distribution close to normal exposure: Ĉ = C_L + ΔC (3), where ΔC = f_θ(C_L) is predicted by a color shift estimation network that analyzes the chrominance distribution of the low-light image globally and locally to predict the chroma adjustment of each pixel.
  5. The method of claim 4, wherein in step 3 the color shift estimation network comprises: a convolutional feature extraction layer that extracts multi-scale local features from the chrominance component C_L and captures local color anomalies; a global average pooling layer that captures the global color shift trend of the whole image; and an offset prediction layer that maps the features, through convolution and a fully connected layer, to the color shift ΔC, realizing a color correction prediction for each pixel, expressed mathematically as: ΔC = f_θ(C_L) (4), where f_θ denotes the color shift estimation function and θ its network parameters.
  6. The method of claim 1, wherein in step 3 the color fusion module adaptively fuses the original chrominance component with the predicted offset: Ĉ = C_L + W ⊙ ΔC (5), where Ĉ denotes the corrected chrominance component, ⊙ denotes element-wise multiplication, and W is the adaptive fusion weight map, produced by a small convolutional network.
  7. The low-illumination image enhancement method combining brightness adversarial generation and color shift correction according to claim 6, wherein a plurality of loss functions are introduced in step 3, comprising: a color consistency loss L_color = ‖Ĉ − C_N‖₁ (6); an offset regularization loss L_reg = ‖ΔC‖₂² (7); and a fusion smoothness constraint L_smooth = Σ_(i,j) ( |W(i+1, j) − W(i, j)| + |W(i, j+1) − W(i, j)| ) (8), where W(i, j) denotes the color fusion weight at image pixel position (i, j), used to adjust the fusion ratio between the original CbCr component and the color-shift-corrected component so that the color correction result transitions smoothly over the spatial domain, and i and j denote the pixel indexes of the image in the vertical and horizontal directions respectively; the total loss function for CbCr-component enhancement is defined as: L_total = L_color + λ₁ L_reg + λ₂ L_smooth (9), where L_color denotes the color consistency loss, L_reg the offset regularization loss, L_smooth the fusion smoothness constraint, and λ₁ and λ₂ the balancing weight coefficients; during network training, the low-illumination chrominance component C_L is used as input and the normal-exposure chrominance component C_N as supervision, and the network parameters θ are optimized by minimizing the total loss function: θ* = argmin_θ L_total (10), where θ* denotes the optimal parameter set of the color shift correction network learned by minimizing L_total.
  8. A low-illumination image enhancement system combining brightness adversarial generation and color shift correction, comprising a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, wherein the processor and the memory communicate via the bus when the system runs, and the machine-readable instructions, when executed by the processor, perform the low-illumination image enhancement method combining brightness adversarial generation and color shift correction according to any one of claims 1 to 7.
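The CbCr branch of claims 4 to 7 (shift estimation, adaptive fusion, and the three named losses) can be sketched numerically. The NumPy illustration below assumes chrominance in [0, 1] with 0.5 as neutral; a gray-world-style global estimate stands in for the learned shift network, and the λ values, function names, and the exact fusion form are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

NEUTRAL = 0.5  # assumed neutral Cb/Cr value for chroma encoded in [0, 1]

def estimate_shift(c_low):
    """Stand-in for the color shift estimation network of claim 5:
    global average pooling summarises the image-wide chroma cast, and the
    per-channel offset pulling the mean back to neutral is broadcast to
    every pixel. The patent's network refines this with learned
    convolutional features and a fully connected prediction layer."""
    gap = c_low.mean(axis=(0, 1))                 # global average pooling
    return np.broadcast_to(NEUTRAL - gap, c_low.shape)

def fuse(c_low, delta, w):
    """Adaptive fusion (claim 6): a per-pixel weight map w (produced by a
    small conv net in the patent; here an input) gates, element-wise, how
    much of the predicted offset is applied at each position."""
    return np.clip(c_low + w[..., None] * delta, 0.0, 1.0)

def cbcr_loss(c_hat, c_ref, delta, w, lam1=0.01, lam2=0.001):
    """Loss terms named in claim 7: color consistency against the
    normal-exposure reference, L2 regularization of the predicted offset,
    and a finite-difference smoothness penalty on the fusion weight map.
    lam1 and lam2 are illustrative balancing coefficients."""
    l_color = np.abs(c_hat - c_ref).mean()
    l_reg = np.square(delta).mean()
    l_smooth = (np.abs(np.diff(w, axis=0)).sum()   # vertical neighbors
                + np.abs(np.diff(w, axis=1)).sum())  # horizontal neighbors
    return l_color + lam1 * l_reg + lam2 * l_smooth
```

With w = 0 the original chroma passes through unchanged; with w = 1 the full predicted shift is applied, so the weight map interpolates between conservative and aggressive correction per pixel.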

Description

Low-illumination image enhancement method and system combining brightness adversarial generation and color shift correction

Technical Field

The invention relates to the technical field of computer vision, in particular to a low-illumination image enhancement method and system combining brightness adversarial generation and color shift correction.

Background

During image acquisition, low-illumination images generally suffer from low brightness, poor contrast, reduced color saturation and color distortion, which seriously affect the accuracy of downstream visual tasks such as image recognition and target detection. Existing low-illumination enhancement methods (histogram equalization, Retinex-theory-based methods, image-fusion-based methods and deep-learning-based methods) each have advantages, but generally suffer from poor generalization, susceptibility to color distortion, blurred texture detail, algorithmic complexity and low efficiency. In particular, when processing RGB images, whose color channels are coupled, color shift and saturation anomalies are easily introduced. Generative adversarial networks (GANs) show strong capability in luminance recovery but tend to ignore image structure information, leading to edge blurring and detail loss, and they handle the color shift problem insufficiently.

Disclosure of Invention

Therefore, the present invention is directed to a low-illumination image enhancement method and system combining brightness adversarial generation and color shift correction, which decouples brightness enhancement from color shift correction, improves the brightness, contrast and detail definition of an image while effectively mitigating color distortion, and achieves natural brightness improvement, preservation of structural detail and faithful color restoration.
In order to achieve the above purpose, the invention adopts the following technical scheme. The low-illumination image enhancement method combining brightness adversarial generation and color shift correction comprises the following steps: Step 1, collecting low-illumination images and corresponding normal-light images to construct a dataset, and converting each input image into YCbCr space to obtain a luminance component Y_L and a chrominance component C_L; Step 2, constructing a training set from the luminance components Y_L of the low-illumination images, training a generative adversarial network (GAN) based on a self-regularization-guided U-Net, introducing a global discriminator, a local discriminator and a self-feature-preserving loss, and inputting the luminance component of the image to be enhanced into the trained GAN model to generate the enhanced luminance component; Step 3, inputting the chrominance component C_L into a color shift estimation module to predict the color shift, and performing adaptive fusion through a color fusion module to correct the color cast under low illumination; Step 4, reconstructing the enhanced luminance component with the corrected chrominance component and converting back to RGB space to obtain the final enhanced image; and Step 5, applying loss-function constraints and training optimization: the Y-component brightness enhancement network is trained with adversarial and feature-preserving losses, the CbCr-component color correction network is trained with color-consistency and fusion-smoothness losses, and after the two networks are trained independently they are used jointly to realize joint optimization of brightness enhancement and color correction. In a preferred embodiment, step 2 comprises constructing a Y-component enhancement model Y-ENLIGHTENGAN based on a generative adversarial network (GAN), taking the low-illumination luminance component Y_L as input and the corresponding normal-exposure luminance component as the supervisory signal for model training.
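The local discriminator introduced in step 2 judges realism on small randomly cut patches rather than the whole frame (five patches per image, per the preferred embodiment). A minimal sketch of that cropping step, where the 32x32 patch size and the function name are illustrative assumptions; the text only fixes the count at 5:

```python
import numpy as np

def crop_random_patches(img, patch=32, n=5, seed=0):
    """Cut n random square patches from an image array (H, W, C), as the
    local discriminator does during training. The patch size is an
    assumed value; the description specifies only n = 5."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    tops = rng.integers(0, h - patch + 1, size=n)
    lefts = rng.integers(0, w - patch + 1, size=n)
    return [img[t:t + patch, l:l + patch] for t, l in zip(tops, lefts)]
```

Each patch is then scored real or fake by the local discriminator, which pushes the generator toward locally consistent output rather than merely globally plausible brightness.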
In a preferred embodiment, the Y-ENLIGHTENGAN constructed in step 2 comprises a self-regularized generator, a global discriminator, a local discriminator and a self-feature-preserving loss; the self-regularized generator takes a U-Net as its backbone and adds self-regularized attention for regularization; the global discriminator, designed on the relativistic discriminator structure, guides the generator toward realistic output so that the image has a natural-light appearance; the local discriminator randomly crops 5 small patches from the image during training and learns to judge their authenticity; and model training adopts the self-feature-preserving loss to model and constrain the feature-space distance between the two images. In a preferred embodiment, step 3 specifically comprises: denoting the chrominance component of the low-light image as C_L = (Cb_L, Cr_L) (1) and the normal-exposure chrominance component as C_N = (Cb_N, Cr_N) (2); by estimating the color shift ΔC of the low-light image, correcting it to a color distribution close