
CN-121810506-B - Haze image synthesis method based on depth guidance and domain alignment

CN121810506B

Abstract

The invention discloses a haze image synthesis method based on depth guidance and domain alignment, comprising the following steps: first, obtaining and processing depth information to obtain a depth normalization value; second, aligning a real haze image with a clear image to be synthesized through domain alignment to obtain an updated transmittance; third, obtaining an atmospheric light value A' from the real haze image; fourth, obtaining the transmittance under different atmospheric scattering coefficients based on the depth normalization value and a multi-level atmospheric scattering coefficient strategy, and weighting it with the updated transmittance to obtain the optimized transmittance under each scattering coefficient; and fifth, obtaining haze synthesized images under the different atmospheric scattering coefficients based on the atmospheric light value A' and the optimized transmittances. By constraining a physical model with depth-information guidance and domain alignment, the invention generates haze synthesized images with a realistic depth-of-field effect and a natural haze concentration distribution, improving the diversity and authenticity of the synthesized images.

Inventors

  • Su Yanzhao
  • Zhang Lanqing
  • Cui Zhigao
  • Wang Nian
  • Lan Yunwei
  • Zhu Liangyu
  • Zhou Zhengyang
  • Xiao Yuanhao

Assignees

  • Rocket Force University of Engineering of the Chinese People's Liberation Army (中国人民解放军火箭军工程大学)

Dates

Publication Date
2026-05-12
Application Date
2026-03-06

Claims (5)

  1. A haze image synthesis method based on depth guidance and domain alignment, characterized by comprising the following steps:
     Step one, obtaining and processing depth information to obtain a depth normalization value: performing depth estimation on a clear image to be synthesized with a DPT-Hybrid model to obtain a depth map, and normalizing the depth map to obtain a depth normalization value;
     Step two, aligning the real haze image and the clear image to be synthesized through domain alignment to obtain an updated transmittance, the updated transmittance of the j-th pixel point being denoted t_j', where j is a positive integer taking values 1 to J, and J is the total number of pixel points;
     Step three, obtaining an atmospheric light value A' from the real haze image;
     Step four, obtaining the transmittance under different atmospheric scattering coefficients based on the depth normalization value and a multi-level atmospheric scattering coefficient strategy, and weighting it with the updated transmittance to obtain the optimized transmittance under each atmospheric scattering coefficient;
     Step five, obtaining a haze synthesized image under each atmospheric scattering coefficient based on the atmospheric light value A' and the corresponding optimized transmittance.
     The specific process of step two is as follows:
     Step 201, converting the real haze image to gray scale to obtain a real haze gray-scale image;
     Step 202, fitting a first Gaussian mixture model p1(x) = π1·N(x|μ1, σ1²) + π2·N(x|μ2, σ2²) to the real haze gray-scale image with the expectation-maximization algorithm, where x denotes the data points formed by the pixel values of the real haze gray-scale image, N(x|μ1, σ1²) is the 1st Gaussian distribution with mixing weight π1, mean μ1 and variance σ1², N(x|μ2, σ2²) is the 2nd Gaussian distribution with mixing weight π2, mean μ2 and variance σ2², and π1 and π2 take values in 0-1 with π1 + π2 = 1;
     Step 203, taking the first Gaussian mixture model p1(x) as the probability density function, obtaining the cumulative distribution probability P1 of the real haze gray-scale image over the value interval 0-255;
     Step 204, processing the clear image to be synthesized according to steps 201 to 203 to obtain a second Gaussian mixture model p2(x) and the cumulative distribution probability P2 of the clear image to be synthesized;
     Step 205, processing the real haze image with the dark channel prior method to obtain a real transmittance map, the initial transmittance of the j-th pixel point on the real transmittance map being denoted t_j;
     Step 206, obtaining the updated transmittance t_j' of the j-th pixel point from the initial transmittance t_j and the cumulative distribution probabilities P1 and P2.
     The specific process of step four is as follows:
     Step 401, setting the atmospheric scattering coefficient β in the range 0.8-1.5 and increasing it in steps of 0.1, the n-th atmospheric scattering coefficient being β_n, where n is a positive integer taking values 1 to 8, the 1st atmospheric scattering coefficient β_1 = 0.8, and for n from 1 to 7, β_{n+1} = β_n + 0.1, where β_{n+1} is the (n+1)-th atmospheric scattering coefficient;
     Step 402, obtaining the transmittance t_j^n of the j-th pixel point under the n-th atmospheric scattering coefficient according to t_j^n = e^(−β_n·d_j'), where e is the natural constant and d_j' is the j-th depth normalization value;
     Step 403, obtaining the optimized transmittance of the j-th pixel point under the n-th atmospheric scattering coefficient by weighting t_j^n and the updated transmittance t_j' with a random weight coefficient whose value range is 0.2-0.8.
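As an illustrative sketch only (not part of the claims), steps 401-403 can be expressed in Python with NumPy. The per-image random weight and the linear blend w·t_n + (1 − w)·t_j' are assumptions here: the claim specifies only a weighted combination with a random coefficient in the range 0.2-0.8, and its exact formula is given as an image in the original publication.

```python
import numpy as np

def multilevel_transmittance(depth_norm, t_updated, seed=0):
    """Sketch of step four: transmittance under eight atmospheric scattering
    coefficients, blended with the updated transmittance from step two.

    depth_norm : (H, W) array of depth normalization values d_j' in [0, 1]
    t_updated  : (H, W) array of updated transmittance values t_j'
    Returns a list of (beta_n, optimized transmittance map) pairs.
    """
    rng = np.random.default_rng(seed)
    # Step 401: eight scattering coefficients, 0.8 to 1.5 in steps of 0.1.
    betas = np.arange(0.8, 1.51, 0.1)
    results = []
    for beta in betas:
        # Step 402: transmittance from the scattering model, t = e^(-beta*d').
        t_n = np.exp(-beta * depth_norm)
        # Step 403: random weight coefficient in [0.2, 0.8]
        # (assumed here to be drawn once per image).
        w = rng.uniform(0.2, 0.8)
        # Assumed linear blend with the updated transmittance map.
        t_opt = w * t_n + (1.0 - w) * t_updated
        results.append((beta, t_opt))
    return results
```

The blend keeps every optimized transmittance inside the range spanned by the physical term t_n and the domain-aligned term t_j', so values stay in (0, 1].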
  2. The haze image synthesis method based on depth guidance and domain alignment according to claim 1, wherein the specific process of step one is as follows:
     Step 101, extracting the R component, the G component and the B component of the clear image to be synthesized to obtain an R component map, a G component map and a B component map;
     Step 102, applying standard normalization to each pixel value of the R, G and B component maps to obtain a normalized R component map, a normalized G component map and a normalized B component map;
     Step 103, processing the normalized image with the DPT-Hybrid model to obtain a depth map;
     Step 104, recording the pixel value of the j-th pixel point in the depth map as the j-th depth value dj;
     Step 105, obtaining the j-th depth normalization value dj' according to dj' = (dj − dmin)/(dmax − dmin), where dmax is the maximum depth value on the depth map and dmin is the minimum depth value on the depth map.
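The min-max normalization of steps 104-105 is a one-liner; the sketch below assumes the depth map has already been produced by the DPT-Hybrid model, whose invocation is omitted.

```python
import numpy as np

def depth_normalize(depth_map):
    """Steps 104-105: map each depth value d_j into [0, 1] via
    d_j' = (d_j - d_min) / (d_max - d_min)."""
    d_min = depth_map.min()
    d_max = depth_map.max()
    return (depth_map - d_min) / (d_max - d_min)
```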
  3. The haze image synthesis method based on depth guidance and domain alignment according to claim 1, wherein the specific process of step three is as follows:
     Step 301, recursively segmenting the gray-scale image corresponding to the real haze image with a quadtree segmentation method to obtain sub-regions;
     Step 302, recording the mean of all pixel values in each sub-region as its average brightness, and the variance of all pixel values in each sub-region as its brightness variance;
     Step 303, during the recursive segmentation of the gray-scale image corresponding to the real haze image, stopping the segmentation once a sub-region with the minimum brightness variance and the highest average brightness exists;
     Step 304, taking the sub-region with the minimum brightness variance and the highest average brightness as the candidate atmospheric light region;
     Step 305, sorting the pixel values of the pixel points in the candidate atmospheric light region from large to small, and recording the averages of the R, G and B components on the real haze image corresponding to the first ⌈0.1%×J⌉ pixel points as the atmospheric light value A', where ⌈·⌉ denotes rounding up and J is the total number of pixel points.
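One possible reading of the quadtree search in claim 3 is sketched below. The quadrant score `mean − variance` and the fixed maximum recursion depth are assumptions made for this sketch; the claim asks only for the sub-region combining highest average brightness with lowest brightness variance.

```python
import numpy as np

def estimate_atmospheric_light(gray, haze_rgb, depth=0, max_depth=4):
    """Sketch of steps 301-305: recursively split the gray image into four
    quadrants, descend into the quadrant scoring best on (mean brightness
    minus brightness variance), and at the leaf average the RGB values of
    the brightest 0.1% of pixels as the atmospheric light value A'."""
    h, w = gray.shape
    if depth == max_depth or min(h, w) < 2:
        flat = gray.reshape(-1)
        k = max(1, int(0.001 * flat.size))      # ~0.1% of the pixels
        idx = np.argsort(flat)[-k:]             # indices of brightest pixels
        region = haze_rgb.reshape(-1, 3)
        return region[idx].mean(axis=0)         # A' as an RGB triple
    hh, hw = h // 2, w // 2
    best = None
    for sl in [(slice(0, hh), slice(0, hw)), (slice(0, hh), slice(hw, w)),
               (slice(hh, h), slice(0, hw)), (slice(hh, h), slice(hw, w))]:
        sub = gray[sl]
        # Assumed score favouring bright, uniform quadrants.
        score = sub.mean() - sub.var()
        if best is None or score > best[0]:
            best = (score, sl)
    return estimate_atmospheric_light(gray[best[1]], haze_rgb[best[1]],
                                      depth + 1, max_depth)
```

On an image with one bright, uniform quadrant, the recursion converges to that quadrant and A' approaches its brightness.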
  4. The haze image synthesis method based on depth guidance and domain alignment according to claim 1, wherein the specific process of step five is as follows:
     Step 501, performing haze synthesis according to the atmospheric scattering physical model I = J·t + A'·(1 − t) to obtain the updated R component of the j-th pixel point under the n-th atmospheric scattering coefficient from the optimized transmittance of that pixel point and J_j^R, where J_j^R denotes the R component of the j-th pixel point in the clear image to be synthesized;
     Step 502, processing the G component and the B component of the j-th pixel point in the clear image to be synthesized according to the method of step 501, and merging the updated R, G and B components of the j-th pixel point under the n-th atmospheric scattering coefficient to obtain the haze synthesized image under the n-th atmospheric scattering coefficient;
     Step 503, repeating steps 501 to 502 while adjusting the atmospheric scattering coefficient to obtain haze synthesized images under the different atmospheric scattering coefficients.
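Steps 501-502 apply the standard atmospheric scattering model I = J·t + A·(1 − t) per color channel; a minimal vectorized sketch, assuming float images in [0, 1]:

```python
import numpy as np

def synthesize_haze(clear_rgb, t_opt, A):
    """Steps 501-502: haze synthesis under one scattering coefficient.

    clear_rgb : (H, W, 3) clear image J, float in [0, 1]
    t_opt     : (H, W) optimized transmittance for this scattering coefficient
    A         : atmospheric light value A' (scalar or RGB triple in [0, 1])
    """
    t = t_opt[..., None]                      # broadcast over the 3 channels
    # I = J * t + A' * (1 - t), applied to R, G and B at once.
    hazy = clear_rgb * t + np.asarray(A) * (1.0 - t)
    return np.clip(hazy, 0.0, 1.0)
```

At t = 1 the output equals the clear image; at t = 0 it collapses to the atmospheric light, matching the two limiting cases of the model.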
  5. The haze image synthesis method based on depth guidance and domain alignment according to claim 1, wherein after step five, color correction and detail enhancement are performed on the haze synthesized image, specifically comprising the following steps:
     Step A1, converting the haze synthesized image from RGB space to LAB color space to obtain a brightness component, an A component and a B component, recorded as a first brightness component, a first A component and a first B component;
     Step A2, converting the real haze image from RGB space to LAB color space to obtain a brightness component, an A component and a B component, recorded as a second brightness component, a second A component and a second B component;
     Step A3, performing histogram matching of the histogram of the first A component against the histogram of the second A component to obtain a remapped first A component, and performing histogram matching of the histogram of the first B component against the histogram of the second B component to obtain a remapped first B component;
     Step A4, converting the first brightness component, the remapped first A component and the remapped first B component back to RGB color space to obtain a remapped haze synthesized image;
     Step A5, enhancing the remapped haze synthesized image with a guided-filtering-based edge-aware enhancement algorithm to obtain the enhanced haze synthesized image.
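The histogram matching of step A3 can be sketched with the standard CDF-inversion technique; this is a generic implementation, not the patent's exact procedure, and the RGB/LAB conversions and guided-filter enhancement of steps A1/A2/A4/A5 are omitted.

```python
import numpy as np

def match_histogram(src, ref):
    """Step A3 sketch: remap `src` (a first A or B chroma component) so that
    its histogram matches `ref` (the corresponding second component from the
    real haze image), via empirical-CDF inversion."""
    shape = src.shape
    src_vals, src_idx, src_counts = np.unique(
        src.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(ref.ravel(), return_counts=True)
    # Empirical CDFs of the source and reference components.
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size
    # Map each source quantile to the reference value at the same quantile.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(shape)
```

Matching only the chroma components while keeping the first brightness component, as in steps A3-A4, transfers the real image's color cast without altering the synthesized luminance.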

Description

Haze image synthesis method based on depth guidance and domain alignment

Technical Field

The invention belongs to the technical field of haze image synthesis, and particularly relates to a haze image synthesis method based on depth guidance and domain alignment.

Background

With the wide application of computer vision technology in fields such as automatic driving, video monitoring and remote sensing mapping, image quality has become a key factor affecting the performance of a vision system. However, under severe weather conditions such as fog and haze, suspended particles in the atmosphere absorb and scatter light, so that acquired images suffer from reduced contrast, color distortion and loss of detail, seriously restricting the reliability and robustness of the vision system. Currently, deep-learning-based image defogging methods have demonstrated superior performance on multiple benchmark datasets, but their effectiveness largely depends on large-scale, high-quality training data. Acquiring paired clear and hazy images in a real scene is very challenging, and haze conditions differ significantly across regions and seasons, so constructing a sufficiently diverse real haze dataset is extremely difficult. This data bottleneck severely constrains the further development and practical application of defogging networks. Existing hazy image synthesis methods fall mainly into two categories: physical-model-based methods and data-driven methods. Physical-model-based methods synthesize hazy images by estimating the transmittance map and the atmospheric light value according to the atmospheric scattering model.
However, the transmittance estimation often adopts simplifying assumptions, such as a uniform fog concentration distribution, and ignores the geometric information of the scene, so the resulting hazy images lack realism. Data-driven methods, such as generative adversarial networks, can learn the distribution characteristics of hazy images from data and generate visually realistic hazy images. However, such methods lack explicit physical constraints, and the resulting images may be deficient in physical plausibility. In addition, existing synthetic hazy image datasets still show a gap from real haze images in visual realism and physical plausibility, so defogging networks trained on these datasets perform poorly in real scenes and generalize only to a limited extent. Therefore, what is currently lacking is a haze image synthesis method based on depth guidance and domain alignment that is simple in structure and reasonable in design, and that generates haze synthesized images with a realistic depth-of-field effect and a natural fog concentration distribution through depth-information guidance and domain-aligned physical-model constraints, thereby improving the diversity and authenticity of the synthesized images and, in turn, the training effect and generalization capability of subsequent defogging networks.
Disclosure of the Invention

Aiming at the above defects in the prior art, the invention provides a haze image synthesis method based on depth guidance and domain alignment that is simple in its steps and reasonable in design, generates haze synthesized images with a realistic depth-of-field effect and a natural haze concentration distribution through depth-information guidance and domain-aligned physical-model constraints, improves the diversity and authenticity of the synthesized images, and thereby improves the training effect and generalization capability of subsequent defogging networks. To solve the above technical problems, the technical scheme adopted by the invention is a haze image synthesis method based on depth guidance and domain alignment comprising the following steps: step one, obtaining and processing depth information to obtain a depth normalization value: performing depth estimation on a clear image to be synthesized with a DPT-Hybrid model to obtain a depth map, and normalizing the depth map to obtain a depth normalization value; step two, aligning the real haze image and the clear image to be synthesized through domain alignment to obtain an updated transmittance, the updated transmittance of the j-th pixel point being denoted t_j', where j is a positive integer taking values 1 to J, and J is the total number of pixel points; step three, obtaining an atmospheric light value A' from the real haze image; step four, obtaining the transmittance under different atmospheric scattering coefficients based on the depth normalization value and an atmospheric scattering