CN-116703759-B - Image defogging method and system
Abstract
The invention provides an image defogging method and system capable of defogging a foggy image in real time with high precision. The method comprises the steps of: judging whether a foggy-day image is a thick fog or a thin fog image; carrying out image defogging by a corresponding method A or B according to the judgment result; selecting only the first 0.1% of pixels with the largest dark channel value in a 1/4 area of a given single foggy-day image and calculating the atmospheric light intensity as the average value of the pixels mapped to the corresponding positions of the foggy image; roughly estimating the pixel transmittance with an improved formula to obtain a rough transmittance map; taking the original image converted to grayscale as the guide image and the rough transmittance map as the input image to obtain a refined transmittance map, from which a fog-free image is recovered; and, in method B, carrying out training and feature extraction with an improved K estimation module to obtain the parameters required for recovering the fog-free image, so as to generate the fog-free image.
Inventors
- LI JING
- FU JINGYI
- YE WEIJIAN
- LIU TIANPENG
- SONG BEIHANG
Assignees
- 武汉大学 (Wuhan University)
Dates
- Publication Date
- 20260512
- Application Date
- 20230525
Claims (10)
- 1. An image defogging method, characterized by comprising the steps of: step I, judging whether the foggy-day image was captured under a thick fog condition or a thin fog condition; step II, carrying out image defogging by the method corresponding to the judgment result, namely adopting an image defogging improvement method A based on the dark channel prior algorithm for the thick fog case, and an AOD-Net image defogging improvement method B for the thin fog case; the image defogging improvement method A based on the dark channel prior algorithm comprises the following steps: step A1, for a given single foggy-day image, processing the three color channels R, G and B of the image with a minimum filter respectively to calculate the dark channel values of the image; step A2, based on the dark channel values calculated in step A1, for the given single foggy-day image, selecting the first 0.1% of pixels with the largest dark channel value in a 1/4 area of the image, and calculating the atmospheric light intensity as the average value of the pixels mapped to the corresponding positions of the foggy image; step A3, for the given single foggy-day image, roughly estimating the transmittance of the image with an improved formula to obtain a rough transmittance map: t(x) = 1 − ω · min_{y∈Ω(x)} [ min_c ( I^c(y) / A^c ) ] (A3), wherein the correction coefficient ω = 0.9; min_{y∈Ω(x)} denotes the minimum filter; min_c denotes the minimum over the three color channels; the superscript c indexes the three color channels, I^c(y) denotes the foggy image to be processed, and A^c denotes the atmospheric light intensity of channel c; step A4, for the given single foggy-day image, taking the original image converted to grayscale as the guide image and the rough transmittance map obtained in step A3 as the input image, the output image being the refined transmittance map; step A5, defogging the foggy-day image based on the data obtained in steps A2 and A4, recovering the fog-free image; the AOD-Net image defogging improvement method B comprises the following steps: step B1, training with an improved K estimation module in the model training process; the improved K estimation module comprises five convolution layers with kernel sizes 1×1, 3×3, 5×5, 7×7 and 3×3 respectively; the outputs of convolution layers 1 and 2 are concatenated and, after a spatial dropout (spatial random inactivation) operation, used as the input of convolution layer 3; the outputs of convolution layers 2 and 3 are concatenated and used as the input of convolution layer 4; and the outputs of convolution layers 1, 2, 3 and 4 are concatenated and used as the input of convolution layer 5; step B2, extracting features of the foggy-day image with the improved K estimation module based on the training result of step B1 to obtain the parameters required for recovering the fog-free image; and step B3, generating the fog-free image with the fog-free image generation module of the AOD algorithm and the parameters obtained in step B2.
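Steps A1-A2 of method A can be sketched in pure Python as follows. This is a hedged illustration only: the patent does not fix implementation details, so the function names are ours, the image and window sizes are illustrative, and the "1/4 area" is read here as the top quarter of the image (a common choice for sky-region atmospheric light, but an assumption).

```python
def dark_channel(img, radius=15):
    """Step A1: per-pixel minimum over the R, G, B channels, followed by a
    minimum filter with the given window radius.
    img: H x W x 3 nested lists with values in [0, 1]."""
    h, w = len(img), len(img[0])
    per_pixel_min = [[min(img[y][x]) for x in range(w)] for y in range(h)]
    dark = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ys = range(max(0, y - radius), min(h, y + radius + 1))
            xs = range(max(0, x - radius), min(w, x + radius + 1))
            dark[y][x] = min(per_pixel_min[yy][xx] for yy in ys for xx in xs)
    return dark

def atmospheric_light(img, dark, top_frac=0.001):
    """Step A2: among the pixels of the top quarter of the image (assumed
    reading of the patent's '1/4 area'), take the top_frac = 0.1% with the
    largest dark-channel value and average the corresponding foggy-image
    pixels per channel."""
    h, w = len(img), len(img[0])
    region = [(dark[y][x], y, x) for y in range(h // 4 or 1) for x in range(w)]
    region.sort(reverse=True)
    k = max(1, int(len(region) * top_frac))
    picked = region[:k]
    return [sum(img[y][x][c] for _, y, x in picked) / k for c in range(3)]
```

With a tiny 4×4 test image the 15-pixel window covers the whole frame, so the dark channel collapses to the global minimum; real images would of course be much larger than the filter window.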
- 2. The image defogging method according to claim 1, wherein: in step I, the judgment is made according to the atmospheric scattering coefficient β: when β ≥ 2.7, a thick fog condition is judged, and when β < 2.7, a thin fog condition is judged.
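The threshold rule of claim 2 amounts to a simple dispatch between the two methods. A minimal sketch (the function name is ours, not from the patent):

```python
def choose_method(beta, thick_threshold=2.7):
    """Claim 2: beta >= 2.7 -> thick fog, use method A (dark channel prior);
    beta < 2.7 -> thin fog, use method B (improved AOD-Net)."""
    return "A" if beta >= thick_threshold else "B"
```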
- 3. The image defogging method according to claim 1, wherein: in step B1, the following loss function is used in place of the original loss function of the AOD algorithm: (B1), where x i is a predicted value, y i is a target value, n is the total number of predicted values or target values, and δ is a hyperparameter.
- 4. The image defogging method according to claim 3, wherein: in step B1, δ = 1.0.
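Formula (B1) itself is not reproduced in this text, but its described ingredients (predicted value x_i, target value y_i, sample count n, a single threshold hyperparameter δ with δ = 1.0) match the standard Huber loss, a common replacement for AOD-Net's original MSE loss. The sketch below assumes that reading; it is an interpretation, not the patent's verbatim formula.

```python
def huber_loss(preds, targets, delta=1.0):
    """Assumed form of (B1): quadratic for residuals |x - y| <= delta,
    linear beyond, averaged over the n samples (delta as in claim 4)."""
    n = len(preds)
    total = 0.0
    for x, y in zip(preds, targets):
        r = abs(x - y)
        if r <= delta:
            total += 0.5 * r * r          # small residual: quadratic branch
        else:
            total += delta * r - 0.5 * delta * delta  # large residual: linear branch
    return total / n
```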
- 5. An image defogging method, characterized by comprising the steps of: step A1, for a given single foggy-day image, processing the three color channels R, G and B of the image with a minimum filter of window radius r = 15 to calculate the dark channel values of the image; step A2, based on the dark channel values calculated in step A1, for the given single foggy-day image, selecting the first 0.1% of pixels with the largest dark channel value in a 1/4 area of the image, and calculating the atmospheric light intensity as the average value of the pixels mapped to the corresponding positions of the foggy image; step A3, for the given single foggy-day image, roughly estimating the transmittance of the image with an improved formula to obtain a rough transmittance map: t(x) = 1 − ω · min_{y∈Ω(x)} [ min_c ( I^c(y) / A^c ) ] (A3), wherein the correction coefficient ω = 0.9; min_{y∈Ω(x)} denotes the minimum filter; min_c denotes the minimum over the three color channels; the superscript c indexes the three color channels, and A^c denotes the atmospheric light intensity of channel c; step A4, for the given single foggy-day image, taking the original image converted to grayscale as the guide image and the rough transmittance map obtained in step A3 as the input image, the output image being the refined transmittance map; step A5, defogging the foggy-day image based on the data obtained in steps A2 and A4, recovering the fog-free image; in step A4, the specific process of the transmittance map refinement step is as follows: 1) downsampling the input image p and the guide image I with downsampling ratio s, the results being p' and I'; 2) applying mean filtering of radius r' to p' and I', the results being mean_p and mean_I respectively, and applying mean filtering of radius r' to the values of I' point-multiplied by I' and I' point-multiplied by p' respectively, the results being corr_I and corr_Ip respectively; 3) calculating the variance var_I of I' and the covariance cov_Ip of I' and p', where the variance and covariance formulas are var_I = corr_I − mean_I .* mean_I and cov_Ip = corr_Ip − mean_I .* mean_p respectively, '.*' denoting the point (element-wise) multiplication operation; 4) calculating a = cov_Ip ./ (var_I + ε) and b = mean_p − a .* mean_I, where ε is a regularization parameter controlling smoothness and './' is the point division operation corresponding to point multiplication; 5) applying mean filtering of radius r' to a and b respectively, the results being mean_a and mean_b; 6) upsampling mean_a and mean_b with ratio s; 7) substituting the upsampled mean_a and mean_b into the formula q = mean_a .* I + mean_b, the final q being the output image after guided filtering; the linear-combination form of the formula q = mean_a .* I + mean_b enables the structure of the guide image I to be transferred to the output during filtering.
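The refinement steps 1)-7) above follow the fast guided filter. The sketch below illustrates steps 2)-7) in one dimension with pure Python, under stated simplifications: the subsampling ratio is taken as s = 1 (so the down/upsampling of steps 1 and 6 become identities), and the radius and ε values are illustrative rather than the patent's.

```python
def box_mean(v, r):
    """Mean filter of radius r over a 1-D list, clamping at the edges."""
    n = len(v)
    out = []
    for i in range(n):
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out.append(sum(v[lo:hi]) / (hi - lo))
    return out

def guided_filter_1d(I, p, r=2, eps=1e-3):
    """Steps 2)-7): means and correlations, variance/covariance,
    linear coefficients a and b, then q = mean_a .* I + mean_b.
    All operations are element-wise, as in the claim."""
    mean_I, mean_p = box_mean(I, r), box_mean(p, r)
    corr_I = box_mean([i * i for i in I], r)
    corr_Ip = box_mean([i * j for i, j in zip(I, p)], r)
    var_I = [c - m * m for c, m in zip(corr_I, mean_I)]
    cov_Ip = [c - mi * mp for c, mi, mp in zip(corr_Ip, mean_I, mean_p)]
    a = [cv / (vr + eps) for cv, vr in zip(cov_Ip, var_I)]
    b = [mp - ai * mi for mp, ai, mi in zip(mean_p, a, mean_I)]
    mean_a, mean_b = box_mean(a, r), box_mean(b, r)
    return [ma * i + mb for ma, i, mb in zip(mean_a, I, mean_b)]
```

A quick sanity check: when the guide is constant, var_I = 0, so a = 0 and the output collapses to the local mean of p, which is the expected smoothing behaviour.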
- 6. The image defogging method according to claim 5, wherein: the defogging method is used for defogging the foggy-day image under the thick fog condition in which the atmospheric scattering coefficient β ≥ 2.7.
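Step A3's rough transmittance can be sketched per pixel from the quantities of steps A1-A2. A hedged simplification: the patent applies a spatial minimum filter over a window, which is reduced here to a per-pixel channel minimum for brevity, and the function name is ours.

```python
def rough_transmittance(img, A, omega=0.9):
    """Step A3 (simplified): t(x) = 1 - omega * min_c( I^c(x) / A^c ),
    with correction coefficient omega = 0.9 and atmospheric light A per channel.
    The windowed minimum filter of the patent is omitted for brevity."""
    h, w = len(img), len(img[0])
    return [[1.0 - omega * min(img[y][x][c] / A[c] for c in range(3))
             for x in range(w)] for y in range(h)]
```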
- 7. An image defogging method, characterized by comprising the steps of: step B1, training with an improved K estimation module in the model training process; the improved K estimation module comprises five convolution layers with kernel sizes 1×1, 3×3, 5×5, 7×7 and 3×3 respectively; the outputs of convolution layers 1 and 2 are concatenated and, after a spatial dropout (spatial random inactivation) operation, used as the input of convolution layer 3; the outputs of convolution layers 2 and 3 are concatenated and used as the input of convolution layer 4; and the outputs of convolution layers 1, 2, 3 and 4 are concatenated and used as the input of convolution layer 5; step B2, extracting features of the foggy-day image with the improved K estimation module based on the training result of step B1 to obtain the parameters required for recovering the fog-free image; step B3, generating the fog-free image with the fog-free image generation module of the AOD algorithm and the parameters obtained in step B2; in step B1, the improved K estimation module and the fog-free image generation module form a network, and the fog-free image is calculated and recovered according to the deformed atmospheric scattering model, the deformed atmospheric scattering model being defined as: J(x) = K(x) · I(x) − K(x) + b (B1-1), K(x) = [ (I(x) − A) / t(x) + (A − b) ] / (I(x) − 1) (B1-2); the K estimation module is the key structure of the network and is responsible for calculating the value of K(x) in formulas (B1-1) and (B1-2) and transmitting this parameter to the next module to recover the fog-free image; the initial input is the foggy image I(x), and the parameter K(x) required for recovering the fog-free image is obtained after calculation by the K estimation module; the following loss function is used in place of the original loss function of the AOD algorithm: (B1), where x i is a predicted value, y i is a target value, n is the total number of predicted values or target values, and δ is a hyperparameter.
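Once K(x) is produced by the K estimation module, the generation module of step B3 applies the deformed model per pixel. A minimal sketch of that final step (single-channel flat lists for brevity; b is the constant bias of the AOD-Net formulation, set to 1 by default in the original AOD-Net paper, and the K values here are illustrative, not from a trained network):

```python
def recover_dehazed(I, K, b=1.0):
    """Per-pixel form of (B1-1): J(x) = K(x) * I(x) - K(x) + b.
    I: hazy intensities, K: output of the K estimation module."""
    return [k * i - k + b for i, k in zip(I, K)]
```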
- 8. The image defogging method according to claim 7, wherein: the defogging method is used for defogging the foggy-day image under the thin fog condition in which the atmospheric scattering coefficient β < 2.7.
- 9. An image defogging system, characterized by comprising: a judging part that judges whether the foggy-day image is a thick fog image or a thin fog image; a defogging part that carries out image defogging with the unit corresponding to the judgment result, namely adopting an image defogging improvement unit based on the dark channel prior algorithm for the thick fog case, and an AOD-Net image defogging improvement unit for the thin fog case; the image defogging improvement unit based on the dark channel prior algorithm performs image defogging according to the following steps A1-A5: step A1, for a given single foggy-day image, processing the three color channels R, G and B of the image with a minimum filter respectively to calculate the dark channel values of the image; step A2, based on the dark channel values calculated in step A1, for the given single foggy-day image, selecting the first 0.1% of pixels with the largest dark channel value in a 1/4 area of the image, and calculating the atmospheric light intensity as the average value of the pixels mapped to the corresponding positions of the foggy image; step A3, for the given single foggy-day image, roughly estimating the transmittance of the image with an improved formula to obtain a rough transmittance map: t(x) = 1 − ω · min_{y∈Ω(x)} [ min_c ( I^c(y) / A^c ) ] (A3), wherein the correction coefficient ω = 0.9; min_{y∈Ω(x)} denotes the minimum filter; min_c denotes the minimum over the three color channels; the superscript c indexes the three color channels, and A^c denotes the atmospheric light intensity of channel c; step A4, for the given single foggy-day image, taking the original image converted to grayscale as the guide image and the rough transmittance map obtained in step A3 as the input image, the output image being the refined transmittance map; step A5, defogging the foggy-day image based on the data obtained in steps A2 and A4, recovering the fog-free image; the AOD-Net image defogging improvement unit performs image defogging according to the following steps B1-B3: step B1, training with an improved K estimation module in the model training process; the improved K estimation module comprises five convolution layers with kernel sizes 1×1, 3×3, 5×5, 7×7 and 3×3 respectively; the outputs of convolution layers 1 and 2 are concatenated and, after a spatial dropout (spatial random inactivation) operation, used as the input of convolution layer 3; the outputs of convolution layers 2 and 3 are concatenated and used as the input of convolution layer 4; and the outputs of convolution layers 1, 2, 3 and 4 are concatenated and used as the input of convolution layer 5; step B2, extracting features of the foggy-day image with the improved K estimation module based on the training result of step B1 to obtain the parameters required for recovering the fog-free image; step B3, generating the fog-free image with the fog-free image generation module of the AOD algorithm and the parameters obtained in step B2; and a control part, in communication with the judging part and the defogging part, that controls the operation of both.
- 10. The image defogging system of claim 9, further comprising: an input display part, in communication connection with the control part, for enabling a user to input operation instructions and displaying them correspondingly.
Description
Image defogging method and system Technical Field The invention belongs to the technical field of computer vision, and particularly relates to an image defogging method and system. Background Image defogging aims to remove, by a series of means, particles of different materials, sizes, shapes and concentrations contained in the atmosphere, such as smoke and fog, from images shot under severe weather conditions; the removal of these particulates is collectively referred to as defogging. In recent years, technologies such as automatic driving have been continuously innovated and developed, and in the implementation of automatic driving algorithms, the recognition of traffic sign pictures on the driving road is an important task. In haze weather, indication signs and traffic lights on the road may be blurred, so that they are easily unrecognizable or misrecognized, with potentially very serious consequences. Therefore, effective defogging is necessary for images such as traffic signs in foggy weather. A number of different image defogging algorithms have been proposed, such as color recovery, degradation-model-based methods and optical-model-based methods. These algorithms are designed and implemented mainly on the basis of prior knowledge, physical models, deep learning and other approaches. Researchers have designed many efficient and effective image defogging algorithms based on convolutional neural networks, generative adversarial networks and other deep learning models. Besides the traditional image defogging tasks in indoor and outdoor scenes, more and more researchers have begun to explore further application scenarios; these tasks place higher requirements on the robustness and real-time performance of the algorithms, providing a wider space for research in the image defogging field.
Although researchers have proposed many algorithms, few of them meet the real-time and high-precision requirements of practical application scenarios in image defogging, and the defogging of traffic sign images in foggy weather in particular has stricter requirements on real-time performance. Disclosure of Invention The present invention has been made to solve the above problems, and an object of the present invention is to provide an image defogging method and system capable of defogging images shot in foggy weather, particularly foggy-day traffic sign images, in real time with high precision. In order to achieve the above object, the present invention adopts the following scheme: < method one > The invention provides an image defogging method comprising the following steps: step I, judging whether the foggy-day image was captured under a thick fog condition or a thin fog condition; step II, carrying out image defogging by the method corresponding to the judgment result, namely adopting an image defogging improvement method A based on the dark channel prior algorithm for the thick fog case, and an AOD-Net image defogging improvement method B for the thin fog case; the image defogging improvement method A based on the dark channel prior algorithm comprises the following steps: step A1, for a given single foggy-day image, processing the three color channels R, G and B of the image with a minimum filter (window radius r = 15) respectively to calculate the dark channel values of the image; step A2, based on the dark channel values calculated in step A1, for the given single foggy-day image, selecting only the first 0.1% of pixels with the largest dark channel value in a 1/4 area of the image, and calculating the atmospheric light intensity as the average value of the pixels mapped to the corresponding positions of the foggy image; step A3, for the given single foggy-day image, roughly estimating the transmittance of the image with an improved formula to obtain a rough transmittance map: t(x) = 1 − ω · min_{y∈Ω(x)} [ min_c ( I^c(y) / A^c ) ], wherein the correction coefficient ω = 0.9; min_{y∈Ω(x)} denotes the minimum filter; min_c denotes the minimum over the three color channels; I^c(y) denotes the foggy image to be processed; the superscript c indexes the three color channels, and A^c denotes the atmospheric light intensity of channel c; step A4, for the given single foggy-day image, taking the original image converted to grayscale as the guide image and the rough transmittance map obtained in step A3 as the input image, the output image being the refined transmittance map; step A5, defogging the foggy-day image based on the data obtained in steps A2 and A4, recovering the fog-free image; the AOD-Net image defogging improvement method B comprises the following steps: step B1, training with an improved K estimation module in the model training process; the improved K estim