CN-121685338-B - Remote sensing image-oriented multitasking collaborative iteration defogging method

CN121685338B

Abstract

The application discloses a remote sensing image-oriented multi-task collaborative iterative defogging method, belonging to the technical field of computer-vision-based image restoration. The method comprises: step 1, acquiring a remote sensing image data set; step 2, inputting the remote sensing foggy images of the training set into a defogging network; step 3, inputting the remote sensing foggy images of the training set, together with a blue-gradient prior feature map and a depth prior feature map, into a cloud detection network; step 4, computing a cloud-aware reconstruction loss from the defogged remote sensing image and the cloud probability map; step 5, combining the iteratively trained defogging network with the cloud detection network to obtain a defogging network framework; and step 6, taking the remote sensing foggy images of the test set from step 1 as input to the defogging network framework, evaluating, and outputting the test results. Experimental results show that the method outperforms existing comparable methods in both objective metrics and subjective visual evaluation.

Inventors

  • JU MINGYE
  • WANG HAN
  • LIU QINGSHAN

Assignees

  • Nanjing University of Posts and Telecommunications (南京邮电大学)

Dates

Publication Date
2026-05-08
Application Date
2026-02-12

Claims (7)

  1. A multi-task collaborative iterative defogging method for remote sensing images, characterized by comprising the following steps:
     Step 1, acquiring a remote sensing image data set, wherein the data set comprises a training set and a test set;
     Step 2, inputting the remote sensing foggy images of the training set into a defogging network iGLD-Net to obtain defogged remote sensing images and depth prior feature maps;
     Step 3, inputting the remote sensing foggy images of the training set and the depth prior feature maps jointly into a cloud detection network CD-Net, where they are modulated by four cascaded prior gating modules MGFF; the first-stage MGFF receives an initial image feature map through its main branch and applies per-pixel depthwise convolution and nonlinear activation to it, while its prior branch applies pointwise convolution and grouped depthwise convolution to an initial mixed prior feature map to generate group gating weights; the group gating weights modulate the main-branch output by element-wise product to produce the first-stage modulated feature map, and the initial mixed prior feature map and the group gating weights are concatenated and convolved to output the updated first-stage mixed prior feature map; the second-stage MGFF takes the first-stage modulated feature map as the input of its main branch and the first-stage mixed prior feature map as the input of its prior branch, repeating the main-branch and prior-branch processing and modulation; the fourth-stage MGFF finally outputs the cloud probability map predicted by the cloud detection network CD-Net;
     Step 4, computing a cloud-aware reconstruction loss from the defogged remote sensing image obtained in step 2 and the cloud probability map obtained in step 3, and then computing a frequency-domain reciprocal consistency loss using the learnable frequency-domain fog-adding module LFHM and the learnable frequency-domain defogging module LFDM in the defogging network iGLD-Net;
     Step 5, repeating steps 2, 3 and 4 in sequence, each cycle counting as one iteration, until a preset number of iterations is reached, and then combining the iteratively trained defogging network iGLD-Net with the cloud detection network CD-Net to obtain a defogging network framework ICDF;
     Step 6, taking the remote sensing foggy images of the test set from step 1 as input to the defogging network framework ICDF, evaluating, and outputting the test results.
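The prior gating operation that claim 1 describes (main branch: depthwise convolution plus activation; prior branch: pointwise plus grouped depthwise convolution producing gating weights; element-product modulation; prior update by concatenation and convolution) can be sketched roughly in NumPy. This is a minimal illustrative stand-in with random weights, not the patented implementation; channel counts and the sigmoid gate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def depthwise_conv3x3(x, k):
    """Per-channel 3x3 convolution with zero padding. x: (C,H,W), k: (C,3,3)."""
    C, H, W = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + 3, j:j + 3] * k[c])
    return out

def mgff_stage(feat, prior):
    """One prior-gating (MGFF) stage sketch: the prior branch produces gating
    weights that modulate the main branch by element-wise product, and the
    prior map is updated from the concatenated result."""
    C, P = feat.shape[0], prior.shape[0]
    # Main branch: depthwise conv + nonlinear activation (ReLU here).
    k_main = rng.standard_normal((C, 3, 3)) * 0.1
    main = np.maximum(depthwise_conv3x3(feat, k_main), 0.0)
    # Prior branch: pointwise (1x1) conv, then depthwise conv -> sigmoid gate.
    w_pw = rng.standard_normal((C, P)) * 0.1
    p = np.einsum('op,phw->ohw', w_pw, prior)
    k_prior = rng.standard_normal((C, 3, 3)) * 0.1
    gate = 1.0 / (1.0 + np.exp(-depthwise_conv3x3(p, k_prior)))
    modulated = main * gate                     # element-product modulation
    # Prior update: concat(projected prior, gate) -> 1x1 conv back to P channels.
    cat = np.concatenate([p, gate], axis=0)
    w_up = rng.standard_normal((P, 2 * C)) * 0.1
    new_prior = np.einsum('oc,chw->ohw', w_up, cat)
    return modulated, new_prior
```

Cascading four such stages, with each stage's modulated map and updated prior feeding the next, mirrors the four-stage MGFF pipeline of the claim.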
  2. The remote sensing image-oriented multi-task collaborative iterative defogging method according to claim 1, wherein step 1 specifically comprises the following steps: step 11, performing cloud detection on the remote sensing foggy images of the training set with a conventional thresholding method, manually refining the coarse labels of the cloud regions, and constructing cloud mask training samples corresponding to the remote sensing foggy images, wherein pixels in cloud regions are labeled 1 and pixels in the remaining cloud-free regions are labeled 0; and step 12, dividing the test set into two types, namely a synthetic image test set and a real image test set.
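The coarse threshold-based cloud labeling of step 11 might look like the following sketch. The brightness/saturation thresholds are illustrative assumptions (clouds tend to be bright and low-saturation); the claim leaves the exact threshold rule unspecified, and manual refinement follows this rough pass.

```python
import numpy as np

def rough_cloud_mask(img, bright_thresh=0.75, sat_thresh=0.1):
    """Coarse cloud labeling by thresholding: pixels that are bright and
    low-saturation are flagged 1 (cloud), everything else 0.
    img: (H, W, 3) float array in [0, 1]."""
    brightness = img.mean(axis=-1)
    saturation = img.max(axis=-1) - img.min(axis=-1)
    return ((brightness > bright_thresh) & (saturation < sat_thresh)).astype(np.uint8)
```

The resulting binary map serves as the cloud mask training sample after manual correction of the cloud-region boundaries.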
  3. The remote sensing image-oriented multi-task collaborative iterative defogging method according to claim 2, wherein in step 2 the defogging network iGLD-Net adopts a mixed-domain architecture comprising two lightweight encoder-decoder branches, M-Net and G-Net; the mixed-domain architecture is realized by the learnable frequency-domain defogging module LFDM and the learnable frequency-domain fog-adding module LFHM; and the specific flow of the defogging network iGLD-Net comprises the following steps: step A, during iterative training, the defogging network iGLD-Net feeds the defogged remote sensing image obtained in the previous iteration into M-Net and G-Net in parallel, which respectively output the haze distribution mask and the gamma correction mask of the current iteration; the two masks are fused pixel-wise by formula (1) to produce an intermediate fusion result at each pixel coordinate, and this intermediate result is processed by the learnable frequency-domain defogging module LFDM according to formula (2) to obtain the defogged remote sensing image of the current iteration; step B, on the frequency-domain side of the defogging network iGLD-Net, masks constructed by the learnable frequency-domain fog-adding module LFHM and the learnable frequency-domain defogging module LFDM control the low-frequency and high-frequency components respectively, wherein the LFHM performs low-frequency enhancement on the intermediate depth feature map of M-Net on the frequency-domain side to form the depth prior feature map.
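The frequency-side masking of step B can be sketched with a fixed radial mask in place of the learnable one. This is only an illustration of the mechanism (boosting low-frequency content of a feature map via FFT); the cutoff and gain here are assumed constants, whereas LFHM/LFDM would learn their masks.

```python
import numpy as np

def freq_mask_filter(feat, cutoff=0.25, gain=1.5):
    """Frequency-side filtering sketch: amplify low-frequency components of a
    2-D feature map with a radial mask, as a stand-in for the learnable masks
    that the LFHM/LFDM modules would predict."""
    H, W = feat.shape
    F = np.fft.fftshift(np.fft.fft2(feat))          # move DC to the center
    yy, xx = np.mgrid[:H, :W]
    r = np.hypot((yy - H / 2) / H, (xx - W / 2) / W)  # normalized radius
    mask = np.where(r < cutoff, gain, 1.0)            # low-frequency boost
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

Replacing the hard-coded `mask` with predicted per-frequency weights gives the learnable variant; the defogging-side module would instead attenuate or reshape the high-frequency band.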
  4. The remote sensing image-oriented multi-task collaborative iterative defogging method according to claim 3, wherein the processing flow of the cloud detection network CD-Net in step 3 comprises the following steps: step 31, extracting a blue-gradient prior feature map from the remote sensing foggy images of the training set by formula (3), which combines, at each pixel coordinate, the gradient value of the blue channel and the gradient value of the red channel; step 32, concatenating the blue-gradient prior feature map obtained in step 31 with the depth prior feature map along the channel dimension according to formula (4), which denotes channel-wise concatenation, to obtain the initial mixed prior feature map; step 33, inputting the remote sensing foggy images of the training set and the initial mixed prior feature map jointly into the first-stage prior gating module MGFF of the cloud detection network CD-Net; step 34, the main branch of the first-stage MGFF applies per-pixel depthwise convolution and nonlinear activation to the initial image feature map by formula (6), in which grouping is performed along the channel dimension, and performs element-wise product modulation with the group gating weights provided by the prior branch to obtain the first-stage modulated feature map output by the main branch; step 36, the prior branch of the first-stage MGFF concatenates and convolves the initial mixed prior feature map and the group gating weights by formula (7) to output the updated first-stage mixed prior feature map; step 37, the second-stage MGFF takes the first-stage modulated feature map as the input feature map of its main branch and the first-stage mixed prior feature map as the input of its prior branch; the third-stage and fourth-stage MGFF modules repeat the main-branch and prior-branch processing and modulation, and the fourth-stage MGFF finally outputs the cloud probability map predicted by the cloud detection network CD-Net.
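The blue-gradient prior of step 31 can be sketched as follows. The translation does not reproduce formula (3), so the exact combination of the blue- and red-channel gradients is an assumption; a simple difference of gradient magnitudes is used here purely for illustration.

```python
import numpy as np

def blue_gradient_prior(img):
    """Blue-gradient prior sketch (assumed form of formula (3)): per-pixel
    difference between the blue-channel and red-channel gradient magnitudes,
    since cloud and haze attenuate the blue channel's gradients differently.
    img: (H, W, 3) float array, channels ordered R, G, B."""
    def grad_mag(ch):
        gy, gx = np.gradient(ch)
        return np.hypot(gy, gx)
    return grad_mag(img[..., 2]) - grad_mag(img[..., 0])  # blue minus red
```

Concatenating this map with the depth prior feature map along the channel axis (step 32) yields the initial mixed prior fed to the first MGFF stage.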
  5. The remote sensing image-oriented multi-task collaborative iterative defogging method according to claim 4, wherein the cloud-aware reconstruction loss of step 4 weights the cloud regions of the defogged remote sensing images output by the defogging network iGLD-Net using the cloud probability map predicted by the cloud detection network CD-Net, and is computed by formula (8), whose terms include the height and width of the cloud detection map, the total number of iterations, the L1 norm, the cloud probability map predicted by CD-Net at each round, the defogged remote sensing image obtained at each iteration, and the clear reference remote sensing image.
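A plausible shape for the cloud-aware reconstruction loss of claim 5 is sketched below: a per-iteration L1 distance to the clear reference, re-weighted so that pixels with high predicted cloud probability count more, then averaged over pixels and iterations. The `1 + P` weighting is an assumption, since formula (8) is not reproduced in the translation.

```python
import numpy as np

def cloud_aware_l1(dehazed_iters, clear, cloud_probs):
    """Cloud-aware reconstruction loss sketch (assumed form of formula (8)).
    dehazed_iters: list of (H, W, 3) outputs, one per iteration;
    clear: (H, W, 3) ground-truth image;
    cloud_probs: list of (H, W) predicted cloud probability maps."""
    T = len(dehazed_iters)
    H, W = clear.shape[:2]
    total = 0.0
    for J, P in zip(dehazed_iters, cloud_probs):
        w = 1.0 + P[..., None]         # up-weight cloudy pixels (assumed form)
        total += np.abs(w * (J - clear)).sum() / (H * W)
    return total / T
```

The weighting couples the two tasks: a confident cloud prediction directly amplifies the reconstruction penalty over the corresponding region.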
  6. The remote sensing image-oriented multi-task collaborative iterative defogging method according to claim 5, wherein the frequency-domain reciprocal consistency loss of step 4 constrains the learnable frequency-domain fog-adding module LFHM and the learnable frequency-domain defogging module LFDM through the fog-adding amplitude scale factor and fog-adding curvature parameter output by LFHM and the defogging scale factor and defogging curvature parameter output by LFDM, and is obtained by formula (9), whose terms include a preset fixed balance coefficient, an all-ones frequency constant, and the total number of iterations.
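Since formula (9) is not reproduced in the translation, the following is only an assumed sketch of a reciprocal consistency constraint: if the fog-adding and defogging modules are to act as inverses, the products of their scale factors (and of their curvature parameters) should be pushed toward the all-ones constant.

```python
import numpy as np

def reciprocal_consistency(s_haze, g_haze, s_dehaze, g_dehaze, lam=0.5):
    """Frequency-domain reciprocal consistency loss sketch (assumed form of
    formula (9)): penalize deviation of the fog-adding x defogging parameter
    products from the all-ones constant; lam is the fixed balance coefficient."""
    one = np.ones_like(s_haze)
    return (np.abs(s_haze * s_dehaze - one).mean()
            + lam * np.abs(g_haze * g_dehaze - one).mean())
```

The loss is zero exactly when the two modules' parameters are mutual reciprocals, which discourages overlapping or unbalanced responses in any frequency band.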
  7. The remote sensing image-oriented multi-task collaborative iterative defogging method according to claim 6, wherein the evaluation process in step 6 is as follows: peak signal-to-noise ratio PSNR, structural similarity SSIM and learned perceptual image patch similarity LPIPS are selected as evaluation metrics for the synthetic image test set, while for the real image test set the no-reference metrics natural image quality evaluator NIQE, multi-dimension attention image quality assessment MANIQA and multi-scale image quality assessment MUSIQ are adopted to measure the defogging effect.
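Of the full-reference metrics named in claim 7, PSNR is simple enough to state exactly; the sketch below uses the standard definition for images with a known peak value.

```python
import numpy as np

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image with the given peak value; infinite for identical images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

SSIM and LPIPS require windowed statistics and a pretrained perceptual network respectively, so in practice library implementations (e.g. from scikit-image and the LPIPS package) would be used rather than hand-rolled code.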

Description

Remote sensing image-oriented multitasking collaborative iteration defogging method

Technical Field

The application relates to the technical field of artificial intelligence algorithms and defogging processing of remote sensing images, and in particular to a multi-task collaborative iterative defogging method for remote sensing images.

Background

In recent years, with the rapid growth of applications such as high-resolution remote sensing and disaster monitoring, optical satellite imagery has become an irreplaceable data source for obtaining surface information. However, remote sensing imaging is often affected by high-altitude thin clouds and haze, which reduce image contrast and seriously weaken the accuracy and reliability of downstream tasks such as change detection and target identification. Unlike conventional low-altitude photographs, remote sensing images exhibit non-uniform haze distributions coupled with cloud layers, so general-purpose defogging algorithms are difficult to apply directly. Most existing methods are based on the atmospheric scattering model and assume that haze is uniform and exists only near the ground; however, at high imaging altitudes cloud layers and haze layers are mixed and the spectral response is complex, so color distortion, loss of surface detail and similar problems often occur after defogging. Efficient remote sensing image defogging and cloud detection has therefore become a key link in remote sensing data preprocessing.

To break through the limitation of the uniform-haze assumption, deep learning has in recent years been introduced into the remote sensing defogging field: convolutional neural networks are used to learn the mapping between foggy and clear images, and physical constraints are embedded to improve interpretability. However, purely data-driven methods still face a prominent bottleneck: most perform single-task defogging and lack joint modeling of the coupled cloud and haze degradations, so region-adaptive restoration is difficult when non-uniform thick fog and cloud layers coexist; thick-fog regions are insufficiently restored while clear regions are frequently over-enhanced. Lacking cloud-fog prior interaction, purely data-driven schemes also easily misjudge cloud edges as dense fog, introducing spectral artifacts and losing texture detail. In the remote sensing defogging task it is therefore necessary to jointly model the two degradations of cloud and fog and to construct a complementary mechanism between their information, so as to improve defogging quality and the reliability of downstream applications.

The current technical problems can be summarized as follows:

1. Cloud layers and fog layers have similar brightness and texture in remote sensing images and often exhibit spatially aliased distributions. Existing single-task defogging methods lack cloud-information constraints, struggle to distinguish cloud from fog effectively, and easily suppress cloud bodies and their edges as if they were haze, causing cloud-structure distortion and detail loss.

2. In high-altitude remote sensing scenes, non-uniform haze shows obvious spatial and spectral variation with terrain and meteorological conditions, and the traditional uniform-scattering assumption does not hold. Existing deep learning defogging methods can capture this variability to some extent, but still lack prior guidance based on physical characteristics and cannot effectively exploit spectral priors such as the blue channel to distinguish cloud from fog, leading to over- or under-enhancement in mixed cloud-fog regions. In addition, traditional multi-prior combination is usually realized by simple concatenation or weighting, which allows irrelevant priors to interfere with feature extraction and increases the uncertainty of the restoration process.

3. Existing remote sensing defogging methods consider only the unidirectional defogging or fog-adding process in frequency-domain modeling and lack complementary constraints on the spectral allocation of the two, which easily causes overlapping low- and high-frequency components or uneven energy distribution, leading to structural blurring, high-frequency artifacts and similar problems. In particular, in complex non-uniform haze regions, when the frequency-domain mask responds repeatedly in an excessively high frequency band, the brightness and color of the defogging result become inconsistent, reducing the overall restoration quality and stability.

Disclosure of Invention

In order to overcome the above technical problems, the application provides a remote sensing image-oriented multi