CN-115760666-B - Remote sensing image fusion method combining ratio transformation and distribution transformation
Abstract
The invention discloses a remote sensing image fusion method combining ratio transformation and distribution transformation. The method performs mean filtering on a panchromatic image and an up-sampled multispectral image to obtain their respective high-frequency components, and from these components derives the high-frequency details missing from the multispectral image, recorded as first high-frequency details. Standard normalization of the first high-frequency details yields second high-frequency details. The pixel mean and standard deviation of each channel of the up-sampled multispectral image are calculated; the up-sampled multispectral image and the first high-frequency details are concatenated and input into a convolution network to generate two affine transformation parameters. Based on these parameters, the obtained mean and standard deviation are injected into the second high-frequency details to produce high-frequency details distributed identically to the up-sampled multispectral image, and the final fused image is obtained by combining these details with the up-sampled multispectral image. The method addresses the spectral distortion and detail distortion present in existing remote sensing image fusion algorithms.
Inventors
- Zhang Liming
- Wei Xingxing
- Yuan Maoxuan
- Li Bo
Assignees
- Beihang University (北京航空航天大学)
Dates
- Publication Date: 2026-05-05
- Application Date: 2022-11-18
Claims (7)
- 1. A remote sensing image fusion method combining ratio transformation and distribution transformation, characterized by comprising the following steps: S1, performing mean filtering on a panchromatic image and an up-sampled multispectral image respectively to obtain their corresponding high-frequency components; S2, obtaining the high-frequency details missing from the multispectral image based on the panchromatic image, the up-sampled multispectral image, and their respective high-frequency components, and recording them as first high-frequency details; S3, performing standard normalization on the first high-frequency details to obtain standard-normalized high-frequency details, recorded as second high-frequency details; S4, concatenating the up-sampled multispectral image with the first high-frequency details and inputting the result into a convolution network to generate a first affine transformation parameter and a second affine transformation parameter; S5, injecting the pixel mean and standard deviation of each channel of the up-sampled multispectral image into the second high-frequency details based on the first and second affine transformation parameters, generating high-frequency details distributed identically to the up-sampled multispectral image, recorded as third high-frequency details; S6, combining the third high-frequency details with the up-sampled multispectral image to obtain the final fused image. Step S2 specifically comprises: S21, concatenating the high-frequency component of the up-sampled multispectral image with the high-frequency component of the panchromatic image and inputting the result into a convolution network to generate a low-resolution panchromatic high-frequency component; S22, concatenating the up-sampled multispectral image with the panchromatic image and inputting the result into a network to generate a low-resolution panchromatic image; S23, adding the low-resolution panchromatic high-frequency component to the low-resolution panchromatic image to obtain a corrected low-resolution panchromatic image; S24, performing a ratio transformation between the panchromatic image and the corrected low-resolution panchromatic image to obtain the details missing from the multispectral image, recorded as first high-frequency details.
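The ratio transformation of step S24 can be sketched as below. This is a minimal illustration, not the patent's implementation: the low-resolution panchromatic image, which the patent generates with the convolution networks of steps S21-S23, is stubbed here with a constant image, and the function name and `eps` guard are assumptions.

```python
import numpy as np

def ratio_transform(pan, pan_low, eps=1e-6):
    """First high-frequency details as the ratio PAN / PAN_low (S24)."""
    # eps guards against division by zero; the patent does not specify this.
    return pan / (pan_low + eps)

rng = np.random.default_rng(0)
pan = rng.uniform(0.5, 1.0, size=(8, 8))        # toy panchromatic image
pan_low = np.full_like(pan, pan.mean())          # stand-in for the network output
detail1 = ratio_transform(pan, pan_low)
# Where PAN matches its low-resolution version, the ratio is close to 1
# (no injected detail); deviations from 1 carry the missing high frequencies.
```

The ratio (rather than a difference) makes the extracted detail a relative gain, which is what allows it to be converted into spectral gain factors later.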
- 2. The remote sensing image fusion method combining ratio transformation and distribution transformation according to claim 1, wherein S1 specifically comprises: S11, acquiring a panchromatic image and a multispectral image; S12, up-sampling the multispectral image to obtain an up-sampled multispectral image of the same scale as the panchromatic image; S13, performing mean-filtering convolution on the panchromatic image and the up-sampled multispectral image respectively to obtain their corresponding low-frequency components; S14, subtracting the corresponding low-frequency components from the panchromatic image and the up-sampled multispectral image respectively to obtain their corresponding high-frequency components.
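Steps S13-S14 amount to a mean-filter decomposition: the low-frequency component is a local mean, and the high-frequency component is the residual. A minimal sketch, assuming a 3x3 kernel and edge padding (the claim fixes neither):

```python
import numpy as np

def mean_filter(img, k=3):
    """Low-frequency component via a k x k mean filter (S13)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + k, x:x + k].mean()
    return out

def high_frequency(img, k=3):
    """High-frequency component as image minus its low-frequency part (S14)."""
    return img - mean_filter(img, k)

img = np.arange(25, dtype=float).reshape(5, 5)
hf = high_frequency(img)
# A perfectly flat image has no high-frequency content, so its residual is zero.
```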
- 3. The remote sensing image fusion method combining ratio transformation and distribution transformation according to claim 1, wherein in S3 the standard normalization of the first high-frequency details specifically comprises: (1) calculating the mean of the pixels in the first high-frequency details by the formula $\mu_{detail} = \frac{1}{HW}\sum_{y=1}^{H}\sum_{x=1}^{W} P_{detail}^{y,x}$, wherein $\mu_{detail}$ represents the pixel mean of the first high-frequency details, $P_{detail}$ represents the first high-frequency details, $P_{detail}^{y,x}$ represents the value of the pixel at position $(x, y)$, $H$ represents the image height, $W$ the image width, $x$ the abscissa of the pixel, and $y$ the ordinate of the pixel; (2) calculating the standard deviation of the pixels in the first high-frequency details by the formula $\sigma_{detail} = \sqrt{\frac{1}{HW}\sum_{y=1}^{H}\sum_{x=1}^{W}\left(P_{detail}^{y,x}-\mu_{detail}\right)^{2}}$, wherein $\sigma_{detail}$ represents the standard deviation of the pixels in the first high-frequency details; (3) performing standard normalization on the first high-frequency details based on this mean and standard deviation to obtain the standard-normalized high-frequency details, recorded as second high-frequency details, by the formula $\hat{P}_{detail} = \frac{P_{detail}-\mu_{detail}}{\sigma_{detail}}$, wherein $\hat{P}_{detail}$ represents the normalized high-frequency details, i.e. the second high-frequency details.
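The standard normalization of step S3 is an ordinary z-score over the whole detail image, which can be sketched as follows (a direct transcription of the claim's mean/standard-deviation/normalization steps; the function name is illustrative):

```python
import numpy as np

def standardize(detail):
    """Standard normalization of the first high-frequency details (S3)."""
    mu = detail.mean()           # mean over all H*W pixels
    sigma = detail.std()         # population standard deviation over all pixels
    return (detail - mu) / sigma # second high-frequency details

rng = np.random.default_rng(1)
d1 = rng.normal(3.0, 2.0, size=(16, 16))   # toy first high-frequency details
d2 = standardize(d1)
# d2 now has zero mean and unit standard deviation, so any target
# distribution can later be imposed on it by an affine transform.
```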
- 4. The remote sensing image fusion method combining ratio transformation and distribution transformation according to claim 1, wherein in S3 the mean and standard deviation of the pixels of each channel in the up-sampled multispectral image are calculated, which specifically comprises: (1) calculating the mean of the pixels of each channel in the up-sampled multispectral image by the formula $\mu_{c} = \frac{1}{HW}\sum_{y=1}^{H}\sum_{x=1}^{W} \widetilde{M}_{c}^{y,x}$, wherein $\mu_{c}$ represents the pixel mean of channel $c$ of the up-sampled multispectral image, $\widetilde{M}_{c}^{y,x}$ represents the pixel value at position $(x, y)$ of channel $c$, $H$ represents the image height, $W$ the image width, $x$ the abscissa of the pixel, and $y$ the ordinate of the pixel; (2) calculating, from this mean, the standard deviation of the pixels of each channel in the up-sampled multispectral image by the formula $\sigma_{c} = \sqrt{\frac{1}{HW}\sum_{y=1}^{H}\sum_{x=1}^{W}\left(\widetilde{M}_{c}^{y,x}-\mu_{c}\right)^{2}}$, wherein $\sigma_{c}$ represents the standard deviation of the pixels of channel $c$.
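The per-channel statistics of the up-sampled multispectral image reduce to one mean and one standard deviation per band. A minimal sketch, assuming a channel-first `(C, H, W)` layout:

```python
import numpy as np

def channel_stats(ms):
    """Per-channel pixel mean and standard deviation of the up-sampled MS image."""
    mu = ms.mean(axis=(1, 2))     # one mean per channel c
    sigma = ms.std(axis=(1, 2))   # one standard deviation per channel c
    return mu, sigma

rng = np.random.default_rng(2)
ms_up = rng.uniform(0.0, 1.0, size=(4, 8, 8))   # 4 bands, 8x8 pixels
mu, sigma = channel_stats(ms_up)
# mu and sigma each have one entry per spectral band.
```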
- 5. The remote sensing image fusion method combining ratio transformation and distribution transformation according to claim 1, wherein in S4 the first affine transformation parameter is used to adjust the pixel mean of each channel of the up-sampled multispectral image, and the second affine transformation parameter is used to adjust the pixel standard deviation of each channel of the up-sampled multispectral image.
- 6. The remote sensing image fusion method combining ratio transformation and distribution transformation according to claim 3, wherein the third high-frequency details are expressed as $P_{c}^{(3)} = \gamma\,\sigma_{c}\,\hat{P}_{detail} + \beta\,\mu_{c}$, wherein $P_{c}^{(3)}$ represents the third high-frequency details of channel $c$; $\sigma_{c}$ represents the pixel standard deviation of channel $c$ of the up-sampled multispectral image; $\mu_{c}$ represents the pixel mean of channel $c$ of the up-sampled multispectral image; $\beta$ represents the first affine transformation parameter; and $\gamma$ represents the second affine transformation parameter.
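The distribution transformation of claim 6 rescales the normalized detail by the channel's standard deviation (modulated by γ) and shifts it by the channel's mean (modulated by β), in the style of adaptive instance normalization. A sketch under the assumption that β and γ are fixed to 1 (in the patent they come from a convolution network):

```python
import numpy as np

def distribution_transform(detail_norm, mu_c, sigma_c, beta=1.0, gamma=1.0):
    """Third high-frequency details for one channel: gamma*sigma_c*P_hat + beta*mu_c."""
    return gamma * sigma_c * detail_norm + beta * mu_c

rng = np.random.default_rng(3)
detail_norm = rng.normal(size=(8, 8))
detail_norm = (detail_norm - detail_norm.mean()) / detail_norm.std()  # zero mean, unit std
d3 = distribution_transform(detail_norm, mu_c=0.4, sigma_c=0.1)
# With beta = gamma = 1, d3 exactly matches the channel's statistics
# (mean 0.4, standard deviation 0.1), i.e. it is "distributed identically"
# to that channel of the up-sampled multispectral image.
```

Learned β and γ let the network deviate from an exact statistics match per channel when that improves the fusion.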
- 7. The remote sensing image fusion method combining ratio transformation and distribution transformation according to claim 1, wherein the final fused image is expressed as $F_{c} = \widetilde{M}_{c} + P_{c}^{(3)}$, wherein $F_{c}$ represents the fused image of channel $c$; $P_{c}^{(3)}$ represents the third high-frequency details of channel $c$; and $\widetilde{M}_{c}$ represents the up-sampled multispectral image of channel $c$.
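Step S6 combines the third high-frequency details with the up-sampled multispectral image channel by channel. The patent's formula image is not reproduced in this text; additive detail injection, the standard scheme for this step, is assumed in the sketch below:

```python
import numpy as np

def fuse(ms_up, detail3):
    """Final fused image per channel: up-sampled MS plus third high-frequency details (assumed additive)."""
    return ms_up + detail3

ms_up = np.full((2, 4, 4), 0.5)      # toy 2-band up-sampled MS image
detail3 = np.full((2, 4, 4), 0.1)    # toy third high-frequency details
fused = fuse(ms_up, detail3)
```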
Description
Remote sensing image fusion method combining ratio transformation and distribution transformation

Technical Field

The invention belongs to the technical field of digital image processing, and particularly relates to a remote sensing image fusion method combining ratio transformation and distribution transformation.

Background

With the vigorous development of aerospace technology in China, an increasing number of satellites carrying diverse sensors have been launched, and remote sensing images play an increasingly important role in earth observation. Limited by satellite payload capacity, for the panchromatic and multispectral images acquired by the same satellite, the spatial resolution of the panchromatic image is higher but its spectral resolution is lower (typically only one band), while the spatial resolution of the multispectral image is lower but its spectral resolution is higher (typically four or more bands). In practical applications of remote sensing imagery, the multispectral image and the panchromatic image are seldom used directly on their own; a fusion strategy is generally adopted to fuse the two images into a multispectral image with high spatial resolution. Such a high-spatial-resolution multispectral image combines the spectral resolution of the multispectral image with the spatial resolution of the panchromatic image, so that fine ground objects can be clearly identified, which is more beneficial to environmental monitoring and disaster prevention.
Remote sensing image fusion is an important research direction in multi-source remote sensing data processing. It draws on intersecting disciplines such as sensor technology, signal processing, computer applications, and image processing, is widely applied in fields such as urban planning, geographic surveying, vegetation and agricultural assessment, military defense, and environmental pollution monitoring, and therefore has important practical significance for the construction and development of China's remote sensing industry. Existing remote sensing image fusion algorithms are numerous, and although they can meet fusion requirements to a certain extent, each has its own shortcomings; the main problem is that the fused images still suffer from distortion. Image distortion falls into spectral distortion and detail distortion: spectral distortion appears in component-substitution methods and multi-resolution-analysis methods, while detail distortion appears in deep-learning methods. The presence of distortion means the fused image cannot be used as directly as the original images, and the deviation of the fused image must be accounted for, which limits its application value. Therefore, how to solve the spectral distortion and detail distortion of existing remote sensing image fusion algorithms has become a key problem of current research.
Disclosure of the Invention

In view of the above problems, the invention provides a remote sensing image fusion method combining ratio transformation and distribution transformation that addresses at least some of them. The method generates a low-resolution panchromatic image through a deep neural network and performs a ratio operation with the panchromatic image to generate the high-frequency details missing from the multispectral image, thereby avoiding the detail distortion caused by an excessive gray-level difference between the low-resolution image and the panchromatic image. A gain algorithm then converts these high-frequency details into spectral gain factors conforming to the distribution of the multispectral image (i.e., high-frequency details distributed identically to the up-sampled multispectral image), which are injected into the multispectral image to generate the fused image. This largely preserves the spectral information of the multispectral image and finally yields a better high-resolution multispectral fused image. An embodiment of the invention provides a remote sensing image fusion method combining ratio transformation and distribution transformation, comprising the following steps: 1. A remote sensing image fusion method combining ratio transformation and distribution transformation, characterized by comprising the following steps: S1, performing mean filtering on a panchromatic image and an up-sampled multispectral image respectively to obtain their corresponding high-frequency components; S2, obtaining the high-frequency details missing from the multispectral image based on the panchromatic image, the up-sampled multispectral image, the panchromatic image high-frequency components, and the up-sampled multispectral image high-frequency components.
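The whole pipeline described above (S1-S6) can be sketched end to end as follows. This is an illustrative sketch, not the patent's implementation: the convolution networks of S21-S22 and S4 are replaced by stubs (a mean-filtered PAN stands in for the low-resolution panchromatic image, and the affine parameters β and γ are fixed to 1), and all function names are assumptions.

```python
import numpy as np

def mean_filter(img, k=3):
    """k x k mean filter via shifted views of the edge-padded image (S1)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    h, w = img.shape
    return np.stack([p[y:y + h, x:x + w]
                     for y in range(k) for x in range(k)]).mean(axis=0)

def fuse(pan, ms_up, beta=1.0, gamma=1.0, eps=1e-6):
    pan_low = mean_filter(pan)                        # stub for the S21-S23 network output
    detail1 = pan / (pan_low + eps)                   # S24: ratio transformation
    d2 = (detail1 - detail1.mean()) / detail1.std()   # S3: standard normalization
    fused = np.empty_like(ms_up)
    for c in range(ms_up.shape[0]):                   # S5-S6, per channel
        mu_c, sigma_c = ms_up[c].mean(), ms_up[c].std()
        d3 = gamma * sigma_c * d2 + beta * mu_c       # distribution transformation
        fused[c] = ms_up[c] + d3                      # additive injection (assumed)
    return fused

rng = np.random.default_rng(4)
pan = rng.uniform(0.2, 1.0, size=(16, 16))            # toy panchromatic image
ms_up = rng.uniform(0.2, 1.0, size=(4, 16, 16))       # toy up-sampled MS image
fused = fuse(pan, ms_up)
```

In the patent, the low-resolution panchromatic image and the affine parameters are learned, which is what lets the method adapt the gray-level relationship between PAN and MS rather than relying on a fixed filter as this sketch does.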