
CN-121999020-A - Multi-mode remote sensing image registration method and system integrating mutual information and intensity difference

CN121999020A

Abstract

The invention relates to the technical field of image registration and provides a multi-mode remote sensing image registration method and system that fuse mutual information and intensity difference. The method comprises the following steps: obtaining multi-mode remote sensing images to be registered, and constructing an image scale space weighted by mutual-information-fused intensity difference; detecting and positioning feature points in the image scale space using the intensity difference combined with the multi-scale structure tensor response as an improved consistency measure; calculating gradient location-orientation histograms in the neighborhood of the feature points to generate feature descriptors; performing bidirectional matching on the multi-mode remote sensing images to be registered based on the feature descriptors to obtain initial matching pairs; and performing outlier rejection on the initial matching pairs and estimating a spatial transformation model between the images in combination with geometric constraints to complete image registration. The proposed method is superior to existing mainstream methods in the number of extracted feature points, correct matching rate, and registration accuracy.

Inventors

  • XIAO GENFU
  • LI YUEHUI
  • LIU HUAN
  • OUYANG CHUNJUAN
  • CHEN CHENG
  • LAN XINGXING

Assignees

  • Jinggangshan University (井冈山大学)

Dates

Publication Date
2026-05-08
Application Date
2026-01-22

Claims (10)

  1. A multi-mode remote sensing image registration method fusing mutual information and intensity difference, characterized by comprising the following steps: acquiring multi-mode remote sensing images to be registered, and constructing an image scale space weighted by mutual-information-fused intensity difference; detecting and positioning feature points in the image scale space using the intensity difference combined with the multi-scale structure tensor response as an improved consistency measure; calculating gradient location-orientation histograms in the neighborhood of the feature points to generate feature descriptors; performing bidirectional matching on the multi-mode remote sensing images to be registered based on the feature descriptors to obtain initial matching pairs; and performing outlier rejection on the initial matching pairs and estimating a spatial transformation model between the images in combination with geometric constraints to complete image registration.
  2. The multi-mode remote sensing image registration method fusing mutual information and intensity difference according to claim 1, wherein the step of constructing an image scale space weighted by mutual-information-fused intensity difference specifically comprises: performing initial Gaussian smoothing of the input image I by formula (1) to obtain the first pyramid layer L_1, wherein G_σ0 is a Gaussian convolution kernel with standard deviation σ0 and * denotes the convolution operation; for the k-th pyramid layer L_k, constructing it from the previous-layer image L_{k-1} by processing each pixel p with the filter shown in formula (2), the filtered output of p finally being calculated by formula (3) as a weighted average of all pixels in its neighborhood Ω_p, wherein the layer number k is a parameter related to the statistical scale, the filter is a multi-scale mutual-information joint intensity-difference weighting filter with a spatial scale parameter, and the statistical scale parameter related to the pyramid layer number k is defined as shown in formulas (4) and (5), with an interlayer scale factor of the scale space, a given initial value, and K being the number of layers; for any pixel q within a local window Ω_p centered on pixel p, its final weight w(p, q) is composed of three parts, as shown in formula (6): a spatial proximity weight w_s, an intensity similarity weight w_i, and a mutual information adjustment weight w_m.
  3. The multi-mode remote sensing image registration method fusing mutual information and intensity difference according to claim 2, wherein the spatial proximity weight w_s is calculated by formula (7), in which q is a neighborhood pixel and p is the center pixel.
  4. The multi-mode remote sensing image registration method fusing mutual information and intensity difference according to claim 3, wherein the intensity similarity weight w_i is calculated by formula (8) from the variance of the intensity within the window Ω_p and a blur adjustment parameter related to the pyramid layer number k, whose value at the first layer is given and whose values at the other layers are determined by formula (9) through an adjustment factor.
  5. The multi-mode remote sensing image registration method fusing mutual information and intensity difference according to claim 4, wherein the method of calculating the mutual information adjustment weight w_m comprises: first calculating the mutual information between the center pixel p and its neighborhood window Ω_p; then calculating a normalized intensity difference d by formula (10); and then dynamically adjusting the weight through a piecewise-function strategy calculated by formula (11), using a smoothing adjustment factor; the mutual information is given by formula (12), in which a smoothing adjustment parameter related to the pyramid layer number k takes a given value at the first layer and values at the other layers determined by formula (13) through an adjustment factor.
  6. The multi-mode remote sensing image registration method fusing mutual information and intensity difference according to claim 5, wherein the step of feature point detection and localization in the image scale space using the intensity difference combined with the multi-scale structure tensor response as an improved consistency metric specifically comprises: for each layer image in the image scale space, computing the consistency metric of the intensity-difference multi-scale structure tensor response by first calculating the second-order gradients of the image via low-pass filtering and the Sobel operator according to formula (14); on this basis, constructing a structure tensor H adapted to the current scale as described by formula (15), using the scale parameter of the current layer and the horizontal and vertical image coordinates; introducing the intensity difference calculated by formula (16); then obtaining the I_STM response by multiplying the minimum eigenvalue of the structure tensor by the intensity difference consistency metric, calculated by formula (17), wherein H_11, H_12, H_21, and H_22 denote the values at the (1, 1), (1, 2), (2, 1), and (2, 2) positions of the matrix H; then detecting local extreme points in each scale layer within the scale space formed by the I_STM response, and marking a pixel as a candidate feature point if its response value exceeds a set threshold ds and is an extremum within its 3 × 3 neighborhood.
  7. The multi-mode remote sensing image registration method fusing mutual information and intensity difference according to claim 1, wherein the feature descriptors are GLOH descriptors.
  8. The multi-mode remote sensing image registration method fusing mutual information and intensity difference according to claim 1, wherein the step of performing bidirectional matching on the multi-mode remote sensing images to be registered based on the feature descriptors to obtain initial matching pairs specifically comprises: performing a bidirectional nearest-neighbor search according to the cosine distances between the feature descriptors of the multi-mode remote sensing images to be registered to obtain initial matching pairs.
  9. The multi-mode remote sensing image registration method fusing mutual information and intensity difference according to claim 1, wherein outlier rejection is performed using a random sample consensus (RANSAC) algorithm.
  10. A multi-mode remote sensing image registration system fusing mutual information and intensity difference, for implementing the multi-mode remote sensing image registration method fusing mutual information and intensity difference according to any one of claims 1 to 9, comprising: a scale space construction module for acquiring the multi-mode remote sensing images to be registered and constructing an image scale space weighted by mutual-information-fused intensity difference; a feature point extraction module for detecting and positioning feature points in the image scale space using the intensity difference combined with the multi-scale structure tensor response as an improved consistency measure; a descriptor generation module for calculating gradient location-orientation histograms in the neighborhood of the feature points to generate feature descriptors; a bidirectional matching module for performing bidirectional matching on the multi-mode remote sensing images to be registered based on the feature descriptors to obtain initial matching pairs; and an image registration module for performing outlier rejection on the initial matching pairs and estimating a spatial transformation model between images in combination with geometric constraints to complete image registration.
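
The I_STM response of claim 6 builds on the minimum eigenvalue of a 2×2 structure tensor. Since the patent's formulas (14)–(17) are not reproduced in this record, the sketch below shows only the standard minimum-eigenvalue (Shi-Tomasi-style) part of such a response in NumPy; the gradient operator, the 3×3 smoothing window, and the omission of the intensity-difference factor are illustrative assumptions, not the patent's exact definitions.

```python
import numpy as np

def min_eig_response(img):
    """Shi-Tomasi-style corner response: the smaller eigenvalue of the
    2x2 structure tensor built from image gradients, smoothed over a
    3x3 window.  (The patent's I_STM response additionally multiplies
    in an intensity-difference consistency term, omitted here.)"""
    gy, gx = np.gradient(img.astype(float))   # gradients along rows / cols

    def box3(a):
        # 3x3 box filter with edge padding
        p = np.pad(a, 1, mode="edge")
        h, w = a.shape
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

    hxx, hyy, hxy = box3(gx * gx), box3(gy * gy), box3(gx * gy)
    # minimum eigenvalue of [[hxx, hxy], [hxy, hyy]] at every pixel
    tr = hxx + hyy
    det = hxx * hyy - hxy ** 2
    return tr / 2 - np.sqrt(np.maximum((tr / 2) ** 2 - det, 0.0))

img = np.zeros((16, 16))
img[8:, 8:] = 1.0                      # a single corner near (8, 8)
resp = min_eig_response(img)
peak = np.unravel_index(resp.argmax(), resp.shape)
```

On this toy image the response peaks at the corner and stays near zero along plain edges, which is why thresholding it (plus the 3×3 extremum test of claim 6) yields well-localized candidate feature points.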

Description

Multi-mode remote sensing image registration method and system fusing mutual information and intensity difference

Technical Field

The invention belongs to the technical field of image registration, and particularly relates to a multi-mode remote sensing image registration method and system fusing mutual information and intensity difference.

Background

Multi-mode remote sensing image registration is a key preprocessing step in remote sensing information fusion and analysis. It aims to geometrically align remote sensing images acquired by different sensors, at different time phases, or under different imaging conditions, so as to support advanced applications such as change detection, target identification, and data fusion. However, due to differences in imaging mechanisms, multi-modal images often exhibit significant nonlinear radiation differences, inconsistent noise patterns, and ambiguous structural expressions. Traditional methods based on gray-level statistics or single features struggle to maintain stable performance, which severely restricts the effective utilization of multi-modal remote sensing data.

Current remote sensing image registration methods fall into two major categories: gray-information-based methods and feature-based methods. Gray-information-based methods (such as the mutual information method and the phase correlation method) match directly using pixel intensity statistics. They offer some robustness to linear radiation changes between modalities, but tend to fail under complex nonlinear differences, are computationally expensive, and are sensitive to geometric deformation.
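
As context for the gray-information-based category above, mutual information between two images can be estimated from their joint intensity histogram. The following is a minimal NumPy sketch; the bin count and the synthetic test images are arbitrary choices for illustration, not values from the patent.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equal-sized grayscale images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint probability
    px = pxy.sum(axis=1)                         # marginal of img_a
    py = pxy.sum(axis=0)                         # marginal of img_b
    nz = pxy > 0                                 # skip log(0) terms
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(64, 64)).astype(float)
b = a + rng.normal(0.0, 5.0, size=a.shape)       # same scene, noisy "modality"
```

An identical pair maximizes the score and noise lowers it, which illustrates both why the measure works for linearly related modalities and why it degrades when intensity relations become complex and nonlinear.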
Feature-based methods (such as SIFT, SURF, and their variants) extract local point, line, and region features from images and therefore adapt better to geometric changes. However, their feature detection and descriptor designs mostly target single-modality images or images with strong radiation consistency; in multi-modal scenes, unstable features and low repeatability often cause matching failure, especially in images with inconsistent structural responses or severe noise. A registration method is therefore needed that suppresses modality-specific noise and enhances the repeatability of cross-modal structural features, so as to improve the accuracy, robustness, and generalization capability of multi-modal remote sensing image registration.

Disclosure of Invention

The invention aims to provide a multi-mode remote sensing image registration method fusing mutual information and intensity difference, so as to solve the above technical problems.
The invention is realized as a multi-mode remote sensing image registration method fusing mutual information and intensity difference, comprising the following steps: acquiring multi-mode remote sensing images to be registered, and constructing an image scale space weighted by mutual-information-fused intensity difference; detecting and positioning feature points in the image scale space using the intensity difference combined with the multi-scale structure tensor response as an improved consistency measure; calculating gradient location-orientation histograms in the neighborhood of the feature points to generate feature descriptors; performing bidirectional matching on the multi-mode remote sensing images to be registered based on the feature descriptors to obtain initial matching pairs; and performing outlier rejection on the initial matching pairs and estimating a spatial transformation model between the images in combination with geometric constraints to complete image registration.
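
The bidirectional matching step above (detailed in claim 8 as a bidirectional nearest-neighbor search under cosine distance) can be sketched as follows; the descriptor dimensions and the synthetic descriptors are illustrative assumptions.

```python
import numpy as np

def bidirectional_match(desc_a, desc_b):
    """Bidirectional nearest-neighbour matching under cosine distance:
    keep the pair (i, j) only if j is i's best match in desc_b AND
    i is j's best match back in desc_a."""
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T                    # cosine similarity matrix
    fwd = sim.argmax(axis=1)         # best b-index for each a-descriptor
    bwd = sim.argmax(axis=0)         # best a-index for each b-descriptor
    return [(i, int(j)) for i, j in enumerate(fwd) if bwd[j] == i]

rng = np.random.default_rng(1)
desc_a = rng.normal(size=(5, 8))                          # 5 descriptors
desc_b = desc_a[[2, 0, 1]] + rng.normal(scale=1e-3, size=(3, 8))
matches = bidirectional_match(desc_a, desc_b)
```

The mutual-consistency check discards the two unmatched descriptors of `desc_a` automatically, which is the point of matching in both directions before any RANSAC-style outlier rejection.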
Further, the step of constructing an image scale space weighted by mutual-information-fused intensity difference specifically includes: performing initial Gaussian smoothing of the input image I by formula (1) to obtain the first pyramid layer L_1, wherein G_σ0 is a Gaussian convolution kernel with standard deviation σ0 and * denotes the convolution operation; for the k-th pyramid layer L_k, constructing it from the previous-layer image L_{k-1} by processing each pixel p with the filter shown in formula (2), the filtered output of p finally being calculated by formula (3) as a weighted average of all pixels in its neighborhood Ω_p; the layer number k is a parameter related to the statistical scale, the filter is a multi-scale mutual-information joint intensity-difference weighting filter with a spatial scale parameter, and the statistical scale parameter related to the pyramid layer number k is defined as shown in formulas (4) and (5), with an interlayer scale factor of the scale space, a given initial value, and K being the number of layers; for any pixel q within a local window Ω_p centered on pixel p, its final weight w(p, q) is composed of three parts, as shown in formula (6).
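
The weighted-average filtering described above resembles a bilateral-style filter whose per-pixel weight is a product of terms. The sketch below keeps only the spatial-proximity and intensity-similarity factors; the patent's third factor (the mutual-information adjustment weight) is omitted, and all parameter values are illustrative assumptions rather than the patent's settings.

```python
import numpy as np

def weighted_smooth(img, radius=2, sigma_s=1.5, sigma_r=25.0):
    """One bilateral-style filtering pass: each output pixel is the
    weighted average of its (2r+1)x(2r+1) neighbourhood, the weight
    being the product of a spatial-proximity term and an
    intensity-similarity term.  (The patent's third factor, the
    mutual-information adjustment weight, is omitted here.)"""
    h, w = img.shape
    pad = np.pad(img.astype(float), radius, mode="edge")
    out = np.empty((h, w))
    # spatial weights are identical for every window, so precompute them
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            center = pad[y + radius, x + radius]
            w_i = np.exp(-(win - center) ** 2 / (2 * sigma_r ** 2))
            wgt = w_s * w_i
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out

step = np.zeros((8, 8))
step[:, 4:] = 100.0                                 # clean intensity edge
noisy = step + np.random.default_rng(2).normal(0.0, 5.0, step.shape)
smoothed = weighted_smooth(noisy)
```

Because the intensity term collapses across the 100-level edge, the edge survives while the noise is averaged away, which is the edge-preserving behaviour a scale-space construction of this kind relies on.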