
CN-122023120-A - Image stitching method and system based on double-layer stitching

CN122023120A

Abstract

The invention discloses an image stitching method and system based on double-layer stitching, belonging to the technical field of image processing. The method comprises the following steps: 1, estimating the overlap between images using the IMU and GPS data of unmanned aerial vehicle aerial photography, removing redundant frames, and clustering the image set into a plurality of clusters; 2, performing first-layer stitching: within each cluster, taking the frame with the most stable attitude as the reference, determining the stitching order from overlap and distance, and sequentially registering and fusing to obtain the intra-cluster stitching result; 3, performing second-layer stitching: taking the center attitude of each cluster as the reference to determine the stitching order, registering, processing the non-overlapping region with a similarity transformation and the overlapping region with grid-based local transformations, and completing the final fusion. The IMU- and GPS-assisted preprocessing reduces interference from redundant information; the double-layer stitching strategy relieves error accumulation through frame-by-frame stitching within clusters, combines global and local transformations between clusters, and improves stitching stability.

Inventors

  • Bu Yanling
  • Han Saibing

Assignees

  • Nanjing University of Aeronautics and Astronautics (南京航空航天大学)

Dates

Publication Date
2026-05-12
Application Date
2026-04-08

Claims (10)

  1. An image stitching method based on double-layer stitching, characterized by comprising the following steps: Step 1, determining the overlap degree between images according to the IMU data and GPS data provided during unmanned aerial vehicle aerial photography, removing redundant frames by using the overlap degree, clustering the image set after removing the redundant frames, and dividing the image set into a plurality of clusters; Step 2, performing first-layer stitching within the clusters divided in step 1: first calculating the attitude stability from the IMU data of each frame image in a cluster, selecting the frame image with the most stable attitude as the cluster center and taking it as the reference frame, determining the stitching order based on the overlap degree and the distance between the projection centers of two images, performing image registration, removing frames whose registration fails and re-clustering them, and finally stitching and fusing the images in order to obtain the first-layer stitching result; Step 3, performing second-layer stitching according to the first-layer stitching result obtained in step 2: first selecting the most stable attitude among the attitudes of the cluster centers of all clusters and taking the corresponding first-layer stitching result as the reference, determining the stitching order, registering, processing the non-overlapping region with a global transformation computed from a similarity transformation and the overlapping region with grid-based local transformations, and finally obtaining the second-layer stitching result through image fusion to complete the image stitching.
  2. The image stitching method based on double-layer stitching according to claim 1, wherein step 1 specifically comprises the following steps: Step 11, calculating the ground sampling distance; Step 12, calculating the overlap degree between images based on the ground sampling distance; Step 13, when the overlap degree between two images in the image set exceeds a set threshold, provisionally marking them as redundant frames, and if the projection area of a provisionally marked redundant frame is entirely contained in the remaining frames, finally judging it to be a redundant frame and deleting it; Step 14, based on the image set after the redundant frames are deleted, dividing the images into a plurality of clusters by the K-Means clustering algorithm according to the overlap relations between the images.
  3. The image stitching method based on double-layer stitching according to claim 2, wherein: in step 11, the longitude and latitude center coordinates of two adjacent images are obtained from the GPS data and converted into plane coordinates (x1, y1) and (x2, y2) in a local plane coordinate system, and the ground distance between the two image centers is calculated from the difference of the plane coordinates: d_ground = sqrt((x1 - x2)^2 + (y1 - y2)^2); feature points are then extracted from the two images and matched, and the affine transformation matrix between the two images is determined from the matched point pairs: A = [[a11, a12, tx], [a21, a22, ty], [0, 0, 1]], wherein the elements a11, a12, a21, a22 of the first two rows and first two columns are jointly responsible for the scaling and rotation transformation and (tx, ty) is the translation component, thereby obtaining the translation distance of the two images in pixel space: d_pixel = sqrt(tx^2 + ty^2); this distance reflects the average shift between adjacent images in pixel units; multiple pairs of adjacent images are selected to compute d_ground and d_pixel, yielding a series of (d_ground, d_pixel) data points, from which the ground sampling distance is calculated according to the formula GSD = d_ground / d_pixel, wherein GSD is the ground sampling distance; in step 12, the image size is mapped to the ground coordinate system using the ground sampling distance, and a rotation matrix R is introduced to correct the image orientation, yielding a projection polygon; R is calculated as follows: first the yaw angle ψ, pitch angle θ and roll angle φ are obtained from the IMU data, and the single-axis rotation matrices are expressed as R_z(ψ) = [[cos ψ, -sin ψ, 0], [sin ψ, cos ψ, 0], [0, 0, 1]], R_y(θ) = [[cos θ, 0, sin θ], [0, 1, 0], [-sin θ, 0, cos θ]] and R_x(φ) = [[1, 0, 0], [0, cos φ, -sin φ], [0, sin φ, cos φ]], combined into the rotation matrix R = R_z(ψ) · R_y(θ) · R_x(φ); for any two images I_i and I_j, the overlap degree O_ij is calculated as O_ij = Area(P_i ∩ P_j) / min(Area(P_i), Area(P_j)), wherein P_i is the projection polygon generated from image I_i, P_j is the projection polygon generated from image I_j, Area(·) denotes the projected area, ∩ denotes the intersection of two polygons, P_i ∩ P_j is the overlapping region of the two projection polygons, and Area(P_i ∩ P_j) is the area of that overlapping region; in step 13, an overlap threshold T_o and an excessive-overlap threshold T_r are set; when O_ij > T_o, the two images are considered to have an effective overlapping area and are treated as a stitching candidate pair; a coverage mask is constructed for each frame image I_k with an effective overlapping area, using its ground projection polygon P_k to generate a corresponding binary coverage mask M_k; the global coverage area is then calculated, the joint coverage area of all images being represented as C = ∪_k M_k; for any image pair satisfying O_ij > T_r, if after removing the image to be verified I_j the coverage area of all remaining images still equals the original coverage area, the image I_j is removed; in step 14, the overlap relations between the images are converted into an adjacency structure describing an undirected graph whose nodes correspond to the image frames and whose edges represent the spatial overlap relations used for registration, thereby forming spatially consistent image clusters; with the number of images N and the number of clusters K, K-Means clustering is performed on the image center coordinates (x_i, y_i), yielding K structurally consistent image clusters {C_1, ..., C_K}.
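The overlap computation of step 12 lends itself to a compact sketch. The following pure-Python illustration builds the combined rotation matrix from yaw, pitch and roll, and measures the overlap degree of two ground-projection polygons as intersection area over the smaller polygon's area. Since the original formulas were lost in extraction, the Rz·Ry·Rx composition order and the min-area normalisation are assumptions, and Sutherland-Hodgman clipping (which requires a convex clip polygon) stands in for whatever polygon-intersection routine the method actually uses.

```python
import math

def rotation_matrix(yaw, pitch, roll):
    """Combined rotation R = Rz(yaw) @ Ry(pitch) @ Rx(roll) (radians).
    The composition order is an assumption; the claim only states that
    the three single-axis rotations are combined."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    Rz = [[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]]
    Ry = [[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]]
    Rx = [[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]]
    mul = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(3))
                         for j in range(3)] for i in range(3)]
    return mul(Rz, mul(Ry, Rx))

def polygon_area(poly):
    """Shoelace area of a polygon given as (x, y) vertex tuples."""
    n = len(poly)
    return abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                   - poly[(i + 1) % n][0] * poly[i][1]
                   for i in range(n))) / 2.0

def clip_polygon(subject, clip):
    """Sutherland-Hodgman clipping of `subject` by a convex,
    counter-clockwise `clip` polygon; returns their intersection."""
    def inside(p, a, b):  # p lies on the left of directed edge a->b
        return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) >= 0
    def cross_pt(p1, p2, a, b):  # segment p1p2 meets the line through a, b
        d1 = (p2[0]-p1[0], p2[1]-p1[1])
        d2 = (b[0]-a[0], b[1]-a[1])
        t = ((a[0]-p1[0])*d2[1] - (a[1]-p1[1])*d2[0]) \
            / (d1[0]*d2[1] - d1[1]*d2[0])
        return (p1[0] + t*d1[0], p1[1] + t*d1[1])
    out = list(subject)
    for i in range(len(clip)):
        a, b = clip[i], clip[(i + 1) % len(clip)]
        src, out = out, []
        if not src:
            break
        s = src[-1]
        for p in src:
            if inside(p, a, b):
                if not inside(s, a, b):
                    out.append(cross_pt(s, p, a, b))
                out.append(p)
            elif inside(s, a, b):
                out.append(cross_pt(s, p, a, b))
            s = p
    return out

def overlap_degree(poly_i, poly_j):
    """O_ij = Area(P_i ∩ P_j) / min(Area(P_i), Area(P_j))."""
    inter = clip_polygon(poly_i, poly_j)
    if len(inter) < 3:
        return 0.0
    return polygon_area(inter) / min(polygon_area(poly_i),
                                     polygon_area(poly_j))
```

For strongly oblique projection polygons a general polygon-clipping library (e.g. Shapely's `Polygon.intersection`) would replace `clip_polygon`.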
  4. The image stitching method based on double-layer stitching according to claim 3, wherein step 2 specifically comprises the following steps: Step 21, calculating the attitude stability of each image from the pitch angle and roll angle of the camera at shooting time, selecting the image with the most stable attitude in each cluster as the cluster center, and taking the cluster center as the reference frame; Step 22, calculating the stitching order of the images in each cluster according to the distance between the projection centers of two images and the overlap degree; within each cluster, registering adjacent images according to the stitching order, removing mismatched points by random sample consensus (RANSAC), and then calculating the transformation matrix that transforms each image to the reference frame; a frame whose registration fails is removed from the cluster and added to other clusters for registration until it succeeds or no cluster matches; Step 23, performing image fusion to obtain the first-layer stitching result.
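The intra-cluster ordering rule of step 22 (frames overlapping the reference first, ranked by overlap; non-overlapping frames appended by distance) can be sketched as follows. Function and argument names are illustrative, not from the patent.

```python
def stitching_order(ref, others, overlap, dist):
    """Greedy intra-cluster stitching order: the reference frame first,
    then images ranked by descending overlap degree with the reference;
    images with no overlap are appended by ascending distance between
    their projection centre and the reference's."""
    with_overlap = sorted((i for i in others if overlap(ref, i) > 0),
                          key=lambda i: overlap(ref, i), reverse=True)
    without = sorted((i for i in others if overlap(ref, i) <= 0),
                     key=lambda i: dist(ref, i))
    return [ref] + with_overlap + without
```

The `overlap` and `dist` callables would be backed by the projection-polygon overlap and the plane-coordinate distance of claim 3.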
  5. The image stitching method based on double-layer stitching according to claim 4, wherein: in step 21, the attitude stability index of each frame image I_i is defined as S_i = |θ_i| + |φ_i|, wherein θ_i and φ_i respectively represent the pitch angle and roll angle of the frame image I_i; the image corresponding to the minimum value of the attitude stability index is selected as the most stable image in each cluster; in step 22, the stitching order of the images in each cluster is determined as follows: first, taking the reference frame as the first frame, the image with the highest overlap degree with the reference frame is searched as the second frame, the image with the second-highest overlap degree is taken as the third frame, and so on; when an image does not overlap the reference frame, distance is used for screening, and the image closest to the reference frame is stitched first, thereby finally determining the stitching order; the geometric registration between adjacent images is completed with the SURF method, yielding pairwise registration transformation matrices, which are then transformed into homographies to the reference frame, specifically as follows: with image I_1 as the reference frame, registration yields the transformation matrix H_21 between image I_2 and image I_1 and the transformation matrix H_32 between image I_3 and image I_2; the image obtained by transforming image I_2 to image I_1 is denoted I_2', so that I_2' = H_21 · I_2; correspondingly, the image obtained by transforming image I_3 to image I_2 is represented as H_32 · I_3; the image obtained by transforming image I_3 to image I_1 is then expressed as I_3' = H_31 · I_3, with H_31 = H_21 · H_32; the other images in the cluster are stitched to the reference frame according to this formula; if an image whose registration fails, namely a failure frame, is encountered during registration, the failure frame is removed from the cluster, and the image with the highest overlap degree with the failure frame belonging to another cluster is searched for registration; if the registration succeeds, the failure frame is added to the corresponding cluster for stitching, and if it still fails, the failure frame is removed directly.
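The chaining of pairwise registration matrices into per-image transforms to the reference frame, as described in claim 5, amounts to accumulating matrix products along the stitching order; a minimal sketch:

```python
def matmul3(A, B):
    """Product of two 3x3 homogeneous transformation matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def chain_to_reference(pairwise):
    """Given pairwise registration transforms [H_21, H_32, H_43, ...]
    along the stitching order (H_21 maps image 2 onto reference frame 1,
    H_32 maps image 3 onto image 2, ...), return each image's transform
    to the reference frame: H_31 = H_21 @ H_32, H_41 = H_31 @ H_43, ...
    """
    acc = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    to_ref = []
    for H in pairwise:
        acc = matmul3(acc, H)
        to_ref.append(acc)
    return to_ref
```

With pure translations the accumulation is easy to check by hand: chaining a shift of (1, 0) and then (0, 2) maps the third image onto the reference by (1, 2).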
  6. The image stitching method based on double-layer stitching according to claim 5, wherein step 3 specifically comprises the following steps: Step 31, taking the attitude stability index of the cluster center of each cluster in the first-layer stitching result as the stability index of that stitching result, and selecting the first-layer stitching result with the most stable attitude as the center of the second-layer stitching; Step 32, registering pairwise according to the second-layer stitching order, determining the overlapping and non-overlapping regions, applying a global similarity transformation in the non-overlapping region, dividing the overlapping region into grids, and calculating the local transformation matrix corresponding to each grid; Step 33, performing weighted fusion of the images according to the distance of each pixel from the overlap-region boundary to obtain the second-layer stitching result and complete the image stitching.
  7. The image stitching method based on double-layer stitching according to claim 6, wherein: in step 31, the set of images contained in cluster C_k is {I_1, ..., I_{n_k}}, the plane coordinates of image I_i are (x_i, y_i), and the center position of the cluster (x̄_k, ȳ_k) is obtained by averaging the center coordinates of all images in the cluster; in addition, the attitude of the stable frame in each cluster is taken as the attitude of the cluster, the most stable cluster among all clusters is selected as the reference image for inter-cluster stitching, and the Euclidean distance between its spatial center coordinates and those of the other clusters serves as the basis of the stitching order, yielding the cluster sequence to be stitched.
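The inter-cluster ordering of step 31 can be sketched as follows, assuming the cluster centre is the mean of its members' plane coordinates and the reference is the cluster with the smallest (most stable) stability index; names are illustrative.

```python
import math

def cluster_center(coords):
    """Cluster centre = mean of the member images' plane coordinates."""
    n = len(coords)
    return (sum(x for x, _ in coords) / n,
            sum(y for _, y in coords) / n)

def cluster_stitch_order(centers, stability):
    """Inter-cluster stitching order: the cluster with the smallest
    attitude-stability index is the reference; the remaining clusters
    follow by ascending Euclidean distance of their centres to it."""
    ref = min(range(len(centers)), key=lambda k: stability[k])
    rest = sorted((k for k in range(len(centers)) if k != ref),
                  key=lambda k: math.dist(centers[ref], centers[k]))
    return [ref] + rest
```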
  8. The image stitching method based on double-layer stitching according to claim 7, wherein: in step 32, the inter-cluster stitching of the cluster sequence stitches the first two results according to the stitching order, then stitches the obtained result with the third, and so on, to obtain the final stitching result; the two images are first registered, and the global similarity transformation matrix of the images is calculated from the registration result so as to determine the overlapping and non-overlapping regions; in the non-overlapping region, for any two adjacent intra-cluster stitching results S_a and S_b, assume that K pairs of reliable matching points are obtained, with two-dimensional coordinates {p_k} and {q_k} in the two results respectively; the coordinate means of the two groups of feature points are calculated respectively, and both groups are de-centred about their respective centroids, the de-centred points being used for the subsequent similarity estimation; on this basis, the cross-covariance matrix from S_a to S_b is constructed as M = Σ_{k=1}^{K} p̃_k q̃_k^T, wherein M is the cross-covariance matrix, K represents the number of matching point pairs, {p̃_k} represents the set of feature points of the intra-cluster stitching result S_a de-centred about its centroid, and {q̃_k} represents the set of feature points of the intra-cluster stitching result S_b de-centred about its centroid, the centroid referring to the average coordinate point of the group of points; singular value decomposition of the cross-covariance matrix M yields the optimal rotation matrix R from S_a to S_b and the translation vector t, and the scale factor s is further solved, giving the global similarity transformation matrix T_g = [[s·R, t], [0, 1]]; this matrix provides a direct mapping from the pixel coordinate system of S_a to the pixel coordinate system of S_b; after obtaining the global similarity transformation matrix T_g from S_a to S_b, S_a is mapped by this transformation into the coordinate system of S_b, achieving a preliminary alignment of the two cluster-level stitching results on the global structure; the transformed image is denoted S_a'; in the overlapping region, the image is first divided into m × n rectangular grid cells, and the homography matrix corresponding to each grid is solved by minimizing the objective function E(h) = Σ_{k=1}^{K} w_k ||a_k h||^2 subject to the constraint ||h|| = 1, wherein w_k represents the weight of each feature point, K is the number of feature point pairs, a_k is the coefficient matrix formed from the k-th matching point pair, and the constraint ||h|| = 1 ensures that h is meaningful; converting h back to matrix form yields the local transformation matrix H_g of each grid.
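The global similarity estimation of step 32 can be sketched without external dependencies. The claim derives the rotation from an SVD of the cross-covariance matrix of the de-centred point sets; for the 2-D case the same least-squares-optimal rotation has a closed form in the covariance sums, which this sketch uses instead to stay dependency-free. Names and the point format are illustrative.

```python
import math

def estimate_similarity(src, dst):
    """Least-squares 2-D similarity mapping src -> dst, returned as a
    3x3 homogeneous matrix [[s*cos, -s*sin, tx], [s*sin, s*cos, ty],
    [0, 0, 1]].  The rotation below is the closed-form 2-D equivalent
    of the SVD-of-cross-covariance solution in the claim."""
    n = len(src)
    cxs = sum(x for x, _ in src) / n
    cys = sum(y for _, y in src) / n
    cxd = sum(x for x, _ in dst) / n
    cyd = sum(y for _, y in dst) / n
    C = S = norm = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        px, py = xs - cxs, ys - cys   # de-centred source point
        qx, qy = xd - cxd, yd - cyd   # de-centred target point
        C += px * qx + py * qy        # "dot" covariance sum
        S += px * qy - py * qx        # "cross" covariance sum
        norm += px * px + py * py
    theta = math.atan2(S, C)          # optimal rotation angle
    s = math.hypot(C, S) / norm       # optimal scale factor
    ct, st = math.cos(theta), math.sin(theta)
    tx = cxd - s * (ct * cxs - st * cys)
    ty = cyd - s * (st * cxs + ct * cys)
    return [[s * ct, -s * st, tx],
            [s * st,  s * ct, ty],
            [0.0, 0.0, 1.0]]
```

Applying the recovered matrix to a source point should land on the matched destination point up to noise; with exact correspondences it recovers the transform exactly.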
  9. The image stitching method according to claim 8, wherein in step 23 and step 33 the image fusion of the overlapping region proceeds as follows: let the two images be I_a and I_b, let I_b' be the image obtained after transforming I_b, and compute the overlapping region of I_a and I_b', denoted Ω; for a point p of the overlapping region, the pixel values of p from I_a and I_b' are recorded as v_a(p) and v_b(p), and d_a(p) and d_b(p) are recorded as the shortest distances from point p to the overlap-region boundary in I_a and I_b' respectively; the fused pixel value at point p is then f(p) = (d_a(p) · v_a(p) + d_b(p) · v_b(p)) / (d_a(p) + d_b(p)); for pixels of non-overlapping areas, their pixel values are copied directly from the corresponding image.
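The weighted fusion of claim 9 reduces to a one-line blend per overlap pixel; a minimal sketch (names illustrative):

```python
def fuse_overlap(v_a, v_b, d_a, d_b):
    """Distance-weighted blend at an overlap pixel:
        f = (d_a * v_a + d_b * v_b) / (d_a + d_b)
    where d_a, d_b are the pixel's shortest distances to the
    overlap-region boundary inside each source image, so each image
    dominates deep in its own interior and the transition across the
    seam is smooth."""
    return (d_a * v_a + d_b * v_b) / (d_a + d_b)

# Across a 4-pixel overlap strip the blend moves monotonically from
# image A's value toward image B's as d_a shrinks and d_b grows:
strip = [fuse_overlap(100.0, 200.0, d_a, 5.0 - d_a)
         for d_a in (4.0, 3.0, 2.0, 1.0)]
```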
  10. An image stitching system based on double-layer stitching, characterized by comprising the following modules: a preprocessing module, used for determining the overlap degree between images according to the IMU data and GPS data provided during unmanned aerial vehicle aerial photography, removing redundant frames by using the overlap degree, clustering the image set after the redundant frames are removed, and dividing the image set into a plurality of clusters; a first-layer stitching module, used for calculating the attitude stability from the IMU data of each frame image in a cluster, selecting the frame with the most stable attitude as the cluster center and taking it as the reference frame, determining the stitching order based on the overlap degree and the distance between the projection centers of two images, registering the images, removing frames whose registration fails and re-clustering them, and finally stitching the images in order and fusing them to obtain the first-layer stitching result; a second-layer stitching module, used for taking the attitude of the cluster center of each cluster as the attitude stability of the corresponding stitching result, selecting the stitching result with the most stable attitude as the reference, determining the stitching order, registering, processing the non-overlapping region with a global transformation computed from a similarity transformation and the overlapping region with grid-based local transformations, and obtaining the final stitching result through image fusion; and an image fusion module, used for calculating, during each layer of stitching, the overlapping region from the transformation matrices of the first-layer and second-layer stitching, calculating the minimum bounding rectangle of the overlapping region, respectively calculating the shortest distance from each pixel of the overlapping region to the bounding rectangles of the two images, and performing weighted fusion.

Description

Image stitching method and system based on double-layer stitching

Technical Field

The invention relates to the technical field of image processing, in particular to an image stitching method and system based on double-layer stitching.

Background

In unmanned aerial vehicle image stitching scenarios, due to task planning or environmental interference, unmanned aerial vehicles often cannot maintain strictly continuous shooting; the same problem exists when multiple unmanned aerial vehicles shoot simultaneously. These factors lead to a lack of stable frame-to-frame associations between images, making it impossible to build an accurate alignment model that depends directly on inter-frame constraints. Meanwhile, discontinuous pose changes greatly increase the difficulty of initial alignment and subsequent registration. Existing methods generally rely on continuous viewing angles or stable flight paths, but often fail to achieve reliable geometric consistency in the face of multi-source shots and large pose shifts. Secondly, during aerial photography the same area is often covered repeatedly by many frames; multiple images easily present inconsistent local geometric structures in the overlapping area, and the continuous superposition of many images causes blurring. Finally, local errors and viewing-angle drift accumulate and propagate globally, resulting in stretching deformation of the overall structure. Moreover, if the images come from different heights, different shooting moments or even different unmanned aerial vehicle platforms, their color differences further reduce the fusion quality, and obvious stitching seams appear if left unprocessed.

Disclosure of Invention

Aiming at the above technical problems, the invention provides an image stitching method and system based on double-layer stitching.
The technical scheme adopted by the invention is as follows: an image stitching method based on double-layer stitching comprises the following steps: Step 1, determining the overlap degree between images according to the IMU data and GPS data provided during unmanned aerial vehicle aerial photography, removing redundant frames by using the overlap degree, clustering the image set after removing the redundant frames, and dividing the image set into a plurality of clusters; Step 2, performing first-layer stitching within the clusters divided in step 1: first calculating the attitude stability from the IMU data of each frame image in a cluster, selecting the frame image with the most stable attitude as the cluster center and taking it as the reference frame, determining the stitching order based on the overlap degree and the distance between the projection centers of two images, performing image registration, removing frames whose registration fails and re-clustering them, and finally stitching and fusing the images in order to obtain the first-layer stitching result; Step 3, performing second-layer stitching according to the first-layer stitching result obtained in step 2: first selecting the most stable attitude among the attitudes of the cluster centers of all clusters and taking the corresponding first-layer stitching result as the reference, determining the stitching order, registering, processing the non-overlapping region with a global transformation computed from a similarity transformation and the overlapping region with grid-based local transformations, and finally obtaining the second-layer stitching result through image fusion to complete the image stitching.
In addition, on the basis of the image stitching method based on double-layer stitching, the invention also provides a corresponding image stitching system based on double-layer stitching, which comprises the following modules: a preprocessing module, used for determining the overlap degree between images according to the IMU data and GPS data provided during unmanned aerial vehicle aerial photography, removing redundant frames by using the overlap degree, clustering the image set after the redundant frames are removed, and dividing the image set into a plurality of clusters; a first-layer stitching module, used for calculating the attitude stability from the IMU data of each frame image in a cluster, selecting the frame with the most stable attitude as the cluster center and taking it as the reference frame, determining the stitching order based on the overlap degree and the distance, registering the images, removing frames whose registration fails and returning them to the clustering again, and finally stitching the images in order and fusing them to obtain the first-layer stitching result; and a second-layer stitching module, used for taking the attitude of the cluster center of each cluster as the attitude stability of the corresponding stitching result, selecting the stitching result with the most stable attitude as the reference, determining the stitching order, registering, processing the non-overlapping region with a global transformation computed from a similarity transformation and the overlapping region with grid-based local transformations, and obtaining the final stitching result through image fusion.