
CN-122023119-A - Wide parallax image stitching method based on epipolar constraint and structure preservation

CN122023119A

Abstract

The invention discloses a wide parallax image stitching method based on epipolar constraint and structure preservation. First, feature extraction and matching are performed on the image pair to obtain initial feature point pairs and feature line pairs, and mismatched feature point pairs are eliminated. Next, the feature line pairs are used to generate candidate feature point pairs, which are screened for coplanarity via the epipolar constraint to construct a geometrically consistent feature matching set. This set is then used to estimate a global homography transformation matrix for pre-alignment; the image to be stitched is uniformly divided into a plurality of grid regions, which are optimized to compute a local homography transformation matrix for each region. Finally, a global similarity transformation matrix is estimated and combined with the local homography matrices to obtain the transformation matrix of each grid region; the image to be stitched is projected using these per-region transforms and then fused to generate a panoramic image. By combining global and local transformations, the method improves the accuracy of wide parallax image stitching.
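The coplanarity screening described in the abstract can be sketched in NumPy. The patent's exact formula is not legible in this text, so the sketch below assumes the common symmetric point-to-epipolar-line distance as the score; the fundamental matrix `F` and all function names are illustrative assumptions, not the patent's own definitions:

```python
import numpy as np

def epipolar_line(F, p):
    """Epipolar line of point p in the other image, as a homogeneous
    3-vector (a, b, c) of the line a*x + b*y + c = 0."""
    return F @ np.array([p[0], p[1], 1.0])

def point_line_distance(p, line):
    """Euclidean distance from 2-D point p to a homogeneous line."""
    a, b, c = line
    return abs(a * p[0] + b * p[1] + c) / np.hypot(a, b)

def coplanarity_score(F, p, p_prime):
    """Symmetric epipolar distance of a candidate pair (p, p').
    Small scores indicate the pair is consistent with the epipolar
    geometry; pairs below a threshold would be kept."""
    d1 = point_line_distance(p_prime, epipolar_line(F, p))      # p' vs. line of p
    d2 = point_line_distance(p, epipolar_line(F.T, p_prime))    # p  vs. line of p'
    return d1 + d2
```

For a pure horizontal-translation geometry (F maps a point to the horizontal line through its y-coordinate), a pair on the same scanline scores 0 and the score grows with vertical misalignment.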

Inventors

  • YANG CHEN
  • WANG LIKUI

Assignees

  • Hebei University of Technology (河北工业大学)

Dates

Publication Date
2026-05-12
Application Date
2026-02-03

Claims (3)

  1. A wide parallax image stitching method based on epipolar constraint and structure preservation, characterized by comprising the following steps:

     Step 1: extract and match features of the image to be stitched and the reference image to obtain initial feature point pairs and feature line pairs, screen the initial feature point pairs, and eliminate mismatched feature point pairs.

     Step 2: construct a geometrically consistent feature matching set. For any matched feature line pair l and l', extract the corresponding endpoints and midpoints of l and l' to form candidate feature point pairs, where L and L' are the feature line sets formed by all feature lines of the image to be stitched and the reference image, respectively. From L, select a feature line m coplanar with l and obtain the intersection point p of the projection lines of l and m; from L', select a feature line m' coplanar with l' and obtain the intersection point p' of the projection lines of l' and m'; the points p and p' form a candidate feature point pair. Traverse all feature line pairs to obtain all candidate feature point pairs, and compute the coplanarity score of each candidate pair according to formula (1):

     S_i = d(p_i, e_{p'_i}) + d(p'_i, e_{p_i})   (1)

     where S_i denotes the coplanarity of the i-th candidate feature point pair p_i and p'_i, d(·,·) denotes the Euclidean distance between a point and a line, e_{p_i} and e_{p'_i} are the epipolar lines of the feature points p_i and p'_i respectively, and p_i and p'_i are feature points in the image to be stitched and the reference image, respectively. Candidate feature point pairs whose coplanarity score is below the coplanarity score threshold are retained as coplanar matching feature point pairs, and these are combined with the feature point pairs and feature line pairs obtained in step 1 to obtain the geometrically consistent feature matching set.

     Step 3: estimate a global homography transformation matrix using the geometrically consistent feature matching set, and pre-align the image to be stitched with the reference image using it; uniformly divide the image to be stitched into a plurality of grid regions, optimize the grid regions with a multi-constraint energy function to obtain the optimized grid-region vertex coordinates, and compute the local homography transformation matrix of each grid region from those vertex coordinates. Estimate a global similarity transformation matrix using the feature points in the geometrically consistent feature matching set, and combine it with the local homography matrix of each grid region via formula (10) to obtain the transformation matrix of each grid region:

     H_k = μ·H_k^l + (1 − μ)·S   (10)

     where H_k and H_k^l are respectively the transformation matrix and the local homography transformation matrix of the k-th grid region, μ denotes the weight coefficient, and S is the global similarity transformation matrix. According to the transformation matrix of each grid region, project the image to be stitched region by region into the same coordinate system as the reference image to obtain the projected image to be stitched, and fuse the overlapping region of the projected image and the reference image to generate the panoramic image.
  2. The wide parallax image stitching method based on epipolar constraint and structure preservation according to claim 1, wherein the multi-constraint energy function E(V) is expressed by formulas (5)-(8), wherein V denotes the optimized grid-region vertex coordinate matrix; E_p is the feature point alignment term, E_l is the feature line preservation term, and E_d is the distortion control term; W is a weight matrix; V_q is the vertex coordinate matrix of the grid region containing a feature point q in the reference image; N is the number of feature point pairs; ||·|| denotes the L2 norm; Δs_j denotes the difference in slope between the j-th segment of a feature line and the previous segment; v is the normalized vector of a feature line; L(·) denotes the function computing the sum of the boundary lengths of a grid region; and V_a and V_b denote the vertex coordinate matrices of the a-th and b-th grid regions, respectively.
  3. The wide parallax image stitching method based on epipolar constraint and structure preservation according to claim 1 or 2, wherein the global homography transformation matrix H is expressed by formula (4), wherein p denotes a feature point in the image to be stitched, A is the matrix formed by the two endpoints of a feature line, and M is the number of feature line pairs.
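Formula (10) in claim 1 combines the global similarity and the per-grid local homography. Its exact form is not legible in this text; the NumPy sketch below assumes the linear blend commonly used in AANAP-style stitching (the weight `mu`, the renormalisation, and all names are illustrative assumptions):

```python
import numpy as np

def blend_transform(H_local, S_global, mu):
    """Per-grid transform as a weighted combination of the grid's local
    homography and the global similarity, in the spirit of formula (10).
    mu = 1 gives the pure local homography, mu = 0 the pure similarity."""
    H = mu * H_local + (1.0 - mu) * S_global
    return H / H[2, 2]          # renormalise the homogeneous scale to 1

def warp_point(H, p):
    """Apply a 3x3 projective transform to a 2-D point."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Blending toward the similarity in regions far from the overlap is what keeps the non-overlapping part of the panorama from being stretched by a purely projective warp.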

Description

Wide parallax image stitching method based on epipolar constraint and structure preservation

Technical Field

The invention belongs to the technical field of image processing, and in particular relates to a wide parallax image stitching method based on epipolar constraint and structure preservation.

Background

Image stitching registers and fuses multiple images with overlapping areas to generate a panoramic image with high resolution or a large field of view, and has been widely applied in scenes such as panoramic imaging, remote sensing and mapping, virtual reality, and unmanned systems. High-quality panoramic images provide complete visual information for tasks such as scene understanding and target recognition, so image stitching is a key technology in image processing. Existing image stitching methods mainly achieve registration through feature point matching and estimation of a global homography transformation model, which works well when parallax is small or the scene approximately satisfies the planar assumption. For wide parallax images, however, the large difference in shooting angles, significant scene depth variation, and ubiquitous non-coplanar structures make a single global homography model unable to accurately describe the true geometric relationship between the images, easily causing accumulated registration error, structural distortion, and visible seams. Some existing methods introduce local deformation or multi-model fusion to improve registration accuracy, but they still suffer from insufficient geometric-consistency constraints on feature matching, limited structure-preserving capability, and high computational complexity, making it difficult to balance stitching accuracy, structural consistency, and computational efficiency.
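The global-homography registration that the background describes can be illustrated with a minimal Direct Linear Transform (DLT) estimator in NumPy. This is the textbook method, not the patent's own estimator (which also uses feature line endpoints, per claim 3), and it omits the coordinate normalisation and RANSAC steps a practical stitcher would add:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate H such that dst ~ H @ src (homogeneous) from >= 4 matched
    point pairs given as N x 2 arrays, via the Direct Linear Transform:
    stack two linear constraints per pair and take the null vector of the
    system from the SVD."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)        # right singular vector of smallest
    return H / H[2, 2]              # singular value, rescaled so H[2,2] = 1
```

With four exact correspondences the null space is one-dimensional and the true homography is recovered up to scale; it is precisely when the scene violates the single-plane assumption that this global model breaks down, motivating the per-grid local homographies of the invention.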
Disclosure of Invention

Aiming at the defects of the prior art, the invention provides a wide parallax image stitching method based on epipolar constraint and structure preservation. By effectively using multiple types of feature information (feature points and feature lines), introducing the epipolar constraint, and taking global and local transformations into account simultaneously, the precision and stability of image stitching are improved; the method is particularly suitable for wide parallax images with significant depth variation and complex geometric structure.

The invention solves the technical problem with the following technical scheme: a wide parallax image stitching method based on epipolar constraint and structure preservation, comprising the following steps.

Step 1: extract and match features of the image to be stitched and the reference image to obtain initial feature point pairs and feature line pairs, screen the initial feature point pairs, and eliminate mismatched feature point pairs.

Step 2: construct a geometrically consistent feature matching set. For any matched feature line pair l and l', extract the corresponding endpoints and midpoints of l and l' to form candidate feature point pairs, where L and L' are the feature line sets formed by all feature lines of the image to be stitched and the reference image, respectively. From L, select a feature line m coplanar with l and obtain the intersection point p of the projection lines of l and m; from L', select a feature line m' coplanar with l' and obtain the intersection point p' of the projection lines of l' and m'; the points p and p' form a candidate feature point pair. Traverse all feature line pairs to obtain all candidate feature point pairs, and compute the coplanarity score of each candidate pair according to formula (1):

S_i = d(p_i, e_{p'_i}) + d(p'_i, e_{p_i})   (1)

where S_i denotes the coplanarity of the i-th candidate feature point pair p_i and p'_i, d(·,·) denotes the Euclidean distance between a point and a line, e_{p_i} and e_{p'_i} are the epipolar lines of the feature points p_i and p'_i respectively, and p_i and p'_i are feature points in the image to be stitched and the reference image, respectively. Candidate feature point pairs whose coplanarity score is below the coplanarity score threshold are retained as coplanar matching feature point pairs, and these are combined with the feature point pairs and feature line pairs obtained in step 1 to obtain the geometrically consistent feature matching set.

Step 3: estimate a global homography transformation matrix using the geometrically consistent feature matching set, pre-aligning an image to be spli