
CN-121982349-A - Method for realizing efficient image matching


Abstract

The invention discloses a method for realizing efficient image matching. The method comprises the steps of: constructing an intrinsic matrix from the focal length f, the 35 mm equivalent focal length f35, the image width w and the image height h; converting the pixel coordinates (u, v) of the four corners of an image into camera coordinates through the intrinsic matrix; converting the longitude lon, latitude lat and altitude alt of the image into ECEF coordinates, and constructing an extrinsic matrix of the image from the ECEF coordinates together with the yaw angle yaw, pitch angle pitch and roll angle roll; converting the camera coordinates of the four corners of the image into ECEF coordinates through the extrinsic matrix; converting the four ECEF coordinates into longitude-latitude coordinates and connecting them in sequence to form the projected contour area of the image; and judging whether two images need to be matched according to the intersection relation of the obtained contour areas. The method makes full use of the metadata recorded in pictures collected by an unmanned aerial vehicle, and can effectively reduce meaningless matching and improve matching efficiency while reducing accumulated errors.

Inventors

  • TAN XING
  • ZHAO YAN
  • LIU CONG
  • ZHOU YAQIN
  • LU ZHENG
  • ZHENG YANG
  • XU YUANMING

Assignees

  • 武汉地大信息工程股份有限公司

Dates

Publication Date
2026-05-05
Application Date
2026-01-26

Claims (6)

  1. A method for realizing efficient image matching, characterized in that the contour areas of pictures are projected onto the same plane by using the longitude lon, latitude lat, altitude alt, focal length f and 35 mm equivalent focal length f35 in the Exif information of each picture, together with the yaw angle yaw, pitch angle pitch and roll angle roll in the Xmp information, and whether two pictures need to be matched is judged by the intersection relation of the contour areas, wherein the specific matching flow is as follows: S1, constructing an intrinsic matrix by using the focal length f, the 35 mm equivalent focal length f35, the image width w and the image height h; S2, converting the pixel coordinates (u, v) of the four corners of the image into camera coordinates by using the intrinsic matrix constructed in step S1; S3, converting the longitude lon, latitude lat and altitude alt of the image into ECEF coordinates, and constructing an extrinsic matrix of the image by using the ECEF coordinates, the yaw angle yaw, the pitch angle pitch and the roll angle roll; S4, converting the camera coordinates of the four corner points of the image into ECEF coordinates by using the extrinsic matrix constructed in step S3, converting the ECEF coordinates into longitude-latitude coordinates, and connecting the four obtained longitude-latitude coordinates in sequence to form the projected contour area of the picture; and S5, judging whether the two pictures need to be matched according to the intersection relation of the picture contour areas obtained in step S4 (illustrative sketches of steps S1, S2 and S5 follow the claims).
  2. The method for realizing efficient image matching according to claim 1, wherein the plane is the plane in which the take-off point of the unmanned aerial vehicle is located.
  3. The method for realizing efficient image matching according to claim 1, wherein in step S1 the intrinsic matrix is constructed as \( K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \), where fx = f/dx and fy = f/dy are the focal lengths in pixels, f is the physical focal length, dx and dy are the physical dimensions of each pixel on the sensor, and (cx, cy) is the coordinate of the intersection point of the optical axis with the image plane (a sketch of this construction follows the claims).
  4. The method for realizing efficient image matching according to claim 1, wherein in step S2 the specific procedure for converting the pixel coordinates (u, v) into camera coordinates is as follows: S21, the camera coordinates (X, Y, Z) are related to the pixel coordinates (u, v) through the intrinsic matrix by \( Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \); S22, the intrinsic matrix is inverted, so that the pixel coordinates (u, v) can be converted into camera coordinates (X, Y, Z) by \( \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = Z\,K^{-1} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} \), where Z = alt - startH and startH is the take-off point altitude of the unmanned aerial vehicle; the camera coordinates of the four corners can then be calculated from Z and the pixel coordinates (0, 0), (w, 0), (w, h) and (0, h) of the four corners of the image (see the corner-conversion sketch after the claims).
  5. The method for realizing efficient image matching according to claim 1, wherein in step S5, when the projection point of any corner of one picture lies inside the projection area of another picture, the two pictures intersect.
  6. The method for realizing efficient image matching according to claim 5, wherein in step S5, a ray is emitted from a projection point in an arbitrary direction; if the number of intersection points of the ray with the sides of the projection area of the other picture is odd, the projection point lies inside that projection area, and if the number is even, the projection point lies outside it (a point-in-polygon sketch follows the claims).
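The following sketches are illustrative only and are not part of the claims. First, a minimal Python sketch of the intrinsic-matrix construction of claims 1 and 3, assuming the common convention that the 35 mm equivalent focal length f35 is referred to a 36 mm-wide full-frame sensor, that pixels are square, and that the principal point (cx, cy) lies at the image centre; the function name build_intrinsics is illustrative, not from the patent.

```python
import numpy as np

def build_intrinsics(f35: float, w: int, h: int) -> np.ndarray:
    # fx = f/dx: with a 36 mm-wide 35 mm reference frame,
    # f / sensor_width = f35 / 36, hence f/dx = (f35 / 36) * w.
    fx = f35 / 36.0 * w
    fy = fx                      # square pixels assumed
    cx, cy = w / 2.0, h / 2.0    # principal point at the image centre
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```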
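Next, a sketch of the pixel-to-camera conversion of claim 4, reusing the matrix K from build_intrinsics; the name corners_to_camera is illustrative, and Z = alt - startH follows step S22.

```python
def corners_to_camera(K: np.ndarray, w: int, h: int,
                      alt: float, start_h: float) -> np.ndarray:
    # Z = alt - startH: camera height above the take-off plane (claim 4).
    Z = alt - start_h
    K_inv = np.linalg.inv(K)
    corners_px = [(0, 0), (w, 0), (w, h), (0, h)]
    # [X, Y, Z]^T = Z * K^-1 [u, v, 1]^T  (claim 4, step S22)
    return np.array([Z * (K_inv @ np.array([u, v, 1.0]))
                     for u, v in corners_px])
```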
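Finally, a sketch of the intersection test of claims 5 and 6: a horizontal ray is cast to the right of the projection point and edge crossings are counted, an odd count meaning the point is inside. The names point_in_polygon and footprints_intersect are illustrative.

```python
def point_in_polygon(pt, polygon):
    # Claim 6: cast a ray from pt; odd crossing count => inside.
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's y-level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:       # crossing lies to the right of pt
                inside = not inside
    return inside

def footprints_intersect(poly_a, poly_b):
    # Claim 5: the pictures intersect when any corner of one picture's
    # projection lies inside the other picture's projection area.
    return (any(point_in_polygon(p, poly_b) for p in poly_a) or
            any(point_in_polygon(p, poly_a) for p in poly_b))
```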

Description

Method for realizing efficient image matching

Technical Field

The invention relates to the technical field of image matching, in particular to a method for realizing efficient image matching.

Background

In picture matching, two approaches are common: pairwise matching and global matching. Pairwise matching is the basic step of image stitching: feature comparison is performed on only two possibly overlapping images at a time. Feature points (such as SIFT or ORB) are extracted independently from each image, point correspondences are established by computing the similarity of the feature descriptors, and a local transformation model (such as a homography matrix) is then estimated with RANSAC. The core of the method is locality and parallelism, which makes it suitable for rapid alignment of adjacent images. The method nevertheless has several defects. Accumulated errors are gradually amplified when many images are stitched in series, so that the end images are seriously misplaced (for example, head and tail cannot be closed). Weak correlations are missed: since only adjacent image pairs are matched, non-adjacent images that overlap (as in unordered photo sets) can be ignored, so that some images cannot be correctly fused into the panorama. Redundant computation arises because stitching unordered images requires a brute-force search over all possible image pairs, which has high computational complexity.

Global matching refers to matching all pictures pairwise and integrating the results of all pairwise matches in a joint optimization. The geometric constraints of all matched pairs are solved uniformly through loop-closure detection (such as matching the head image against the tail image) and bundle adjustment, the transformation parameters of each image are optimized, and the global reprojection error is minimized, thereby eliminating accumulated drift and ensuring overall consistency. The defects of this method are high computational complexity (bundle adjustment requires solving a large-scale nonlinear optimization problem involving all matching points, whose cost grows rapidly with the number of images), dependence on the initial estimate, and complex implementation: modules such as loop-closure detection and graph optimization must be designed, so the engineering difficulty is higher than that of simple pairwise matching. Moreover, if the pairwise matches contain many mismatches, the global optimization may converge to a wrong solution.

For example, consider the following arrangement of 10 pictures:

01 02 03 04 05
10 09 08 07 06

Under pairwise matching, picture 1 is matched with picture 2, picture 2 with picture 3, picture 3 with picture 4, and so on; if 1 matches 2 and 2 matches 3, matching 1 against 3 may additionally be considered. The disadvantage is that the matching is incomplete: in the arrangement above, picture 01 may well overlap pictures 10 and 09, but these pairs are ignored by pairwise matching. Under global matching, all pictures are matched pairwise: picture 1 is matched with pictures 2, 3, ..., 10, picture 2 with pictures 3, 4, ..., 10, picture 3 with pictures 4, 5, ..., 10, and so on.
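To make the difference in workload concrete, a small sketch (illustrative, not from the patent) counts the candidate pairs each strategy produces for the 10 pictures above:

```python
def sequential_pairs(n):
    # Pairwise matching: only adjacent pictures are compared.
    return [(i, i + 1) for i in range(1, n)]

def global_pairs(n):
    # Global matching: every picture is compared with every other one.
    return [(i, j) for i in range(1, n + 1) for j in range(i + 1, n + 1)]

print(len(sequential_pairs(10)))  # 9 adjacent pairs
print(len(global_pairs(10)))      # 45 = 10 * 9 / 2 pairs
```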
Global matching is thus complete, but the number of matches is large and mismatching becomes more likely; for example, if identical ground objects appear in picture 1 and picture 6, picture 1 is matched with picture 6, which then distorts the subsequent judgment.

Disclosure of Invention

Aiming at the above technical problems, the invention provides a method for realizing efficient image matching, which makes full use of the metadata recorded in pictures collected by an unmanned aerial vehicle and can effectively reduce meaningless matching and improve matching efficiency while reducing accumulated errors. The method projects the contour area of each image onto the same plane by using the longitude lon, latitude lat, altitude alt, focal length f and 35 mm equivalent focal length f35 in the Exif information of the image, together with the yaw angle yaw, pitch angle pitch and roll angle roll in the Xmp information, and judges whether two images need to be matched by the intersection relation of the contour areas. The specific matching process comprises the following steps: S1, constructing an intrinsic matrix by using the focal length f, the 35 mm equivalent focal length f35, the image width w and the image height h; S2, converting the pixel coordinates (u, v) of the four corners of the image into camera coordinates by using the intrinsic matrix constructed in step S1; S3, converting the longitude lon, latitude lat and altitude alt of the image into ECEF coordinates, and constructing an extrinsic matrix of the image by using the ECEF coordinates, the yaw angle yaw, the pitch angle pitch and the roll angle roll
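The extracted description cuts off here, but claims 1 and 3-6 describe the full pipeline. Below is a minimal end-to-end sketch under stated assumptions: standard WGS84 ellipsoid constants for the geodetic/ECEF conversions of steps S3-S4, the build_intrinsics and corners_to_camera helpers from the sketches after the claims, and an extrinsic matrix deliberately collapsed to a pure translation (the rotation composing yaw/pitch/roll with the local ENU-to-ECEF frame is omitted to keep the sketch short; a real implementation must include it). All function names are illustrative, not from the patent.

```python
import math
import numpy as np

# WGS84 ellipsoid constants (standard values, not from the patent).
_A = 6378137.0
_E2 = 6.69437999014e-3

def geodetic_to_ecef(lon, lat, alt):
    # Step S3: convert (lon, lat) in degrees and alt in metres to ECEF.
    lam, phi = math.radians(lon), math.radians(lat)
    n = _A / math.sqrt(1.0 - _E2 * math.sin(phi) ** 2)
    return np.array([(n + alt) * math.cos(phi) * math.cos(lam),
                     (n + alt) * math.cos(phi) * math.sin(lam),
                     (n * (1.0 - _E2) + alt) * math.sin(phi)])

def ecef_to_geodetic(p):
    # Inverse of the above via iterative latitude refinement.
    x, y, z = p
    lam = math.atan2(y, x)
    r = math.hypot(x, y)
    phi = math.atan2(z, r * (1.0 - _E2))
    for _ in range(5):  # a few iterations suffice at aerial altitudes
        n = _A / math.sqrt(1.0 - _E2 * math.sin(phi) ** 2)
        alt = r / math.cos(phi) - n
        phi = math.atan2(z, r * (1.0 - _E2 * n / (n + alt)))
    return math.degrees(lam), math.degrees(phi), alt

def footprint(lon, lat, alt, f35, w, h, start_h):
    # Steps S1-S4 with a translation-only extrinsic (rotation omitted):
    # corner camera coordinates are offset from the camera's ECEF origin
    # and converted back to (lon, lat) to form the contour polygon.
    K = build_intrinsics(f35, w, h)
    cam = corners_to_camera(K, w, h, alt, start_h)
    origin = geodetic_to_ecef(lon, lat, alt)
    return [ecef_to_geodetic(origin + c)[:2] for c in cam]
```

Two footprints produced this way can then be fed to footprints_intersect from the sketch after the claims to decide, per step S5, whether the picture pair needs feature matching at all.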