CN-116665084-B - Feature point processing method and related device

CN116665084B

Abstract

The application discloses a feature point processing method in the technical field of image processing. In the method, the rotation information of the image acquisition device at the moment each image frame is captured is obtained, and rotation transformation is applied to the feature point pairs in the images. This eliminates the rotation information between the feature point pairs and fundamentally cancels the rotational motion component introduced by the image acquisition device, which improves the effectiveness of filtering feature point pairs with a translation-dimension threshold and guarantees the accuracy of the feature point pairs obtained after filtering.

Inventors

  • LU WEI
  • NIE SHIYUE
  • LIN HUAN
  • ZHOU ZHENKUN

Assignees

  • Huawei Technologies Co., Ltd. (华为技术有限公司)

Dates

Publication Date
2026-05-08
Application Date
2022-02-18

Claims (17)

  1. A feature point processing method, characterized by comprising: performing feature point matching on the first image and the second image to obtain a plurality of feature point pairs, wherein each feature point pair in the plurality of feature point pairs comprises one feature point in the first image and one feature point in the second image; acquiring a first rotation angle and a second rotation angle, wherein the first rotation angle is the rotation angle of the image acquisition device when acquiring the first image, and the second rotation angle is the rotation angle of the image acquisition device when acquiring the second image; determining a pose conversion matrix based on an internal reference matrix of the image acquisition device, the first rotation angle and the second rotation angle, wherein the pose conversion matrix is used for eliminating rotation between feature point pairs; performing pose conversion on the plurality of feature point pairs according to the pose conversion matrix to obtain feature point pairs after the pose conversion, wherein each feature point pair after the pose conversion is in the same pose in the rotation dimension; and filtering the feature point pairs after the pose conversion based on a first translation threshold to obtain a plurality of filtered feature point pairs.
  2. The method according to claim 1, wherein the first rotation angle includes a plurality of first sub-rotation angles, which are the rotation angles of the image acquisition device when capturing the feature points belonging to the first image in the plurality of feature point pairs, respectively, and the second rotation angle includes a plurality of second sub-rotation angles, which are the rotation angles of the image acquisition device when capturing the feature points belonging to the second image in the plurality of feature point pairs, respectively; and the determining a pose conversion matrix based on the first rotation angle and the second rotation angle includes: acquiring a reference rotation angle; determining pose conversion matrices corresponding to the feature points belonging to the first image in the plurality of feature point pairs based on the plurality of first sub-rotation angles and the reference rotation angle; and determining pose conversion matrices corresponding to the feature points belonging to the second image in the plurality of feature point pairs based on the plurality of second sub-rotation angles and the reference rotation angle.
  3. The method of claim 1, wherein the obtaining the first rotation angle and the second rotation angle comprises: acquiring a first angular velocity of the image acquisition device when acquiring the first image and a second angular velocity of the image acquisition device when acquiring the second image; determining the first rotation angle according to the rotation angle of the image acquisition device when acquiring a third image, the first angular velocity and a first time interval, wherein the third image is the previous frame image of the first image, and the first time interval is the acquisition interval duration between the third image and the first image; and determining the second rotation angle according to the rotation angle of the image acquisition device when acquiring a fourth image, the second angular velocity and a second time interval, wherein the fourth image is the previous frame image of the second image, and the second time interval is the acquisition interval duration between the fourth image and the second image.
  4. The method according to any one of claims 1-3, wherein the first translation threshold is related to a motion state of the image acquisition device when acquiring images related to the second image; wherein the images related to the second image include the second image, the N frames before the second image, and the M frames after the second image, where N and M are integers greater than or equal to 0.
  5. The method according to claim 4, further comprising: acquiring a triaxial rotation angular velocity of the image acquisition device when acquiring the images related to the second image, and a first motion degree value of the image acquisition device when acquiring the first image; determining a second motion degree value of the image acquisition device when acquiring the second image based on the triaxial rotation angular velocity and the first motion degree value; and determining, among a plurality of thresholds, the first translation threshold corresponding to the second motion degree value.
  6. The method of claim 5, further comprising: acquiring a triaxial acceleration of the image acquisition device when acquiring the images related to the second image; wherein the determining a second motion degree value of the image acquisition device when acquiring the second image based on the triaxial rotation angular velocity and the first motion degree value comprises: determining the second motion degree value of the image acquisition device when acquiring the second image based on the triaxial acceleration, the triaxial rotation angular velocity and the first motion degree value.
  7. The method according to any one of claims 1-3, wherein the performing feature point matching on the first image and the second image to obtain a plurality of feature point pairs includes: extracting feature points in the first image and the second image to obtain a plurality of feature points; matching the plurality of feature points to obtain a plurality of original feature point pairs; and filtering the plurality of original feature point pairs according to a feature point pair filtering method to obtain the plurality of feature point pairs; wherein the feature point pair filtering method comprises one or more of a standard deviation filtering method, a fixed threshold filtering method and a random sample consensus (RANSAC) filtering method.
  8. A feature point processing apparatus, comprising: a processing module, configured to perform feature point matching on the first image and the second image to obtain a plurality of feature point pairs, wherein each feature point pair in the plurality of feature point pairs comprises one feature point in the first image and one feature point in the second image; and an acquisition module, configured to acquire a first rotation angle and a second rotation angle, wherein the first rotation angle is the rotation angle of the image acquisition device when acquiring the first image, and the second rotation angle is the rotation angle of the image acquisition device when acquiring the second image; wherein the processing module is further configured to determine a pose conversion matrix based on an internal reference matrix of the image acquisition device, the first rotation angle and the second rotation angle, and the pose conversion matrix is used for eliminating rotation between feature point pairs; the processing module is further configured to perform pose conversion on the plurality of feature point pairs according to the pose conversion matrix to obtain feature point pairs after the pose conversion, wherein each feature point pair after the pose conversion is in the same pose in the rotation dimension; and the processing module is further configured to filter the feature point pairs after the pose conversion based on a first translation threshold to obtain a plurality of filtered feature point pairs.
  9. The apparatus according to claim 8, wherein the first rotation angle includes a plurality of first sub-rotation angles, which are the rotation angles of the image acquisition device when capturing the feature points belonging to the first image in the plurality of feature point pairs, respectively, and the second rotation angle includes a plurality of second sub-rotation angles, which are the rotation angles of the image acquisition device when capturing the feature points belonging to the second image in the plurality of feature point pairs, respectively; and the processing module is specifically configured to: acquire a reference rotation angle; determine pose conversion matrices corresponding to the feature points belonging to the first image in the plurality of feature point pairs based on the plurality of first sub-rotation angles and the reference rotation angle; and determine pose conversion matrices corresponding to the feature points belonging to the second image in the plurality of feature point pairs based on the plurality of second sub-rotation angles and the reference rotation angle.
  10. The apparatus of claim 8, wherein the acquisition module is further configured to acquire a first angular velocity of the image acquisition device when acquiring the first image and a second angular velocity of the image acquisition device when acquiring the second image; the processing module is further configured to determine the first rotation angle according to the rotation angle of the image acquisition device when acquiring a third image, the first angular velocity and a first time interval, wherein the third image is the previous frame image of the first image, and the first time interval is the acquisition interval duration between the third image and the first image; and the processing module is further configured to determine the second rotation angle according to the rotation angle of the image acquisition device when acquiring a fourth image, the second angular velocity and a second time interval, wherein the fourth image is the previous frame image of the second image, and the second time interval is the acquisition interval duration between the fourth image and the second image.
  11. The apparatus of any one of claims 8-10, wherein the first translation threshold is related to a motion state of the image acquisition device when acquiring images related to the second image; wherein the images related to the second image include the second image, the N frames before the second image, and the M frames after the second image, where N and M are integers greater than or equal to 0.
  12. The apparatus of claim 11, wherein the acquisition module is further configured to acquire a triaxial rotation angular velocity of the image acquisition device when acquiring the images related to the second image, and a first motion degree value of the image acquisition device when acquiring the first image; the processing module is further configured to determine a second motion degree value of the image acquisition device when acquiring the second image based on the triaxial rotation angular velocity and the first motion degree value; and the processing module is further configured to determine, from a plurality of thresholds, the first translation threshold corresponding to the second motion degree value.
  13. The apparatus of claim 12, wherein the acquisition module is further configured to acquire a triaxial acceleration of the image acquisition device when acquiring the images related to the second image; and the processing module is further configured to determine the second motion degree value of the image acquisition device when acquiring the second image based on the triaxial acceleration, the triaxial rotation angular velocity and the first motion degree value.
  14. The apparatus of any one of claims 8-10, wherein the processing module is further configured to: extract feature points in the first image and the second image to obtain a plurality of feature points; match the plurality of feature points to obtain a plurality of original feature point pairs; and filter the plurality of original feature point pairs according to a feature point pair filtering method to obtain the plurality of feature point pairs; wherein the feature point pair filtering method comprises one or more of a standard deviation filtering method, a fixed threshold filtering method and a random sample consensus (RANSAC) filtering method.
  15. A feature point processing apparatus, comprising a memory storing code and a processor configured to execute the code, wherein the feature point processing apparatus performs the method of any one of claims 1 to 7 when the code is executed.
  16. A computer storage medium storing instructions which, when executed by a computer, cause the computer to carry out the method of any one of claims 1 to 7.
  17. A computer program product storing instructions that, when executed by a computer, cause the computer to implement the method of any one of claims 1 to 7.
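
The core of independent claim 1 — building a rotation-eliminating pose conversion matrix from the camera's internal reference (intrinsic) matrix and the two rotation angles, warping the feature points with it, and then filtering pairs by a translation threshold — can be sketched as follows. This is a minimal illustration, not the patented implementation: the Euler-angle convention, the example intrinsic matrix, and the pure-rotation homography form H = K·R₂·R₁ᵀ·K⁻¹ are assumptions on my part.

```python
import numpy as np

def euler_to_rotation(angles):
    """Rotation matrix from (rx, ry, rz) angles in radians (XYZ order assumed)."""
    rx, ry, rz = angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx),  np.cos(rx)]])
    Ry = np.array([[ np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz),  np.cos(rz), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def pose_conversion_matrix(K, angles1, angles2):
    """Homography that removes the rotation between the two camera poses
    under a pure-rotation model: H = K @ R2 @ R1^T @ K^-1."""
    R_rel = euler_to_rotation(angles2) @ euler_to_rotation(angles1).T
    return K @ R_rel @ np.linalg.inv(K)

def filter_pairs(pts1, pts2, K, angles1, angles2, trans_threshold):
    """Warp pts1 into the second image's rotational pose, then keep only
    pairs whose residual (translation-only) displacement is below the
    first translation threshold."""
    H = pose_conversion_matrix(K, angles1, angles2)
    pts1_h = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous coords
    warped = (H @ pts1_h.T).T
    warped = warped[:, :2] / warped[:, 2:3]               # back to pixel coords
    dist = np.linalg.norm(warped - pts2, axis=1)          # per-pair residual
    keep = dist < trans_threshold
    return pts1[keep], pts2[keep]
```

With identical rotation angles for both frames, the homography degenerates to the identity and the filter reduces to a plain pixel-distance threshold, which matches the claim's intuition that only the translation component should remain after pose conversion.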

Description

Feature point processing method and related device

Technical Field

The present application relates to the field of image processing technologies, and in particular, to a feature point processing method and a related device.

Background

Feature extraction and matching of images is an important task in many computer vision applications and is widely used in fields such as object detection, image retrieval, image recognition, video tracking, image stitching and three-dimensional reconstruction. Because the accuracy of feature matching has a great influence on subsequent image processing tasks, and a preliminary feature matching result often contains a lot of mismatching information, effectively filtering out mismatched feature points is an important task in the feature matching process. At present, most traditional filtering algorithms assume tiny motion changes between frames, consider feature regionality in the feature point extraction and matching stage, and then use a fixed threshold to filter out feature point pairs whose distance is too large. However, in some complex scenes where the image acquisition device moves fast or moving objects exist, the variation between image frames is large, and it is difficult for a conventional filtering algorithm to filter feature point pairs effectively, so the accuracy of the finally obtained feature point pairs is low.
Disclosure of Invention

The application provides a feature point processing method and a related device, which acquire the rotation information of the image acquisition device at the moment each image frame is captured and apply rotation transformation to the feature point pairs in the images, so as to eliminate the rotation information between the feature point pairs and fundamentally cancel the rotational motion component introduced by the image acquisition device, thereby improving the effectiveness of filtering feature point pairs with a translation-dimension threshold and guaranteeing the accuracy of the feature point pairs obtained after filtering.

A first aspect of the application provides a feature point processing method, which comprises performing feature point matching on a first image and a second image to obtain a plurality of feature point pairs, wherein each feature point pair in the plurality of feature point pairs comprises one feature point in the first image and one feature point in the second image. The first image and the second image are two images for which feature point matching needs to be performed. For example, the first image and the second image are two image frames consecutively acquired by the image acquisition device, or two consecutive image frames in a video acquired by the image acquisition device. The method further comprises obtaining a first rotation angle and a second rotation angle, wherein the first rotation angle is the rotation angle of the image acquisition device when acquiring the first image, and the second rotation angle is the rotation angle of the image acquisition device when acquiring the second image. The first rotation angle and the second rotation angle each comprise rotation angles in three directions, i.e., three-dimensional rotation angles.
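
The rotation-angle acquisition described above is detailed in claim 3 as a dead-reckoning step: the angle at the current frame equals the angle at the previous frame plus the angular velocity times the acquisition interval. A minimal sketch of this first-order integration (function and variable names are mine; per-axis values in radians and rad/s are assumed):

```python
def update_rotation_angle(prev_angle, angular_velocity, dt):
    """First-order integration per claim 3: rotation angle at the new frame =
    rotation angle at the previous frame + angular velocity * acquisition
    interval duration. Operates per-axis on (rx, ry, rz) tuples."""
    return tuple(a + w * dt for a, w in zip(prev_angle, angular_velocity))
```

For example, the first rotation angle would be computed as `update_rotation_angle(theta_third_image, omega_first, dt_third_to_first)`, and the second rotation angle analogously from the fourth image's angle.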
The method further comprises determining a pose conversion matrix based on the first rotation angle and the second rotation angle, wherein the pose conversion matrix is used for eliminating rotation between the feature point pairs. The pose conversion matrix may be used to convert the pose of the feature points in the first image to the pose of the feature points in the second image, i.e., to transform the feature points in the first image, or to convert the pose of the feature points in the second image to the pose of the feature points in the first image, i.e., to transform the feature points in the second image. Pose conversion is then performed on the plurality of feature point pairs according to the pose conversion matrix to obtain feature point pairs after the pose conversion, wherein each feature point pair after the pose conversion is in the same pose in the rotation dimension. Finally, the feature point pairs after the pose conversion are filtered based on a first translation threshold to obtain a plurality of filtered feature point pairs. The first translation threshold may be a preset threshold adapted to most moving scenes and scenes where moving objects exist. In this scheme, the rotation information of the image acquisition device when acquiring the image frames is obtained and the feature point pairs in the images are subjected to rotation transformation, so that the rotation information between the feature point pairs is eliminated and the motion component of the rotation dimension introduced by the image acquisition device is fundamentally offset.
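
Claims 5 and 6 refine the first translation threshold by deriving a motion degree value from the triaxial rotation angular velocity (and optionally the triaxial acceleration) together with the previous frame's motion degree value, then selecting the threshold from a set of candidates. The patent does not disclose the combining formula, so the exponential moving average and the threshold table below are purely illustrative assumptions:

```python
def motion_degree(gyro_xyz, accel_xyz, prev_degree, alpha=0.8):
    """Combine the magnitudes of the triaxial angular velocity and triaxial
    acceleration with the previous frame's motion degree value via an
    exponential moving average (weighting is an assumption, not from the patent)."""
    gyro_mag = sum(w * w for w in gyro_xyz) ** 0.5
    accel_mag = sum(a * a for a in accel_xyz) ** 0.5
    instantaneous = gyro_mag + accel_mag
    return alpha * prev_degree + (1 - alpha) * instantaneous

def select_translation_threshold(degree,
                                 table=((0.5, 3.0), (2.0, 8.0), (float("inf"), 15.0))):
    """Pick, from a plurality of thresholds, the first translation threshold whose
    motion-degree upper bound exceeds the current value (table values are
    illustrative: a faster-moving device tolerates a larger pixel displacement)."""
    for upper, threshold in table:
        if degree < upper:
            return threshold
    return table[-1][1]
```

The design intent mirrors claim 4: when the device is nearly static a tight threshold rejects more mismatches, while under fast motion a looser threshold avoids discarding correct pairs whose residual translation is legitimately large.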