CN-121982102-A - Charging cover pose detection method based on machine vision and storage medium

CN 121982102 A

Abstract

The invention relates to the technical field of image processing, and in particular discloses a machine-vision-based charging cover pose detection method and a storage medium. The method comprises: photographing the charging cover with a monocular camera both without and with a marked laser line, and recording the photographs as a first image and a second image; extracting the laser line from the first and second images; extracting at least three edge feature points of the charging cover from the laser line; computing a feature point cloud from the edge feature points; matching the feature point cloud with a pre-acquired target point cloud to obtain the corresponding homogeneous transformation matrix; and determining the pose parameters of the charging cover from the homogeneous transformation matrix. By combining a monocular camera with laser line marking, the invention achieves active detection of the charging cover pose, with the core advantages of low cost, low hardware complexity, and resistance to ambient light interference.

Inventors

  • HOU NING
  • LI ENHU
  • DANG JIANXIN
  • XUE JIAN
  • NING JINGTAO

Assignees

  • 绿能慧充数字技术有限公司

Dates

Publication Date
2026-05-05
Application Date
2026-01-29

Claims (8)

  1. A machine-vision-based charging cover pose detection method, characterized by comprising the following steps: S1, photographing the charging cover with a monocular camera both without and with a marked laser line, and recording the photographs as a first image and a second image; S2, extracting the laser line from the first image and the second image, and extracting at least three edge feature points of the charging cover from the laser line; S3, computing a feature point cloud from the edge feature points; S4, matching the feature point cloud with a pre-acquired target point cloud to obtain a corresponding homogeneous transformation matrix; and S5, determining the pose parameters of the charging cover from the homogeneous transformation matrix.
  2. The machine-vision-based charging cover pose detection method according to claim 1, wherein step S3 comprises: S31, projecting the laser line onto a pre-configured checkerboard calibration plate so that it passes through corner points of the plate; S32, determining the relative pose of the calibration plate with respect to the monocular camera from those corner points; S33, mapping the corner points lying on the laser line into the coordinate system of the monocular camera according to the relative pose, to obtain candidate coordinates; S34, when a plurality of different candidate coordinates has been obtained, determining the constraint plane corresponding to the laser line from all the candidate coordinates; and S35, determining the feature point cloud by combining the edge feature points with the constraint plane and the coordinate system of the monocular camera.
  3. The machine-vision-based charging cover pose detection method according to claim 2, wherein step S35 comprises: S351, for each edge feature point, determining a mapping constraint ray from the edge feature point and the coordinate system of the monocular camera; S352, detecting the intersection of the mapping constraint ray with the constraint plane, and determining the three-dimensional coordinates of the edge feature point from that intersection; and S353, collecting all the three-dimensional coordinates to obtain the feature point cloud.
  4. The machine-vision-based charging cover pose detection method according to claim 1, wherein step S5 comprises: S51, separating the three-dimensional offset vector and the rotation matrix from the homogeneous transformation matrix; and S52, converting the rotation matrix into Euler angles, so that the three-dimensional offset vector and the Euler angles serve as the pose parameters of the charging cover.
  5. The machine-vision-based charging cover pose detection method according to claim 1, wherein extracting the laser line from the first image and the second image in step S2 comprises: S21, preprocessing the first image and the second image respectively to obtain a first intermediate image and a second intermediate image; S22, applying the same Gaussian blur to the first intermediate image and the second intermediate image respectively to obtain a first target image and a second target image; and S23, subtracting the first target image from the second target image to obtain the laser line.
  6. The machine-vision-based charging cover pose detection method according to claim 2, wherein the plurality of different candidate coordinates is obtained by: adjusting the posture of the checkerboard calibration plate a plurality of times and repeating steps S31 to S33 each time; or adjusting the degree of freedom and/or the type of the laser source corresponding to the laser line a plurality of times and repeating steps S1 to S33 each time.
  7. An electronic device, comprising: at least one processor; and at least one memory storing at least one program; wherein the machine-vision-based charging cover pose detection method according to any one of claims 1 to 6 is implemented when the at least one program is executed by the at least one processor.
  8. A computer-readable storage medium storing a processor-executable program, the processor-executable program, when executed by a processor, being configured to implement the machine-vision-based charging cover pose detection method according to any one of claims 1 to 6.
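The laser line extraction of claim 5 (blur both frames identically, then subtract the unmarked frame from the marked one) can be sketched in plain numpy. This is a minimal illustration, not the patent's implementation: the kernel size, the threshold value, and all function names are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """Discrete 1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur: convolve each row, then each column."""
    k = gaussian_kernel1d(sigma, radius=int(3 * sigma))
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def extract_laser_line(first_img, second_img, sigma=1.0, thresh=10.0):
    """Steps S21-S23: identical Gaussian blur on both frames, then subtract
    the first (no laser line) from the second (with laser line)."""
    a = gaussian_blur(first_img.astype(float), sigma)
    b = gaussian_blur(second_img.astype(float), sigma)
    diff = b - a  # only the laser line survives the subtraction
    return (diff > thresh).astype(np.uint8)  # binary mask of the laser line

# Synthetic example: a bright horizontal stripe added to a flat background.
base = np.full((40, 40), 30.0)
marked = base.copy()
marked[20, :] += 100.0  # simulated laser line at row 20
mask = extract_laser_line(base, marked)
```

Because both frames receive the same blur, static scene content and its edge artifacts cancel in the difference, leaving only the projected line.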

Description

Charging cover pose detection method based on machine vision and storage medium

Technical Field

The invention relates to the technical field of image processing, and in particular to a machine-vision-based charging cover pose detection method, an electronic device and a computer-readable storage medium.

Background

The charging cover is the protective part of the charging interface of a new energy vehicle. It is mainly used for dust-proofing, water-proofing and preventing the intrusion of foreign matter, thereby ensuring charging safety. At present, stereoscopic vision, such as a binocular camera, a structured-light camera or a ToF camera, is generally adopted to locate the 6-DOF pose of the charging cover: point cloud data of the charging cover is acquired and the pose is calculated from it. However, binocular and ToF cameras are sensitive to ambient light and easily disturbed by it, while a structured-light camera requires an additionally integrated light source module, making it bulky, costly and insufficiently compact.

Disclosure of Invention

The present invention aims to solve, at least to some extent, one of the technical problems in the related art. The invention therefore provides a machine-vision-based charging cover pose detection method and a storage medium, whose core advantages are low cost, low hardware complexity and resistance to ambient light interference.
In a first aspect, an embodiment of the present invention provides a machine-vision-based method for detecting the pose of a charging cover, comprising the following steps: S1, photographing the charging cover with a monocular camera both without and with a marked laser line, and recording the photographs as a first image and a second image; S2, extracting the laser line from the first image and the second image, and extracting at least three edge feature points of the charging cover from the laser line; S3, computing a feature point cloud from the edge feature points; S4, matching the feature point cloud with a pre-acquired target point cloud to obtain a corresponding homogeneous transformation matrix; and S5, determining the pose parameters of the charging cover from the homogeneous transformation matrix. Optionally, in one embodiment of the present invention, step S3 comprises: S31, projecting the laser line onto a pre-configured checkerboard calibration plate so that it passes through corner points of the plate; S32, determining the relative pose of the calibration plate with respect to the monocular camera from those corner points; S33, mapping the corner points lying on the laser line into the coordinate system of the monocular camera according to the relative pose, to obtain candidate coordinates; S34, when a plurality of different candidate coordinates has been obtained, determining the constraint plane corresponding to the laser line from all the candidate coordinates; and S35, determining the feature point cloud by combining the edge feature points with the constraint plane and the coordinate system of the monocular camera.
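Step S34 amounts to fitting a plane through the candidate coordinates collected from several calibration-plate poses. A standard way to do this, shown below as a hedged sketch (the function name and sample points are illustrative, not from the patent), is a least-squares fit via SVD of the centered points:

```python
import numpy as np

def fit_constraint_plane(points):
    """Fit the laser constraint plane (step S34) through 3-D candidate
    coordinates. Returns (n, d) with unit normal n and offset d such that
    n . x + d = 0 for points x on the plane."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered point set is the direction of least variance: the normal.
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]
    d = -n @ centroid
    return n, d

# Illustrative candidate coordinates from several plate poses, all lying
# on the plane z = 2 in the camera frame.
cands = [(0, 0, 2), (1, 0, 2), (0, 1, 2), (1, 1, 2), (0.5, 2, 2)]
n, d = fit_constraint_plane(cands)
```

With noisy real candidates the same fit minimizes the sum of squared point-to-plane distances, which is why at least three non-collinear candidates are required.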
Optionally, in one embodiment of the present invention, step S35 comprises: S351, for each edge feature point, determining a mapping constraint ray from the edge feature point and the coordinate system of the monocular camera; S352, detecting the intersection of the mapping constraint ray with the constraint plane, and determining the three-dimensional coordinates of the edge feature point from that intersection; and S353, collecting all the three-dimensional coordinates to obtain the feature point cloud. Optionally, in one embodiment of the present invention, step S5 comprises: S51, separating the three-dimensional offset vector and the rotation matrix from the homogeneous transformation matrix; and S52, converting the rotation matrix into Euler angles, so that the three-dimensional offset vector and the Euler angles serve as the pose parameters of the charging cover. Optionally, in one embodiment of the present invention, extracting the laser line from the first image and the second image in step S2 comprises: S21, preprocessing the first image and the second image respectively to obtain a first intermediate image and a second intermediate image; S22, applying the same Gaussian blur to the first intermediate image and the second intermediate image respectively to obtain a first target image and a second target image; and S23, subtracting the first target image from the second target image to obtain the laser line. Alternatively, i
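The geometry behind steps S351-S352 (back-project a pixel to a camera ray and intersect it with the constraint plane) and steps S51-S52 (split the homogeneous transform into translation and Euler angles) can be sketched as follows. This is an illustrative reading under stated assumptions: a pinhole intrinsic matrix K, a Z-Y-X (yaw-pitch-roll) Euler convention with |pitch| < 90 degrees, and made-up numbers; the patent does not fix these choices.

```python
import numpy as np

def intersect_ray_plane(pixel, K, n, d):
    """Steps S351-S352: back-project a pixel through intrinsics K into a ray
    from the camera center, then intersect it with the plane n . x + d = 0."""
    u, v = pixel
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction
    t = -d / (n @ ray)                              # camera center is origin
    return t * ray                                  # 3-D point on the plane

def pose_from_homogeneous(T):
    """Steps S51-S52: separate a 4x4 homogeneous transform into a 3-D offset
    vector and Z-Y-X Euler angles (yaw, pitch, roll)."""
    R, t = T[:3, :3], T[:3, 3]
    pitch = -np.arcsin(R[2, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return t, np.array([yaw, pitch, roll])

# Illustrative intrinsics; the principal-point ray meets the plane z = 2.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P = intersect_ray_plane((320, 240), K, np.array([0.0, 0, 1]), -2.0)

# A pure 90-degree yaw plus a small translation offset.
T = np.eye(4)
T[:3, :3] = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
T[:3, 3] = [0.1, 0.2, 0.3]
t, euler = pose_from_homogeneous(T)
```

The ray-plane intersection is what turns a single 2-D pixel into a unique 3-D coordinate: without the laser constraint plane, a monocular camera alone cannot resolve depth along the ray.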