CN-122023540-A - Depth camera external parameter calibration method, device and storage medium

CN 122023540 A

Abstract

The application discloses a depth camera external parameter calibration method, device, and storage medium, relating to the technical field of camera calibration. The method comprises: acquiring, through a depth camera of a robot, scene depth point cloud data of a calibration box and of the horizontal ground on which the calibration box is positioned; and performing iterative optimization on external parameter data of the depth camera based on the observation point cloud coordinates, the observation normal vector, the reference coordinates of the target vertex, and the reference normal vector of the horizontal ground, to obtain the target external parameter of the depth camera, wherein the reference coordinates of the target vertex are the coordinates of the target vertex calibrated in the robot coordinate system, and the reference normal vector is the normal vector of the horizontal ground calibrated in the robot coordinate system. The application addresses the technical problem that current external parameter calibration techniques for mobile-robot depth cameras yield a poor calibration effect.

Inventors

  • GU ZHENJIANG
  • HUANG WENHUA

Assignees

  • Youdi Robot (Wuxi) Co., Ltd. (优地机器人(无锡)股份有限公司)

Dates

Publication Date
2026-05-12
Application Date
2025-12-31

Claims (10)

  1. A depth camera external parameter calibration method, characterized by comprising the following steps: acquiring, through a depth camera of a robot, scene depth point cloud data of a calibration box and of the horizontal ground on which the calibration box is positioned; extracting observation point cloud coordinates of a target vertex of the calibration box and an observation normal vector of the horizontal ground from the scene depth point cloud data; and performing iterative optimization on external parameter data of the depth camera based on the observation point cloud coordinates, the observation normal vector, the reference coordinates of the target vertex, and the reference normal vector of the horizontal ground, to obtain the target external parameter of the depth camera, wherein the reference coordinates of the target vertex are the coordinates of the target vertex calibrated in a robot coordinate system, and the reference normal vector is the normal vector of the horizontal ground calibrated in the robot coordinate system.
  2. The depth camera external parameter calibration method according to claim 1, wherein the step of extracting the observation point cloud coordinates of the target vertex of the calibration box and the observation normal vector of the horizontal ground from the scene depth point cloud data comprises: separating a first point cloud part and a second point cloud part from the scene depth point cloud data, wherein the first point cloud part corresponds to the calibration box and the second point cloud part corresponds to the horizontal ground; extracting the observation point cloud coordinates of the target vertex of the calibration box based on the first point cloud part; and extracting the observation normal vector of the horizontal ground based on the second point cloud part.
  3. The depth camera external parameter calibration method according to claim 2, wherein the step of extracting the observation point cloud coordinates of the target vertex of the calibration box based on the first point cloud part comprises: performing plane feature analysis on the first point cloud part to obtain a first plane and a second plane, wherein the first plane is the main surface of the calibration box facing the depth camera, the second plane is the top surface of the calibration box connected with the main surface, and the first plane and the second plane are mutually perpendicular; acquiring the spatial intersection line between the first plane and the second plane, and screening a candidate point cloud set from the first point cloud part, wherein the distance from each candidate point in the candidate point cloud set to the spatial intersection line is smaller than a preset distance threshold; and calculating the spatial distance between any two candidate points in the candidate point cloud set, determining the pair of candidate points with the largest spatial distance, and taking the three-dimensional spatial coordinates of that pair of candidate points as the observation point cloud coordinates of the target vertex of the calibration box, respectively.
  4. The depth camera external parameter calibration method according to claim 2, wherein the step of extracting the observation normal vector of the horizontal ground based on the second point cloud part comprises: calculating a best-fit plane based on all data points in the second point cloud part; and extracting the normal vector of the best-fit plane from the standard equation of the best-fit plane, and normalizing the normal vector to obtain the observation normal vector of the horizontal ground.
  5. The depth camera external parameter calibration method according to claim 1, wherein the step of performing iterative optimization on the external parameter data of the depth camera based on the observation point cloud coordinates, the observation normal vector, the reference coordinates of the target vertex, and the reference normal vector of the horizontal ground to obtain the target external parameter of the depth camera comprises: processing the observation point cloud coordinates based on the external parameter data of the depth camera to obtain converted point cloud coordinates, and calculating the position deviation between the converted point cloud coordinates and the reference coordinates; processing the observation normal vector based on the external parameter data of the depth camera to obtain a converted normal vector, and calculating the direction deviation between the converted normal vector and the reference normal vector; and calculating a conversion loss based on the position deviation and the direction deviation, and iteratively optimizing the external parameter data based on the conversion loss to obtain the target external parameter.
  6. The depth camera external parameter calibration method according to claim 5, wherein the step of processing the observation point cloud coordinates based on the external parameter data of the depth camera to obtain converted point cloud coordinates, and calculating the position deviation between the converted point cloud coordinates and the reference coordinates, comprises: rotating and translating the observation point cloud coordinates through the external parameter data to obtain the converted point cloud coordinates; and calculating the spatial straight-line distance between the converted point cloud coordinates and the reference coordinates to obtain the position deviation.
  7. The depth camera external parameter calibration method according to claim 5, wherein the step of processing the observation normal vector based on the external parameter data of the depth camera to obtain a converted normal vector, and calculating the direction deviation between the converted normal vector and the reference normal vector, comprises: rotating the observation normal vector through the external parameter data to obtain the converted normal vector; and calculating the angle difference between the converted normal vector and the reference normal vector to obtain the direction deviation.
  8. The depth camera external parameter calibration method according to claim 5, wherein the step of calculating a conversion loss based on the position deviation and the direction deviation, and iteratively optimizing the external parameter data based on the conversion loss to obtain the target external parameter, comprises: accumulating the position deviation and the direction deviation to obtain the conversion loss; adjusting the external parameter data based on the conversion loss to obtain new external parameter data; taking the new external parameter data as the target external parameter when the new external parameter data satisfies a preset iterative optimization condition; and, when the new external parameter data does not satisfy the preset iterative optimization condition, executing, based on the new external parameter data, the step of processing the observation point cloud coordinates based on the external parameter data of the depth camera and the subsequent steps.
  9. An electronic device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the depth camera external parameter calibration method according to any one of claims 1 to 8.
  10. A storage medium, characterized in that the storage medium is a computer-readable storage medium on which a computer program is stored, and the computer program, when executed by a processor, implements the steps of the depth camera external parameter calibration method according to any one of claims 1 to 8.

Description

Depth camera external parameter calibration method, device and storage medium

Technical Field

The present application relates to the field of camera calibration technologies, and in particular to a depth camera external parameter calibration method, device, and storage medium.

Background

With the wide application of mobile robot technology, the depth camera serves as a core environment-sensing sensor, and calibrating the accurate coordinate relation between its installation position and the robot body (namely, the external parameters) is very important, as it directly affects the positioning, navigation, and operation precision of the robot. In current practice, one common calibration method relies only on a single plane feature (such as the ground) for constraint; although the procedure is simple and quick, the constraint information it provides is limited, and the complete spatial attitude and position parameters of the depth camera cannot be solved. Alternatively, constraints can be increased by introducing multi-sensor data, such as a laser radar, and registering it with the depth point cloud; this can improve calibration completeness, but it complicates the system composition and the calibration procedure, and is unsuitable for a lightweight or low-cost robot platform equipped only with a depth camera. Therefore, current external parameter calibration techniques for mobile-robot depth cameras suffer from a poor calibration effect. The foregoing is provided merely to facilitate understanding of the technical solutions of the present application and is not an admission that it constitutes prior art.
Disclosure of Invention

The application mainly aims to provide a depth camera external parameter calibration method, device, and storage medium, so as to solve the technical problem of poor external parameter calibration in current calibration techniques for mobile-robot depth cameras. To achieve the above object, the present application provides a depth camera external parameter calibration method, which includes: acquiring, through a depth camera of a robot, scene depth point cloud data of a calibration box and of the horizontal ground on which the calibration box is positioned; extracting observation point cloud coordinates of a target vertex of the calibration box and an observation normal vector of the horizontal ground from the scene depth point cloud data; and performing iterative optimization on external parameter data of the depth camera based on the observation point cloud coordinates, the observation normal vector, the reference coordinates of the target vertex, and the reference normal vector of the horizontal ground, to obtain the target external parameter of the depth camera, wherein the reference coordinates of the target vertex are the coordinates of the target vertex calibrated in a robot coordinate system, and the reference normal vector is the normal vector of the horizontal ground calibrated in the robot coordinate system.
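The ground-normal extraction mentioned above (fitting a best-fit plane to the ground points and normalizing its normal) can be sketched as follows. This is an illustrative SVD-based fit under the usual total-least-squares formulation, not necessarily the exact computation in the patent; the function name and the synthetic data are assumptions made for the sketch.

```python
import numpy as np

def ground_normal(points):
    """Unit normal of the best-fit plane through `points` (shape N x 3).

    The plane is fitted by SVD of the centered points: the right singular
    vector with the smallest singular value is the plane normal. It is
    already unit length; here it is also oriented so its z component is
    non-negative, giving a consistent observation normal vector.
    """
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]
    return normal if normal[2] >= 0 else -normal

# Usage: synthetic ground points near the plane z = 0, with mild noise.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1.0, 1.0, 200),
                       rng.uniform(-1.0, 1.0, 200),
                       rng.normal(0.0, 1e-3, 200)])
n = ground_normal(pts)  # close to (0, 0, 1)
```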
In an embodiment, the step of extracting the observation point cloud coordinates of the target vertex of the calibration box and the observation normal vector of the horizontal ground from the scene depth point cloud data includes: separating a first point cloud part and a second point cloud part from the scene depth point cloud data, wherein the first point cloud part corresponds to the calibration box and the second point cloud part corresponds to the horizontal ground; extracting the observation point cloud coordinates of the target vertex of the calibration box based on the first point cloud part; and extracting the observation normal vector of the horizontal ground based on the second point cloud part.

In an embodiment, the step of extracting the observation point cloud coordinates of the target vertex of the calibration box based on the first point cloud part includes: performing plane feature analysis on the first point cloud part to obtain a first plane and a second plane, wherein the first plane is the main surface of the calibration box facing the depth camera, the second plane is the top surface of the calibration box connected with the main surface, and the first plane and the second plane are mutually perpendicular; acquiring the spatial intersection line between the first plane and the second plane, and screening a candidate point cloud set from the first point cloud part, wherein the distance from each candidate point in the candidate point cloud set to the spatial intersection line is smaller than a preset distance threshold; and calculating the spatial distance between any two candidate points in the candidate point cloud set, determining the pair of candidate points with the largest spatial distance, and taking the three-dimensional spatial coordinates of that pair as the observation point cloud coordinates of the target vertex of the calibration box, respectively.
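The vertex-extraction embodiment above can be sketched as follows: compute the intersection line of the two fitted planes, keep the points within the distance threshold of that line, and take the farthest-apart pair of kept points as the edge endpoints. This is a minimal illustrative sketch, not the patent's implementation; the function name, the plane representation n·x + d = 0, and the example data are assumptions.

```python
import numpy as np
from itertools import combinations

def vertex_observations(points, n1, d1, n2, d2, dist_thresh=0.01):
    """Endpoints of the edge shared by two planes given as n·x + d = 0."""
    # Direction of the spatial intersection line.
    direction = np.cross(n1, n2)
    direction = direction / np.linalg.norm(direction)
    # Any point on both planes: least-squares solution of the 2x3 system.
    p0 = np.linalg.lstsq(np.vstack([n1, n2]), -np.array([d1, d2]),
                         rcond=None)[0]
    # Point-to-line distance for every point in the cloud.
    v = points - p0
    dists = np.linalg.norm(v - np.outer(v @ direction, direction), axis=1)
    cand = points[dists < dist_thresh]  # candidate point cloud set
    # Farthest pair among candidates (brute force; fine for small sets).
    i, j = max(combinations(range(len(cand)), 2),
               key=lambda ij: np.linalg.norm(cand[ij[0]] - cand[ij[1]]))
    return cand[i], cand[j]

# Usage: top surface z = 1 and main surface x = 0; their shared edge runs
# along the y axis through (0, 0, 1).
pts = np.array([[0.0, -0.5, 1.0], [0.0, 0.1, 1.0], [0.0, 0.8, 1.0],
                [0.5, 0.0, 1.0], [0.0, 0.0, 0.0]])
p, q = vertex_observations(pts, np.array([0.0, 0.0, 1.0]), -1.0,
                           np.array([1.0, 0.0, 0.0]), 0.0)
```

In the example, the three points on the edge are kept and the pair at y = -0.5 and y = 0.8 is returned as the two observed vertices.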