CN-121994279-A - Multi-sensor joint calibration method and device

CN 121994279 A

Abstract

The application relates to a multi-sensor joint calibration method and device, an electronic device, and a data acquisition system. The method comprises: obtaining sensor data acquired by each sensor in a data acquisition system; determining preliminary spatial extrinsic parameters among the sensors according to the sensor data acquired by each sensor; constructing a space-time fusion cost function, wherein the space-time fusion cost function comprises a spatial alignment error term determined based on the preliminary spatial extrinsic parameters and a time synchronization error term used for compensating time delay among the sensors; and, based on the space-time fusion cost function, performing joint optimization on the time delay parameters among the sensors and the preliminary spatial extrinsic parameters to obtain a joint calibration result, wherein the joint calibration result comprises the jointly calibrated spatial extrinsic parameters among the sensors and the jointly calibrated time delay parameters. The method can improve calibration precision.

Inventors

  • ZHOU YANG
  • HE YI
  • SU XINGYI

Assignees

  • Chongqing Fenghuang Technology Co., Ltd. (重庆凤凰技术有限公司)

Dates

Publication Date
2026-05-08
Application Date
2026-03-17

Claims (10)

  1. A multi-sensor joint calibration method, the method comprising: acquiring sensor data acquired by each sensor in a data acquisition system; determining preliminary spatial extrinsic parameters among the sensors according to the sensor data acquired by each sensor; constructing a space-time fusion cost function, wherein the space-time fusion cost function comprises a spatial alignment error term determined based on the preliminary spatial extrinsic parameters and a time synchronization error term used for compensating time delay between the sensors; and, based on the space-time fusion cost function, performing joint optimization on the time delay parameters between the sensors and the preliminary spatial extrinsic parameters to obtain a joint calibration result, wherein the joint calibration result comprises the jointly calibrated spatial extrinsic parameters between the sensors and the jointly calibrated time delay parameters.
  2. The method of claim 1, wherein constructing the space-time fusion cost function comprises: determining a spatial alignment error term based on the preliminary spatial extrinsic parameters and the sensor data acquired by each sensor; determining a time synchronization error term for compensating the time delay between the sensors; and performing weighted fusion of the spatial alignment error term and the time synchronization error term to obtain the space-time fusion cost function.
  3. The method of claim 1, wherein performing joint optimization on the time delay parameters between the sensors and the preliminary spatial extrinsic parameters based on the space-time fusion cost function to obtain a joint calibration result comprises: performing at least one round of iterative optimization on the time delay parameters between the sensors and the preliminary spatial extrinsic parameters based on the space-time fusion cost function until an iteration-ending condition is met, and obtaining the joint calibration result according to the space-time fusion cost function when the iteration-ending condition is met.
  4. The method of claim 1, wherein determining preliminary spatial extrinsic parameters between the sensors based on the sensor data acquired by each sensor comprises: determining a scene type corresponding to the data acquisition system according to the sensor data acquired by each sensor; determining a target feature matching mode according to the scene type; and performing coarse calibration based on the sensor data acquired by each sensor according to the target feature matching mode to obtain the preliminary spatial extrinsic parameters between the sensors.
  5. The method of claim 1, wherein the sensors in the data acquisition system comprise a camera, a radar and an inertial measurement unit, and wherein determining preliminary spatial extrinsic parameters between the sensors based on the sensor data acquired by each sensor comprises: determining candidate spatial extrinsic parameters between the camera and the radar based on image data acquired by the camera and point cloud data acquired by the radar; acquiring a motion constraint condition corresponding to the inertial measurement unit, wherein the motion constraint condition comprises a pose change amount determined based on sensor data acquired by the inertial measurement unit; correcting the candidate spatial extrinsic parameters according to the motion constraint condition to obtain preliminary spatial extrinsic parameters between the camera and the radar; and determining, based on the sensor data acquired by the inertial measurement unit, preliminary spatial extrinsic parameters between the inertial measurement unit and the camera and between the inertial measurement unit and the radar, respectively.
  6. The method of claim 1, wherein the sensors in the data acquisition system comprise a camera, a radar and an inertial measurement unit, and wherein determining preliminary spatial extrinsic parameters between the sensors based on the sensor data acquired by each sensor comprises: determining candidate spatial extrinsic parameters between the camera and the radar based on image data acquired by the camera and point cloud data acquired by the radar; acquiring a motion constraint condition corresponding to the inertial measurement unit, wherein the motion constraint condition comprises a pose change amount determined based on sensor data acquired by the inertial measurement unit; correcting the candidate spatial extrinsic parameters according to the motion constraint condition to obtain initial spatial extrinsic parameters between the camera and the radar; determining a scene type corresponding to the data acquisition system according to the image data and the point cloud data; and performing coarse calibration based on the initial spatial extrinsic parameters, the image data, the point cloud data and the sensor data acquired by the inertial measurement unit, according to a target feature matching mode matched with the scene type, to obtain the initial spatial extrinsic parameters among the camera, the radar and the inertial measurement unit.
  7. The method of claim 6, wherein determining the scene type corresponding to the data acquisition system from the image data and the point cloud data comprises: determining scene characteristics corresponding to the data acquisition system according to the image data and the point cloud data, wherein the scene characteristics comprise at least one of image texture entropy, point cloud density, or dynamic pixel ratio; and determining the scene type corresponding to the data acquisition system based on the scene characteristics.
  8. The method of claim 1, wherein acquiring sensor data acquired by each sensor in the data acquisition system comprises: triggering, through a hardware synchronous triggering unit, all sensors in the data acquisition system to acquire data synchronously, so as to obtain the sensor data acquired by each sensor.
  9. The method according to any one of claims 1 to 8, further comprising: when the joint calibration result fails calibration verification, returning to the step of determining the preliminary spatial extrinsic parameters among the sensors according to the sensor data acquired by the sensors, until a joint calibration result that passes the calibration verification is obtained, so as to obtain a target calibration result of the data acquisition system.
  10. A multi-sensor joint calibration device, the device comprising: a sensor data acquisition module, configured to acquire sensor data acquired by each sensor in a data acquisition system; a preliminary spatial extrinsic parameter determining module, configured to determine preliminary spatial extrinsic parameters among the sensors according to the sensor data acquired by the sensors; a cost function construction module, configured to construct a space-time fusion cost function, wherein the space-time fusion cost function comprises a spatial alignment error term determined based on the preliminary spatial extrinsic parameters and a time synchronization error term used for compensating time delay between the sensors; and a joint optimization module, configured to perform joint optimization on the time delay parameters among the sensors and the preliminary spatial extrinsic parameters based on the space-time fusion cost function to obtain a joint calibration result, wherein the joint calibration result comprises the jointly calibrated spatial extrinsic parameters between the sensors and the jointly calibrated time delay parameters.
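The claims above describe a space-time fusion cost that sums a spatial alignment error term (driven by the extrinsic parameters) and a time synchronization error term (driven by the inter-sensor delay), optimized jointly. The patent does not disclose a concrete formulation, so the following is only a minimal 2-D sketch under assumed definitions: the ground-truth values, the trajectory, the residual forms, and the fusion weights are all illustrative, not the patent's method.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical ground truth (illustration only): sensor B's frame is rotated
# by 0.1 rad and offset by (0.5, -0.2) relative to sensor A, and B's clock
# lags A's by 30 ms.
TRUE_THETA, TRUE_T, TRUE_DT = 0.1, np.array([0.5, -0.2]), 0.03

def traj(t):
    """Reference trajectory of a tracked point, in sensor A's frame."""
    t = np.asarray(t)
    return np.stack([np.cos(t), np.sin(2.0 * t)], axis=-1)

t_a = np.linspace(0.0, 2.0, 80)          # sensor A timestamps
c, s = np.cos(TRUE_THETA), np.sin(TRUE_THETA)
R_true = np.array([[c, -s], [s, c]])     # rotation taking B's frame to A's
p_b = (traj(t_a) - TRUE_T) @ R_true      # the same points seen in B's frame
t_b = t_a - TRUE_DT                      # timestamps on B's (lagging) clock

W_SPACE, W_TIME = 1.0, 1.0               # fusion weights (tuning assumption)

def residuals(x):
    theta, tx, ty, dt = x
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    # Spatial alignment error term: B's points mapped into A's frame with the
    # candidate extrinsics, compared against the reference trajectory at B's
    # delay-compensated timestamps.
    spatial = (p_b @ R.T + np.array([tx, ty]) - traj(t_b + dt)).ravel()
    # Time synchronization error term: instantaneous speed seen by B
    # (rotation- and translation-invariant) versus the reference speed after
    # delay compensation -- it constrains dt independently of the extrinsics.
    t_mid = 0.5 * (t_b[1:] + t_b[:-1])
    speed_b = np.linalg.norm(np.diff(p_b, axis=0), axis=1) / np.diff(t_b)
    h = 1e-4
    speed_ref = np.linalg.norm(
        (traj(t_mid + dt + h) - traj(t_mid + dt - h)) / (2 * h), axis=-1)
    temporal = speed_b - speed_ref
    return np.concatenate([W_SPACE * spatial, W_TIME * temporal])

# Joint optimization over extrinsics (theta, tx, ty) and delay dt at once.
sol = least_squares(residuals, x0=np.zeros(4))
theta_hat, tx_hat, ty_hat, dt_hat = sol.x
```

Because both error terms share the same parameter vector, the solver trades spatial and temporal consistency against each other in one problem, which is the "balance in the space-time dimension" the claims refer to; optimizing either term alone would leave the other misestimated.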

Description

Multi-sensor joint calibration method and device

Technical Field

The application relates to the technical field of data acquisition, and in particular to a multi-sensor joint calibration method and device, an electronic device, and a data acquisition system.

Background

In fields such as autonomous driving and robot navigation, multi-sensor joint calibration is a key technology for environment perception and positioning. In the related art, multi-sensor joint calibration includes offline calibration and online calibration. Offline calibration usually relies on artificial markers such as checkerboards and three-dimensional targets: sensor data are acquired under specific poses, and the extrinsic parameter matrix is solved using geometric constraints. Online calibration is realized through feature matching, motion estimation, or deep learning models. However, the accuracy of multi-sensor joint calibration in the related art is low.

Disclosure of Invention

In view of the above technical problem, the application provides a multi-sensor joint calibration method and device, an electronic device, and a data acquisition system that can improve calibration precision.
In a first aspect, the present application provides a multi-sensor joint calibration method, including: acquiring sensor data acquired by each sensor in a data acquisition system; determining preliminary spatial extrinsic parameters among the sensors according to the sensor data acquired by the sensors; constructing a space-time fusion cost function, wherein the space-time fusion cost function comprises a spatial alignment error term determined based on the preliminary spatial extrinsic parameters and a time synchronization error term used for compensating time delay among the sensors; and, based on the space-time fusion cost function, performing joint optimization on the time delay parameters and the preliminary spatial extrinsic parameters between the sensors to obtain a joint calibration result, wherein the joint calibration result comprises the jointly calibrated spatial extrinsic parameters and time delay parameters between the sensors.

In the above multi-sensor joint calibration method, a space-time fusion cost function that simultaneously comprises a spatial alignment error term and a time synchronization error term is constructed, so that the preliminary spatial extrinsic parameters and the time delay parameters among the sensors are jointly optimized. This addresses the loss of calibration precision caused by space-time misalignment, allows the joint calibration result to reach a balance across the spatial and temporal dimensions, and improves the precision of multi-sensor joint calibration.

In an alternative embodiment of the first aspect, constructing the space-time fusion cost function includes determining a spatial alignment error term based on the preliminary spatial extrinsic parameters and the sensor data acquired by each sensor, determining a time synchronization error term for compensating the time delays between the sensors, and performing weighted fusion of the spatial alignment error term and the time synchronization error term to obtain the space-time fusion cost function.
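The joint optimization described above is iterated until an ending condition is met (claim 3). One common way to realize such a loop, shown here purely as an assumed illustration on a 1-D toy problem (the signal, the offset/delay parameters, and the grid resolution are all invented for the sketch), is alternating minimization: solve for one parameter in closed form while holding the delay fixed, then refine the delay, and stop when the cost no longer improves.

```python
import numpy as np

# Toy setup (assumption, not the patent's algorithm): one sensor observes a
# reference signal f with an unknown additive offset b and clock delay dt.
f = lambda t: np.sin(t)
t = np.linspace(0.0, 6.0, 600)
TRUE_B, TRUE_DT = 0.7, 0.15
y = f(t - TRUE_DT) + TRUE_B              # observed, offset and delayed

def cost(b, dt):
    """Space-time style cost: misfit after compensating offset and delay."""
    return np.mean((y - f(t - dt) - b) ** 2)

b, dt = 0.0, 0.0
grid = np.linspace(-0.5, 0.5, 2001)      # candidate delays, 0.5 ms steps
prev = np.inf
for _ in range(50):
    b = np.mean(y - f(t - dt))           # closed-form offset update
    dt = grid[np.argmin([cost(b, g) for g in grid])]  # delay refinement
    c = cost(b, dt)
    if prev - c < 1e-12:                 # iteration-ending condition
        break
    prev = c
```

Each half-step can only decrease the cost, so the loop converges monotonically; the ending condition here is a cost-improvement threshold, though an iteration cap or parameter-change threshold would serve the same role.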
In this alternative embodiment, by fusing the spatial alignment error term and the time synchronization error term into a space-time fusion cost function, the spatial geometric constraints and the temporal motion constraints can be jointly optimized, which effectively mitigates the limitation of single-dimension optimization and improves calibration precision.

In an alternative embodiment of the first aspect, determining the preliminary spatial extrinsic parameters between the sensors according to the sensor data collected by the sensors includes determining a scene type corresponding to the data acquisition system according to the sensor data collected by the sensors, determining a target feature matching mode according to the scene type, and performing coarse calibration based on the sensor data collected by the sensors according to the target feature matching mode to obtain the preliminary spatial extrinsic parameters between the sensors. In this optional embodiment, classifying the scene type from the collected sensor data reduces the mismatches introduced by fixed matching logic; selecting a corresponding feature matching mode for coarse calibration for each scene type ensures the accuracy of the feature correspondences, provides a reasonable starting point for fine optimization, and improves the success rate of joint calibration.

In an alternative embodiment of the first aspect, each sensor in the data acquisition system comprises a camera, a radar and an inertial measurement unit, the preliminary spatial extrinsic parameters between each sensor are determined according to sensor data acquired by each sensor, the prel