CN-118736019-B - Laser radar-camera online self-calibration method based on semantic edge alignment
Abstract
The invention discloses a lidar-camera online self-calibration method based on semantic edge alignment. First, semantic segmentation is performed on the data acquired by the camera and by the lidar to obtain image and point cloud semantic segmentation results; edges are then extracted from the image semantic segmentation result, and from the point cloud semantic segmentation result after surround-view transformation and densification. After the point cloud semantic edges are projected into the image pixel coordinate system using the biased initial extrinsic parameters, the extrinsic parameters between the lidar and the camera are optimized and corrected by continuously searching, from coarse to fine, for the maximum matching score. The invention achieves online joint extrinsic self-calibration without a calibration board, adapts to semantic edge matching of arbitrary shapes, does not depend on specific scenes, offers high calibration accuracy and good reliability, improves the scene adaptability of online lidar-camera self-calibration, and has high practical value for accurate multi-sensor fusion in autonomous driving.
Inventors
- XIANG ZHIYU
- PANG BOWEN
Assignees
- Zhejiang University (浙江大学)
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2024-06-17
Claims (7)
- 1. A lidar-camera online self-calibration method based on semantic edge alignment, characterized by comprising the following steps: 1) performing semantic segmentation on image data acquired by a camera to obtain an image semantic segmentation result, and extracting an image semantic edge from the segmentation result; 2) performing semantic segmentation on point cloud data acquired by a lidar to obtain a point cloud semantic segmentation result, and extracting a point cloud semantic edge from the segmentation result; in step 2), the point cloud semantic edge is extracted as follows: the point cloud semantic segmentation result is first transformed into a surround view to obtain a sparse surround-view image with 4 channels representing the 3D coordinates of the points and the semantic prediction result, the sparse image is then filled into a dense surround-view image, and edges are extracted on the dense image to obtain the point cloud semantic edge; 3) iteratively optimizing the extrinsic parameters according to the projected 2D point cloud edge and the image semantic edge so that the two are aligned, thereby obtaining a calibrated extrinsic matrix; step 3) is specifically: 3.1) projecting the point cloud semantic edge onto the image according to the current extrinsic matrix and the camera intrinsic matrix to generate a projected 2D point cloud edge in the camera image coordinate system; 3.2) for each 2D point cloud projection point in the projected edge, counting the image semantic edge points of the same category, recording the count as the point's matching number, and marking the point as a matched edge point if its matching number is at least 1; 3.3) if the ratio of matched edge points to all points in the projected 2D point cloud edge is below a preset threshold, raising a calibration anomaly alarm, otherwise executing 3.4); 3.4) weighting each 2D point cloud projection point by its semantic category weight and matching number to obtain its matching confidence, and taking the sum of the matching confidences of all projection points in the projected edge as the matching confidence of the current extrinsic matrix; 3.5) repeating 3.1)-3.4) while searching the extrinsic parameters from coarse to fine, and taking the extrinsic matrix with the highest matching confidence as the calibrated extrinsic matrix.
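Steps 3.1)-3.5) above can be sketched as follows. This is a minimal illustrative Python/NumPy sketch, not the patent's implementation: the matching window radius, the class-weight dictionary, and the greedy yaw-only perturbation search are all assumptions introduced here for concreteness.

```python
import numpy as np

def project_edge_points(pts_xyz, labels, T, K, img_shape):
    """Step 3.1: project 3D semantic edge points into the image with
    extrinsic matrix T (4x4) and intrinsic matrix K (3x3); return pixel
    coordinates and labels of the points that fall inside the image."""
    pts_h = np.hstack([pts_xyz, np.ones((len(pts_xyz), 1))])
    cam = (T @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 0.1                 # keep points in front of camera
    uvw = (K @ cam[in_front].T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = img_shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[ok], labels[in_front][ok]

def matching_confidence(uv, lbl, edge_map, class_weight, radius=2):
    """Steps 3.2-3.4: edge_map[v, u] holds the class of image semantic edge
    pixels (-1 = not an edge). A projected point's matching number is the
    count of same-class edge pixels in a small window around it (assumed
    neighborhood); matched-point ratio supports the step-3.3 anomaly check."""
    conf, matched = 0.0, 0
    for (u, v), c in zip(uv, lbl):
        win = edge_map[max(v - radius, 0):v + radius + 1,
                       max(u - radius, 0):u + radius + 1]
        m = int((win == c).sum())              # matching number (step 3.2)
        if m >= 1:
            matched += 1
            conf += class_weight.get(c, 1.0) * m   # weighted sum (step 3.4)
    return conf, matched / max(len(uv), 1)

def search_extrinsics(T0, steps, eval_fn):
    """Step 3.5 (illustration only): greedy coarse-to-fine search over yaw
    perturbations, keeping the extrinsic matrix with the highest score."""
    best_T, best_score = T0, eval_fn(T0)
    for step in steps:                          # e.g. [0.02, 0.005] rad
        improved = True
        while improved:
            improved = False
            for s in (+step, -step):
                Rz = np.eye(4)
                Rz[:2, :2] = [[np.cos(s), -np.sin(s)], [np.sin(s), np.cos(s)]]
                cand = Rz @ best_T
                sc, _ = eval_fn(cand) if isinstance(eval_fn(cand), tuple) else (eval_fn(cand), 0)
                if sc > best_score:
                    best_T, best_score, improved = cand, sc, True
    return best_T
```

A full search would perturb all six extrinsic degrees of freedom, first with coarse steps and then with finer ones around the best candidate.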
- 2. The lidar-camera online self-calibration method based on semantic edge alignment according to claim 1, wherein in 1), the image semantic edge is obtained by applying an edge extraction method to the image semantic segmentation result.
- 3. The lidar-camera online self-calibration method based on semantic edge alignment according to claim 1, wherein in 1), a neural network performs the semantic segmentation of the image data acquired by the camera to obtain the image semantic segmentation result.
- 4. The lidar-camera online self-calibration method based on semantic edge alignment according to claim 1, wherein in 2), a neural network performs the semantic segmentation of the point cloud data acquired by the lidar to obtain the point cloud semantic segmentation result.
- 5. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 4 when the computer program is executed.
- 6. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
- 7. A computer program product comprising computer programs/instructions which, when executed by a processor, implement the steps of the method of any of claims 1 to 4.
Description
Laser radar-camera online self-calibration method based on semantic edge alignment
Technical Field
The invention relates to lidar-camera self-calibration in the technical fields of autonomous driving and multi-sensor fusion, and in particular to a lidar-camera online self-calibration method based on semantic edge alignment.
Background
To perceive the surrounding environment accurately and stably, autonomous vehicles are typically equipped with a suite of different sensors, of which cameras and lidars are the two main types. Their complementary nature makes them the preferred combination for many perception tasks: a lidar acquires spatial data over a large range but with low resolution and no color information, while a camera acquires high-resolution RGB images but is sensitive to lighting and provides no distance information. To remedy each other's weaknesses, the lidar-camera combination has become a typical and indispensable setup for mobile robots and autonomous vehicles. A key prerequisite for this combination is an accurate extrinsic parameter, i.e. an estimate of the transformation matrix between the two sensor coordinate systems. Extrinsic calibration is usually performed with artificial targets such as checkerboards. However, due to sensor aging, jolting, and collisions during operation, the accuracy of these calibration parameters may degrade over time; on vehicles in particular, large errors in the rotation parameters are more common than errors in the translation parameters. An accurate and reliable online calibration method is therefore needed, one that recalibrates the drifted parameters during driving by effectively fusing geometric and optical information.
Existing self-calibration techniques generally require specific structures in the scene, such as pillars, poles, or large planes, or prior constraints such as parallel lane lines, and thus adapt poorly to general scenes for online self-calibration. The present method exploits the semantic edge information of arbitrarily shaped objects found in ordinary road scenes and achieves self-calibration by aligning the semantic edges in the image and in the lidar data, greatly improving robustness to the calibration scene.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and provides a semantic-edge-alignment-based lidar-camera online self-calibration method that requires no calibration device and corrects the extrinsic deviation between sensors while the autonomous vehicle is driving. To achieve the above purpose, the invention adopts the following technical scheme:
1. Lidar-camera online self-calibration method based on semantic edge alignment
1) Perform semantic segmentation on image data acquired by a camera to obtain an image semantic segmentation result, and extract an image semantic edge from the segmentation result;
2) Perform semantic segmentation on point cloud data acquired by a lidar to obtain a point cloud semantic segmentation result, and extract a point cloud semantic edge from the segmentation result;
3) Iteratively optimize the extrinsic parameters according to the projected 2D point cloud edge and the image semantic edge so that the two are aligned, thereby obtaining the calibrated extrinsic matrix.
In step 1), the image semantic edge is obtained by applying an edge extraction method to the image semantic segmentation result.
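The edge extraction in step 1) can be illustrated with a minimal sketch: a pixel of the segmentation label map is marked as a semantic edge whenever any 4-neighbor belongs to a different class. This is one simple edge extraction method, assumed here for illustration; the patent does not prescribe a particular one.

```python
import numpy as np

def semantic_edges(label_map):
    """Return a boolean mask of semantic edge pixels: a pixel is an edge
    if any of its 4-neighbors has a different class label.
    label_map: (H, W) integer class map from the segmentation network."""
    lm = label_map
    edge = np.zeros_like(lm, dtype=bool)
    edge[:, :-1] |= lm[:, :-1] != lm[:, 1:]   # right neighbor differs
    edge[:, 1:]  |= lm[:, 1:]  != lm[:, :-1]  # left neighbor differs
    edge[:-1, :] |= lm[:-1, :] != lm[1:, :]   # lower neighbor differs
    edge[1:, :]  |= lm[1:, :]  != lm[:-1, :]  # upper neighbor differs
    return edge
```

Keeping the class label of each edge pixel (rather than only the mask) is what later allows category-aware matching against the projected point cloud edge.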
In step 2), the point cloud semantic edge is extracted from the point cloud semantic segmentation result as follows: the segmentation result is first transformed into a surround view to obtain a sparse surround-view image with 4 channels representing the 3D coordinates of the points and the semantic prediction result; the sparse image is then filled and completed into a dense surround-view image, and edges are extracted on the dense image with an edge detection method to obtain the point cloud semantic edge.
Step 3) is specifically as follows:
3.1) Project the point cloud semantic edge onto the image according to the current extrinsic matrix and the camera intrinsic matrix, generating a projected 2D point cloud edge in the camera image coordinate system;
3.2) For each 2D point cloud projection point in the projected edge, count the image semantic edge points of the same category as that projection point
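The surround-view transformation in step 2) can be sketched as a spherical projection that scatters each labeled point into a sparse 4-channel image. The beam count, azimuth resolution, and vertical field of view below are assumptions (typical 64-beam values, not specified in the patent), and the subsequent filling/densification and edge detection steps are omitted.

```python
import numpy as np

def to_range_view(pts, sem, H=64, W=1024,
                  fov_up=np.deg2rad(3.0), fov_down=np.deg2rad(-25.0)):
    """Scatter a labeled point cloud into a sparse (H, W, 4) surround-view
    image: channels 0-2 hold the 3D coordinates, channel 3 the semantic
    prediction. fov_up/fov_down are assumed sensor limits."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    r = np.linalg.norm(pts, axis=1)
    yaw = np.arctan2(y, x)                         # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    u = ((0.5 * (1.0 - yaw / np.pi)) * W).astype(int) % W
    v = np.clip(((fov_up - pitch) / (fov_up - fov_down) * H).astype(int), 0, H - 1)
    img = np.zeros((H, W, 4), dtype=np.float32)
    order = np.argsort(-r)                         # nearer points overwrite farther
    img[v[order], u[order], :3] = pts[order]
    img[v[order], u[order], 3] = sem[order]
    return img
```

Cells never hit by a point stay zero, which is why the patent densifies the sparse image before running edge detection on it.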