CN-122015806-A - Mapping method, device, equipment and product based on laser point cloud and image data
Abstract
The invention discloses a mapping method, device, equipment and product based on laser point cloud and image data, relating to the technical field of multi-sensor data fusion. The method comprises: obtaining the laser point cloud data, image data, angular velocity data and acceleration data acquired by a device at each sampling time point; fusing the laser point cloud data and the image data to obtain point cloud map data; calculating the pose of each Marker; fusing the poses of Markers with the same ID number to obtain the average pose of each ID number Marker; calculating a coordinate system transformation matrix of each Marker relative to a reference ID number Marker; transforming the pose of each Marker into the coordinate system of the reference ID number Marker to obtain the transformed pose of each Marker; generating Marker map data according to the transformed poses; performing the same coordinate system transformation on the point cloud map data; and fusing the coordinate-aligned point cloud map data with the Marker map data to obtain scene point cloud map data. The method unifies the Marker map and the laser point cloud map in a single fusion, improving the navigation accuracy and stability of the robot.
Inventors
- SHI JINZHU
- ZHANG TAO
- CHEN HAO
- LI JIANGTAO
Assignees
- 光象(北京)科技有限公司
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-02-11
Claims (10)
- 1. A mapping method based on laser point cloud and image data, comprising: acquiring laser point cloud data acquired by a laser radar at each sampling time point in a sampling period, image data acquired by a camera at each sampling time point, and angular velocity data and acceleration data acquired by an inertial measurement unit at each sampling time point, wherein the laser radar, the camera and the inertial measurement unit are all installed on the same mobile acquisition equipment; fusing the laser point cloud data and the image data of the same sampling time point to obtain point cloud map data for each sampling time point; determining the pose of the camera at each sampling time point based on the pose of the laser radar at each sampling time point and the extrinsic parameter data of the laser radar and the camera; detecting Markers in each image data and calculating the pose of each Marker in each image data; fusing the poses of Markers with the same ID number across the image data by the Markley method to obtain the average pose of each ID number Marker; calculating a coordinate system transformation matrix of the initial coordinate system of each ID number Marker relative to the coordinate system of a reference ID number Marker based on the average pose of each ID number Marker and the average pose of the reference ID number Marker, and converting the poses of all Markers in each image data into the coordinate system of the reference ID number Marker based on the coordinate system transformation matrix to obtain the converted pose of each Marker in each image data; generating, from the converted pose of each Marker in each image data, Marker map data that corresponds to each image data and contains the Marker poses; based on the coordinate system transformation matrix, performing coordinate system transformation on the point cloud map data of the sampling time point corresponding to each image data to obtain coordinate-aligned point cloud map data for each sampling time point; and fusing the coordinate-aligned point cloud map data of each sampling time point with the corresponding Marker map data to obtain the final scene point cloud map data of each sampling time point.
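For illustration only, a minimal Python sketch of the relative-transform step of claim 1, assuming each average pose is stored as a quaternion-plus-translation pair; the function and variable names are hypothetical, not from the patent:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_matrix(quat_xyzw, translation):
    """Build a 4x4 homogeneous transform from a quaternion (x, y, z, w) and a translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat(quat_xyzw).as_matrix()
    T[:3, 3] = translation
    return T

def transform_to_reference(avg_poses, reference_id):
    """Express every Marker's average pose in the reference Marker's coordinate system.

    avg_poses: dict mapping Marker ID -> (quaternion, translation) in the global frame.
    Returns a dict mapping Marker ID -> 4x4 pose relative to the reference Marker.
    """
    T_ref_inv = np.linalg.inv(pose_to_matrix(*avg_poses[reference_id]))
    # T_ref_inv @ T_i is the coordinate system transformation of claim 1:
    # it maps each Marker's pose into the reference Marker's frame.
    return {mid: T_ref_inv @ pose_to_matrix(q, t) for mid, (q, t) in avg_poses.items()}
```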
- 2. The mapping method based on laser point cloud and image data according to claim 1, wherein after calculating the pose of each Marker in each image data, the method further comprises: converting the pose of each Marker in each image data into a global coordinate system fixed at the initial moment; and removing Markers with abnormal position fluctuation from each image data to obtain filtered image data; correspondingly, fusing the poses of Markers with the same ID number across the image data by the Markley method to obtain the average pose of each ID number Marker comprises: fusing the poses corresponding to the same ID number Marker in the filtered image data by the Markley method to obtain the average pose of each ID number Marker.
- 3. The mapping method based on laser point cloud and image data according to claim 2, wherein removing Markers with abnormal position fluctuation from each image data to obtain the filtered image data comprises: for each ID number, calculating the median absolute deviation of the poses of that ID number Marker across the image data; treating any ID number Marker whose median absolute deviation exceeds a preset absolute deviation threshold as an abnormal Marker; and eliminating all abnormal Markers from each image data to obtain the filtered image data.
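A minimal sketch of the median-absolute-deviation filter of claim 3, assuming each Marker's per-image positions are collected into an array; the threshold value and all names are illustrative assumptions:

```python
import numpy as np

def filter_markers_by_mad(observations, mad_threshold=0.05):
    """Drop Markers whose position fluctuates abnormally across images (claim 3).

    observations: dict mapping Marker ID -> (N, 3) array of that Marker's
    positions observed in N different images, in the global frame.
    mad_threshold: preset absolute deviation threshold in metres (assumed value).
    """
    kept = {}
    for marker_id, positions in observations.items():
        median = np.median(positions, axis=0)
        # Median absolute deviation of the per-image positions from their median.
        mad = np.median(np.abs(positions - median), axis=0)
        if np.all(mad <= mad_threshold):
            kept[marker_id] = positions  # stable Marker: keep its observations
        # Markers exceeding the threshold are treated as abnormal and removed.
    return kept
```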
- 4. The mapping method based on laser point cloud and image data according to claim 2, wherein fusing the poses corresponding to the same ID number Marker in the filtered image data by the Markley method to obtain the average pose of each ID number Marker comprises: calculating the quaternions and the translation vector average of each ID number Marker in the filtered image data; constructing a weighting matrix from the quaternions of the same ID number Marker in the filtered image data; performing eigendecomposition on the weighting matrix to determine its largest eigenvalue; taking the eigenvector corresponding to the largest eigenvalue as the target average quaternion; and combining the target average quaternion with the translation vector average of each ID number Marker in the filtered image data to obtain the average pose of each ID number Marker.
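A minimal sketch of Markley quaternion averaging as named in claim 4; the equal-weight default and helper names are assumptions:

```python
import numpy as np

def markley_average_quaternion(quats, weights=None):
    """Markley quaternion averaging (claim 4): the average quaternion is the
    eigenvector, associated with the largest eigenvalue, of the weighted
    outer-product matrix M = sum_i w_i * q_i q_i^T. Since q and -q enter M
    identically, the method is insensitive to quaternion sign flips.

    quats: (N, 4) array of unit quaternions observed for one Marker ID.
    """
    quats = np.asarray(quats, dtype=float)
    weights = np.ones(len(quats)) if weights is None else np.asarray(weights, dtype=float)
    M = np.einsum('i,ij,ik->jk', weights, quats, quats)  # 4x4 weighting matrix
    eigvals, eigvecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    avg = eigvecs[:, -1]                  # eigenvector of the largest eigenvalue
    return avg / np.linalg.norm(avg)

def average_pose(quats, translations):
    """Combine the Markley-average quaternion with the mean translation."""
    return markley_average_quaternion(quats), np.mean(np.asarray(translations), axis=0)
```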
- 5. The mapping method based on laser point cloud and image data according to claim 1, wherein before calculating the coordinate system transformation matrix of the initial coordinate system of each ID number Marker relative to the coordinate system of the reference ID number Marker based on the average pose of each ID number Marker and the average pose of the reference ID number Marker, the method further comprises: judging whether a reference ID number Marker is specified; and if no reference ID number Marker is specified, randomly designating one ID number Marker as the reference ID number Marker.
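A trivial sketch of the fallback rule in claim 5; the function name and data layout are hypothetical:

```python
import random

def choose_reference_marker(avg_poses, reference_id=None):
    """Use the specified reference Marker ID if given; otherwise pick one at random (claim 5)."""
    if reference_id is not None:
        return reference_id
    return random.choice(sorted(avg_poses.keys()))
```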
- 6. The mapping method based on laser point cloud and image data according to claim 1, wherein calculating the pose of each Marker in each image data comprises: calculating, through a PnP algorithm, the pose of each Marker in each image data relative to the camera at the corresponding sampling time point; and calculating the pose of each Marker in the image data based on the pose of the camera at each sampling time point and the pose of each Marker relative to the camera at the corresponding sampling time point.
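A minimal sketch of claim 6 using OpenCV's solvePnP; the corner ordering, the Marker-frame convention (corners on the z = 0 plane) and all names are assumptions, not the patent's specification:

```python
import cv2
import numpy as np

def marker_pose_from_pnp(corners_2d, marker_size, camera_matrix, dist_coeffs, T_world_camera):
    """Estimate a Marker's pose with PnP, then lift it to the world frame (claim 6).

    corners_2d: (4, 2) detected corner pixels of one Marker.
    marker_size: physical side length of the square Marker in metres.
    T_world_camera: 4x4 camera pose at this sampling time point (derived from
    the laser radar pose and the lidar-camera extrinsic parameters).
    """
    half = marker_size / 2.0
    # Marker corners in the Marker's own coordinate system (assumed z = 0 plane,
    # ordered top-left, top-right, bottom-right, bottom-left).
    corners_3d = np.array([[-half,  half, 0.0],
                           [ half,  half, 0.0],
                           [ half, -half, 0.0],
                           [-half, -half, 0.0]])
    ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    # Pose of the Marker relative to the camera as a 4x4 homogeneous matrix.
    T_camera_marker = np.eye(4)
    T_camera_marker[:3, :3], _ = cv2.Rodrigues(rvec)
    T_camera_marker[:3, 3] = tvec.ravel()
    # Compose with the camera pose to express the Marker in the world frame.
    return T_world_camera @ T_camera_marker
```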
- 7. The mapping method based on laser point cloud and image data according to claim 1, wherein fusing the laser point cloud data and the image data of the same sampling time point to obtain point cloud map data for each sampling time point comprises: fusing the laser point cloud data and the image data of the same sampling time point through a laser-vision SLAM algorithm to obtain the point cloud map data of each sampling time point.
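Claim 7 names a laser-vision SLAM algorithm without detailing it. As one hedged illustration of a single ingredient of such fusion, the sketch below projects lidar points into the camera image to attach colour; this is a common technique, not necessarily the patent's method:

```python
import numpy as np

def colorize_point_cloud(points_lidar, image, K, T_camera_lidar):
    """Attach image colour to lidar points by projecting them into the camera.

    points_lidar: (N, 3) points in the lidar frame at one sampling time point.
    K: 3x3 camera intrinsic matrix; T_camera_lidar: 4x4 lidar-to-camera extrinsics.
    """
    # Transform points into the camera frame and keep those in front of the camera.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_camera_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = image[uv[valid, 1], uv[valid, 0]]  # row = y, column = x
    return pts_cam[valid], colors
```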
- 8. A mapping apparatus based on laser point cloud and image data, comprising: an acquisition unit, used for acquiring laser point cloud data acquired by a laser radar at each sampling time point in a sampling period, image data acquired by a camera at each sampling time point, and angular velocity data and acceleration data acquired by an inertial measurement unit at each sampling time point, wherein the laser radar, the camera and the inertial measurement unit are all installed on the same mobile acquisition equipment; a first fusion unit, used for fusing the laser point cloud data and the image data of the same sampling time point to obtain point cloud map data for each sampling time point; a determining unit, used for determining the pose of the camera at each sampling time point based on the pose of the laser radar at each sampling time point and the extrinsic parameter data of the laser radar and the camera; a detection and calculation unit, used for detecting Markers in the image data and calculating the pose of each Marker in the image data; a second fusion unit, used for fusing the poses of Markers with the same ID number across the image data by the Markley method to obtain the average pose of each ID number Marker; a computing and converting unit, used for calculating a coordinate system transformation matrix of the initial coordinate system of each ID number Marker relative to the coordinate system of a reference ID number Marker based on the average pose of each ID number Marker and the average pose of the reference ID number Marker, and converting the poses of all Markers in each image data into the coordinate system of the reference ID number Marker based on the coordinate system transformation matrix to obtain the converted pose of each Marker in each image data; a generating unit, used for generating, from the converted pose of each Marker in each image data, Marker map data that corresponds to each image data and contains the Marker poses; a conversion unit, used for performing coordinate system transformation on the point cloud map data of the sampling time point corresponding to each image data based on the coordinate system transformation matrix, to obtain coordinate-aligned point cloud map data for each sampling time point; and a third fusion unit, used for fusing the coordinate-aligned point cloud map data of each sampling time point with the corresponding Marker map data to obtain the final scene point cloud map data of each sampling time point.
- 9. An electronic device comprising a memory, a processor and a transceiver in communication with each other in sequence, wherein the memory is configured to store a computer program, the transceiver is configured to send and receive a message, and the processor is configured to read the computer program and execute the mapping method based on the laser point cloud and the image data according to any one of claims 1 to 7.
- 10. A computer program product comprising a computer program or instructions which, when executed by a computer, implement the mapping method based on laser point cloud and image data as claimed in any of claims 1 to 7.
Description
Mapping method, device, equipment and product based on laser point cloud and image data

Technical Field

The invention belongs to the technical field of multi-sensor data fusion, and particularly relates to a mapping method, device, equipment and product based on laser point cloud and image data.

Background

With the rapid development of robotics, environmental awareness has become an important indicator of a robot's level of intelligence. Currently, robot environmental awareness mainly depends on various sensors, such as lidar, cameras and inertial measurement units (IMU). Each sensor has advantages and disadvantages: a laser radar can provide high-precision three-dimensional spatial information, but it is costly and easily affected by illumination conditions; a camera is low-cost and can provide rich semantic information, but it is limited by texture features and illumination conditions, so positioning accuracy is difficult to guarantee; and an IMU can provide continuous pose estimation but is prone to accumulated errors. To overcome the limitations of single sensors, multi-sensor fusion techniques have been developed, such as laser-vision-inertial fusion methods and Marker-based mapping methods. However, existing laser-vision-inertial fusion methods cannot unify multi-source information such as the laser radar point cloud, the inertial measurement unit and the visual Markers into the same coordinate system, so the data of each sensor are processed in different coordinate systems, the complementarity of the multi-source information cannot be fully exploited, and the positioning and navigation accuracy and stability of the robot are affected. The Marker-based mapping method, in turn, completely lacks a fusion mechanism with the laser point cloud map: an accurate transformation matrix between the Marker map and the point cloud map cannot be calculated, a unified scene map containing both geometric information and semantic marks cannot be constructed, the positioning and navigation precision and stability of the robot are affected, and the application of the robot in complex environments is limited. Meanwhile, the Marker-based map construction method requires that a Marker appear in the same image as other Markers already in the map before it can be added to the map, and this co-occurrence constraint limits the flexibility of Marker map construction. Therefore, how to provide an effective solution that improves the flexibility of map construction and realizes high-precision map fusion has become a challenge in the prior art.

Disclosure of Invention

The invention aims to provide a mapping method, device, equipment and product based on laser point cloud and image data, so as to solve the above problems in the prior art.
In order to achieve the above purpose, the present invention adopts the following technical scheme. In a first aspect, the present invention provides a mapping method based on laser point cloud and image data, including: acquiring laser point cloud data acquired by a laser radar at each sampling time point in a sampling period, image data acquired by a camera at each sampling time point, and angular velocity data and acceleration data acquired by an inertial measurement unit at each sampling time point, wherein the laser radar, the camera and the inertial measurement unit are all installed on the same mobile acquisition equipment; fusing the laser point cloud data and the image data of the same sampling time point to obtain point cloud map data for each sampling time point; determining the pose of the camera at each sampling time point based on the pose of the laser radar at each sampling time point and the extrinsic parameter data of the laser radar and the camera; detecting Markers in each image data and calculating the pose of each Marker in each image data; fusing the poses of Markers with the same ID number across the image data by the Markley method to obtain the average pose of each ID number Marker; calculating a coordinate system transformation matrix of the initial coordinate system of each ID number Marker relative to the coordinate system of a reference ID number Marker based on the average pose of each ID number Marker and the average pose of the reference ID number Marker, and converting the poses of all Markers in each image data into the coordinate system of the reference ID number Marker based on the coordinate system transformation matrix to obtain the converted pose of each Marker in each image data; generating, from the converted pose of each Marker in each image data, Marker map data that corresponds to each image data and contains the Marker poses; based on the coordinate system transformation matrix, performing coordinate system transformation on the point cloud map data of the sampling time point corresponding to each image data to obtain coordinate-aligned point cloud map data for each sampling time point; and fusing the coordinate-aligned point cloud map data of each sampling time point with the corresponding Marker map data to obtain the final scene point cloud map data of each sampling time point.
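To close the loop on the final steps described above, a minimal sketch of aligning one sampling time point's point cloud map into the reference Marker's frame and bundling it with the Marker map; the data layout and names are assumptions for illustration:

```python
import numpy as np

def align_and_fuse(points_world, T_ref_world, marker_map):
    """Align one sampling time point's point cloud map with the Marker map.

    points_world: (N, 3) point cloud map data in the global frame.
    T_ref_world: 4x4 transform from the global frame into the reference
    Marker's frame (the inverse of the reference Marker's average pose).
    marker_map: list of (marker_id, 4x4 pose) entries already expressed in
    the reference Marker's frame.
    """
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    points_aligned = (T_ref_world @ pts_h.T).T[:, :3]
    # Both layers now share the reference Marker's coordinate system, so they
    # can be stored together as one scene point cloud map.
    return {"points": points_aligned, "markers": marker_map}
```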