CN-116863425-B - Multi-frame lane line point cloud rasterization method, device, equipment and medium


Abstract

The invention provides a multi-frame lane line point cloud rasterization method, device, equipment and medium. The method comprises: acquiring the visual point cloud at the current moment; removing redundant visual points by rasterization; and screening out the same number of visual point clouds through a fixed number of grids, thereby removing redundant visual points from different moments. According to the invention, lane line point clouds semantically segmented by a vehicle-mounted camera are spliced over multiple frames, historical visual information is accumulated and transformed to the current moment, and occupancy grids are used to simplify redundant point clouds. This reduces the weight of erroneous point clouds from any single frame, ensures a sufficient number of visual lane line points at every moment, and removes redundant visual points from different moments, so that positioning is more accurate and more robust.

Inventors

  • ZHAO WEI
  • CAO CHUAN
  • TANG ZHAOFENG

Assignees

  • Chongqing Changan Automobile Co., Ltd. (重庆长安汽车股份有限公司)

Dates

Publication Date
2026-05-12
Application Date
2023-07-10

Claims (14)

  1. A multi-frame lane line point cloud rasterization method, the method comprising: acquiring a visual point cloud at the current moment; converting the visual point clouds of historical moments to the current moment, namely calculating the transformation of the vehicle in a coordinate system from the wheel speed, heading angle, instantaneous angular velocity and instantaneous linear velocity of the vehicle body, and storing the vehicle pose corresponding to the visual point cloud of each moment, so that the point clouds of historical moments are projected to the current moment; grid down-sampling, namely rasterizing to remove redundant visual points from different moments, which comprises reading the visual point cloud coordinates, calculating grid index point coordinates, and building grids from those coordinates, wherein if grid index points with the same coordinates appear, only the visual point closest to the vehicle is retained and the remaining duplicates are removed, so that each grid retains exactly one visual point; and screening out the same number of visual point clouds through a fixed number of grids, namely counting the number of grids and, when the number of grids exceeds a given limit value, discarding visual points whose distance from the vehicle body exceeds a preset value, i.e., taking the nearer viewpoints by distance, wherein the preset value is not greater than the diameter corresponding to the maximum curvature of the target lane in the actual use scene; the given limit value means that a set number of grid points are retained in order of increasing distance between the visual points and the vehicle body, and the number of visual point clouds is then adjusted by setting the fixed number of grids, thereby adjusting the number of iterations of the matching process.
  2. The multi-frame lane line point cloud rasterization method of claim 1, wherein the grid index point coordinates are calculated from the visual point coordinates and a scale factor, where Grid_x, Grid_y and Grid_z respectively denote the grid index of a visual point on the x axis, the y axis and the z axis, and scale denotes a scale factor of the grid size.
  3. The multi-frame lane line point cloud rasterization method of claim 2, further comprising calculating the distance from a grid index point to the coordinate system origin, i.e., the vehicle body center: distance = Grid_x * Grid_x + Grid_y * Grid_y + Grid_z * Grid_z.
  4. The multi-frame lane line point cloud rasterization method of claim 1, wherein obtaining the visual point cloud at the current moment means taking a curve segment of a preset distance area in front of the vehicle body at the current moment and discretizing the curve segment into a point cloud, the preset distance area being set according to the capability of the adopted visual perception algorithm.
  5. The multi-frame lane line point cloud rasterization method of claim 4, wherein the projection relation for projecting the historical point cloud to the current moment is: P_t^k = (T_t)^(-1) * T_(t-1) * P_(t-1)^k, wherein P_t^k represents the point cloud coordinates of the kth visual point at moment t, P_(t-1)^k represents the point cloud coordinates at moment t-1, and T_t is the transformation matrix from the vehicle body coordinate system to the world coordinate system at moment t.
  6. A multi-frame lane line point cloud rasterization device, characterized by comprising: an acquisition module for acquiring the visual point cloud at the current moment; a conversion module for converting the visual point clouds of historical moments to the current moment, specifically calculating the transformation of the vehicle in a coordinate system from the wheel speed, heading angle, instantaneous angular velocity and instantaneous linear velocity of the vehicle body, and storing the vehicle body pose corresponding to the visual point cloud of each moment, so that the point clouds of historical moments are projected to the current moment; a rasterization module for rasterizing to remove redundant visual points, which comprises reading the visual point cloud coordinates, calculating grid index point coordinates, and building grids from those coordinates; and a screening module for screening out the same number of visual point clouds through a fixed number of grids and removing redundant visual points from different moments, specifically counting the number of grids and, when the number of grids exceeds a given limit value, discarding visual points whose distance from the vehicle body exceeds a preset value, i.e., taking the nearer viewpoints by distance, wherein the preset value is not greater than the diameter corresponding to the maximum curvature of the target lane in the actual use scene, and the given limit value means that a preset number of grid points are retained in order of increasing distance between the visual points and the vehicle body, the number of visual point clouds then being adjusted by setting the fixed number of grids, thereby adjusting the number of iterations of the matching process.
  7. The multi-frame lane line point cloud rasterization device of claim 6, wherein the given limit value means that a set number of grid points are retained in order of increasing distance between the visual points and the vehicle body, and the number of visual point clouds is adjusted by setting the fixed number of grids, thereby adjusting the number of iterations of the matching process.
  8. The multi-frame lane line point cloud rasterization device of claim 7, wherein the grid index point coordinates are calculated from the visual point coordinates and a scale factor, where Grid_x, Grid_y and Grid_z respectively denote the grid index of a visual point on the x axis, the y axis and the z axis, and scale denotes a scale factor of the grid size.
  9. The multi-frame lane line point cloud rasterization device of claim 8, further comprising calculating the distance from a grid index point to the coordinate system origin, i.e., the vehicle body center: distance = Grid_x * Grid_x + Grid_y * Grid_y + Grid_z * Grid_z.
  10. The multi-frame lane line point cloud rasterization device according to any one of claims 6 to 9, wherein the acquisition module obtains the visual point cloud at the current moment by taking a curve segment of a preset distance area in front of the vehicle body at the current moment and discretizing the curve segment into a point cloud, the preset distance area being set according to the capability of the adopted visual perception algorithm.
  11. The multi-frame lane line point cloud rasterization device according to any one of claims 6 to 9, wherein the conversion module calculates the transformation of the vehicle in a coordinate system using a vehicle-mounted combined inertial navigation system, and stores the vehicle body pose corresponding to the visual point cloud of each moment, so as to project the point clouds of historical moments to the current moment.
  12. The multi-frame lane line point cloud rasterization device of claim 11, wherein the projection relation for projecting the historical point cloud to the current moment is: P_t^k = (T_t)^(-1) * T_(t-1) * P_(t-1)^k, wherein P_t^k represents the point cloud coordinates of the kth visual point at moment t, P_(t-1)^k represents the point cloud coordinates at moment t-1, and T_t is the transformation matrix from the vehicle body coordinate system to the world coordinate system at moment t.
  13. An electronic device comprising a processor and a memory storing a program, wherein the program comprises instructions that, when executed by the processor, cause the processor to perform the multi-frame lane line point cloud rasterization method of any one of claims 1 to 5.
  14. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are for causing a computer to perform the multi-frame lane line point cloud rasterization method of any one of claims 1 to 5.
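The grid down-sampling of claims 1 to 3 can be sketched as follows. This is a minimal illustration, not the patented implementation: the cell-index rule `floor(coord * scale)` stands in for the formula omitted from claim 2, and the `scale` and `max_cells` values are hypothetical placeholders for the patent's scale factor and given limit value. Squared distance is used for the nearest-to-vehicle tie-break, matching the distance expression in claim 3.

```python
import math

def grid_downsample(points, scale=2.0, max_cells=200):
    """Occupancy-grid down-sampling of lane-line points.

    points:    iterable of (x, y, z) in the vehicle body frame.
    scale:     assumed scale factor; cell index = floor(coord * scale).
    max_cells: assumed "given limit value" capping the occupied cells kept.
    """
    cells = {}  # cell index -> (squared distance to body center, point)
    for p in points:
        idx = tuple(math.floor(c * scale) for c in p)
        d = p[0] ** 2 + p[1] ** 2 + p[2] ** 2
        # Keep only the visual point closest to the vehicle in each cell.
        if idx not in cells or d < cells[idx][0]:
            cells[idx] = (d, p)
    # If too many cells are occupied, keep those nearest the vehicle body.
    kept = sorted(cells.values(), key=lambda t: t[0])[:max_cells]
    return [p for _, p in kept]
```

With `scale=2.0`, the points (0.1, 0, 0) and (0.2, 0, 0) fall into the same cell, so only the nearer one survives, while (5, 0, 0) occupies its own cell; this is exactly the one-point-per-grid behavior claim 1 describes.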

Description

Multi-frame lane line point cloud rasterization method, device, equipment and medium

Technical Field

The invention belongs to the field of automobile automatic driving, and particularly relates to high-precision positioning for automatic driving.

Background

Lane line information is among the most important information for realizing high-precision positioning in automatic driving. By matching the lane lines recognized by a vehicle-mounted camera against the lane lines of a high-precision map, high-precision positioning of the vehicle can be achieved. A vehicle-mounted camera has the advantages of strong real-time perception and high cost-effectiveness, but the drawback of being easily affected by factors such as illumination, weather and unclear road markings. Therefore, when lane lines identified by the vehicle-mounted camera are processed with conventional lane line recognition algorithms, false and missed detections easily occur under complex road conditions or severe weather, so that the computed vehicle position deviates from the actual position.

In view of these shortcomings, researchers in the field have proposed improved solutions, for example a lane line fusion method based on an intelligent camera and high-precision map positioning reported in the literature. There, the vehicle-mounted camera obtains lane line information through semantic segmentation and fits it into a cubic curve; the curve is discretized into a point cloud at a certain interval, and the point cloud is then matched against the corresponding map point cloud by search so as to correct the current vehicle position. That method uses only the lane line information acquired by the camera at the current moment; when a false or missed detection occurs, the positioning deviates considerably.
A point cloud rasterization method has also been proposed, which builds a grid from the point cloud coordinates and computes the grid center point coordinates; all points within each grid cell are approximated by the center point of that cell. Approximating every point in a cell by its center reduces the precision of the coordinate points to some extent. Thus, although progress has been made, existing semantic-segmentation-based recognition algorithms for vision cameras remain strongly affected by weather, illumination and the like and cannot achieve one hundred percent accurate recognition, so positioning based on a single lane line recognition result is weakly robust, the quantity, quality and precision of the processed visual point cloud cannot all be satisfied at once, and there is still considerable room for improvement.

Disclosure of Invention

Aiming at the problems in the prior art, and in order to improve the accuracy and robustness of automobile positioning, the invention provides a multi-frame lane line point cloud rasterization method, device, equipment and medium, which splice the lane line point clouds semantically segmented by a vehicle-mounted camera over multiple frames, accumulate historical visual information and transform it to the current moment, and use occupancy grids to simplify redundant point clouds, so that the point clouds can be used to position the vehicle against a high-precision map.
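The accumulation step described above, transforming historical visual information to the current moment, follows the projection relation of claim 5, p_t = T_t⁻¹ · T_(t-1) · p_(t-1). A minimal sketch, assuming a planar SE(2) pose (x, y, yaw) integrated from odometry rather than the patent's full body-to-world transform; function names and the 2-D simplification are illustrative, not from the source:

```python
import numpy as np

def se2_matrix(x, y, yaw):
    """Homogeneous 2-D pose (vehicle body -> world), e.g. from odometry."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def project_to_current(points_prev, T_prev, T_curr):
    """Re-express points given in the body frame at t-1 in the body frame at t:
    p_t = inv(T_t) @ T_(t-1) @ p_(t-1)."""
    M = np.linalg.inv(T_curr) @ T_prev
    # Lift (x, y) points to homogeneous coordinates, transform, drop the 1s.
    pts = np.hstack([points_prev, np.ones((len(points_prev), 1))])
    return (M @ pts.T).T[:, :2]
```

For example, if the vehicle drives 1 m forward along x between frames, a lane-line point seen 2 m ahead at t-1 should appear 1 m ahead in the current body frame.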
The technical scheme of the invention is as follows. The invention provides a multi-frame lane line point cloud rasterization method comprising: first obtaining the visual point cloud at the current moment; then converting the visual point clouds of historical moments to the current moment; and finally rasterizing to remove redundant visual points, screening out the same number of visual point clouds through a fixed number of grids and thereby removing redundant visual points from different moments.

Further preferably, rasterizing to remove redundant visual points comprises reading the visual point cloud coordinates, calculating grid index point coordinates, and building a grid from those coordinates; if grid index points with the same coordinates appear, this indicates that several visual points fall within the same grid cell, so only the visual point closest to the vehicle is retained and the remaining duplicates are removed, leaving exactly one visual point per cell.

Further preferably, screening out the same number of visual point clouds through a fixed number of grids means counting the number of grid cells and, when that number exceeds a given limit value, discarding visual points whose distance from the vehicle body exceeds a preset value, namely taking the nearer viewpoints by distance.