
CN-116311127-B - Road boundary detection method, computer equipment, readable storage medium and motor vehicle


Abstract

The invention discloses a road boundary detection method, a computer device, a readable storage medium and a motor vehicle, relating to the technical field of automatic driving. A vehicle running automatic driving recognizes and detects the boundary of a road through the road boundary detection method: the original point cloud is classified according to obstacle categories; the point cloud carrying the road-boundary labels is extracted and divided into a left point cloud and a right point cloud; the left and right point clouds are filtered to obtain left and right road-boundary candidate point clouds; and the candidate point clouds are fitted to obtain the road boundaries. The road boundary detection method provided by the invention achieves high detection precision and a low false detection rate.

Inventors

  • Hua Zhi
  • Han Jineng
  • Hu Laifeng

Assignees

  • Zhejiang Leapmotor Technology Co., Ltd. (浙江零跑科技股份有限公司)

Dates

Publication Date
2026-05-08
Application Date
2023-03-10

Claims (11)

  1. A road boundary detection method by which a vehicle running automatic driving recognizes and detects the boundary of a road, characterized by comprising the steps of: preprocessing, namely preprocessing the original point cloud to form a two-dimensional array; semantic segmentation, namely classifying the original point cloud according to obstacle categories to obtain different semantic labels, and extracting the point cloud carrying the road-boundary labels; dividing, namely dividing the extracted point cloud into a left point cloud and a right point cloud; filtering, namely filtering the left point cloud and the right point cloud to obtain a left road-boundary candidate point cloud and a right road-boundary candidate point cloud; and fitting, namely fitting the left and right road-boundary candidate point clouds to obtain the road boundary; wherein the filtering step comprises the following sub-steps: clustering the left point cloud and the right point cloud to obtain a left clustered point cloud and a right clustered point cloud, wherein the i-th point p_i and the j-th point p_j are assigned to the same cluster when the distance between them is within a threshold d_thr; respectively calculating the centroid Y value of the cluster, in the left clustered point cloud and in the right clustered point cloud, that is nearest to the drivable road surface; screening the left point cloud and the right point cloud against these centroid Y values to obtain a left road-boundary region point cloud and a right road-boundary region point cloud, wherein Y_right is the right centroid Y value, Y_left is the left centroid Y value, P_r_region is the right road-boundary region point cloud, and P_l_region is the left road-boundary region point cloud; extracting left road-boundary seed points and right road-boundary seed points from the left and right road-boundary region point clouds through distance filtering; and respectively performing curvature filtering on the left and right road-boundary seed points to obtain the left road-boundary candidate point cloud and the right road-boundary candidate point cloud.
  2. The road boundary detection method according to claim 1, wherein the preprocessing step comprises the sub-steps of: removing, from the original point cloud, any point whose X, Y or Z coordinate channel contains a NaN invalid value; and converting the remaining point cloud data into an N × 4 two-dimensional array, wherein each row R_i = (x, y, z, intensity) comprises the coordinates and reflectivity of one point, and the number of rows is the number of points in each point cloud frame.
  3. The road boundary detection method according to claim 2, wherein the semantic segmentation step comprises the sub-steps of: segmenting the two-dimensional array through a convolutional neural network and assigning a semantic label to each point in the point cloud; classifying the labelled point cloud data through the semantic segmentation network; and extracting the point cloud corresponding to the semantic label of the road boundary.
  4. The road boundary detection method according to claim 3, wherein the dividing step comprises the sub-steps of: dividing the extracted point cloud into a plurality of sector grids along the circumferential direction, the central angle of each sector grid being 1 degree; counting the number of points in each sector grid, storing the angular positions of the sector grids whose count is 0, marking those sector grids as 1 and the remaining sector grids as 0; performing median filtering on all the marks to obtain sector-grid arrays, each storing a mark value and the angular position of its sector grid; sorting the angular positions of all sector-grid arrays marked 1 and selecting the median value as the dividing angle of the road, the dividing angle being 0 if no sector-grid array is marked 1; and dividing the point cloud into the left point cloud P_left and the right point cloud P_right about the dividing angle α, wherein pt is a point in a sector grid and pt.x and pt.y are the x and y coordinates of that point.
  5. The road boundary detection method according to claim 1, wherein extracting the left and right road-boundary seed points from the left and right road-boundary region point clouds comprises the steps of: rasterizing the left road-boundary region point cloud and the right road-boundary region point cloud along the x-axis direction to obtain a left rasterized point cloud and a right rasterized point cloud; and screening within each grid cell according to the Y-axis coordinate of the point cloud, selecting max(pt.y) in each right grid cell as a right road-boundary seed point and min(pt.y) in each left grid cell as a left road-boundary seed point.
  6. The road boundary detection method according to claim 1, wherein in the fitting step, the furthest distance along the x axis (the vehicle advancing direction) of each of the left and right road-boundary candidate point clouds is calculated; if the furthest distance is smaller than a distance threshold, fitting is performed with a first-order polynomial, and if it is larger than the distance threshold, fitting is performed with a second-order polynomial.
  7. The method of claim 6, wherein the first-order polynomial fitting comprises the steps of: randomly selecting two points from the left or right road-boundary candidate point cloud and establishing a linear model y = a_0·x + b_0; forming a matrix X_1 from the pt.x coordinates and a matrix Y_1 from the pt.y coordinates of the selected points; calculating the model parameter matrix M_1 = [a_0, b_0] according to the formula M_1 = X_1inv · Y_1, wherein X_1inv is the inverse matrix of X_1; counting the number of inliers of the linear model using the model parameter matrix M_1 and a residual threshold T_2; iterating the linear model and taking the model with the largest number of inliers as the optimal linear model M_1best; and obtaining the road boundary from the optimal linear model M_1best.
  8. The method of claim 6, wherein the second-order polynomial fitting comprises the steps of: randomly selecting three points from the left or right road-boundary candidate point cloud and establishing a parabolic model y = a·x² + b·x + c; forming a matrix X_2 from the pt.x coordinates and a matrix Y_2 from the pt.y coordinates of the selected points; calculating the model parameter matrix M_2 = [a, b, c] according to the formula M_2 = X_2inv · Y_2, wherein X_2inv is the inverse matrix of X_2; counting the number of inliers of the parabolic model using the model parameter matrix M_2 and the residual threshold T_2; iterating the parabolic model and taking the model with the largest number of inliers as the optimal parabolic model M_2best; and obtaining the road boundary from the optimal parabolic model M_2best.
  9. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the road boundary detection method of any one of claims 1 to 8 when executing the computer program.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the road boundary detection method of any one of claims 1 to 8.
  11. A motor vehicle having an automatic driving function, wherein the motor vehicle, when running the automatic driving function, detects a road boundary by the road boundary detection method according to any one of claims 1 to 8; or the motor vehicle has a computer device according to claim 9; or the motor vehicle has a computer-readable storage medium according to claim 10 which, when executed by a processor, implements the road boundary detection method according to any one of claims 1 to 8.
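The preprocessing of claim 2 — dropping points with a NaN in any coordinate channel and packing the rest into an N × 4 (x, y, z, intensity) array — can be sketched as follows. This is an illustrative reconstruction, not the patent's code; the function name and the NumPy representation are assumptions.

```python
import numpy as np

def preprocess(points_xyzi: np.ndarray) -> np.ndarray:
    """Drop points with NaN in any of the X/Y/Z channels and return an
    N x 4 array whose rows are (x, y, z, intensity), one row per point."""
    pts = np.asarray(points_xyzi, dtype=float).reshape(-1, 4)
    valid = ~np.isnan(pts[:, :3]).any(axis=1)  # NaN in x, y or z -> invalid point
    return pts[valid]
```

The row count of the returned array is then the number of valid points in the frame, matching the claim's N × 4 layout.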
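Claim 4 divides the cloud about a data-driven dividing angle α rather than the fixed x axis. The patent's exact split formula is not reproduced in the source text; the sign test below, which checks on which side of the dividing ray through the origin a point falls, is a plausible stand-in, not the claimed formula.

```python
import numpy as np

def split_left_right(pts: np.ndarray, alpha: float):
    """Split (x, y) points about a dividing ray at angle alpha (radians).
    Sign of y*cos(alpha) - x*sin(alpha) decides left vs right; this test
    is an assumption standing in for the patent's unreproduced formula."""
    side = pts[:, 1] * np.cos(alpha) - pts[:, 0] * np.sin(alpha)
    return pts[side > 0], pts[side <= 0]
```

With α = 0 this degenerates to the conventional split by the sign of y, which is the behavior claim 4 prescribes when no empty sector grid is found.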
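The clustering sub-step of claim 1 groups points whose pairwise distance is within the threshold d_thr. A minimal single-linkage sketch of that criterion (function and variable names are assumptions, not from the patent):

```python
from collections import deque

import numpy as np

def cluster_points(points: np.ndarray, d_thr: float):
    """Single-linkage Euclidean clustering: points p_i and p_j end up in
    the same cluster when ||p_i - p_j|| <= d_thr, directly or transitively.
    Returns a list of per-cluster point arrays."""
    n = len(points)
    labels = np.full(n, -1)            # -1 means "not yet clustered"
    clusters = []
    for seed in range(n):
        if labels[seed] != -1:
            continue
        cid = len(clusters)
        labels[seed] = cid
        members = [seed]
        queue = deque([seed])
        while queue:                   # breadth-first growth of the cluster
            i = queue.popleft()
            d = np.linalg.norm(points - points[i], axis=1)
            for j in np.flatnonzero((d <= d_thr) & (labels == -1)):
                labels[j] = cid
                members.append(j)
                queue.append(j)
        clusters.append(points[members])
    return clusters
```

The claim then takes, on each side, the cluster nearest the drivable surface and uses its centroid Y value for the subsequent screening.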
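The seed-point extraction of claim 5 rasterizes each boundary-region cloud along x and keeps one representative per cell: max(pt.y) on the right side (the point nearest the road center, where y is negative on the right) and min(pt.y) on the left. A hedged sketch with an assumed cell size:

```python
import numpy as np

def extract_seed_points(region_pts: np.ndarray, side: str, cell: float = 1.0) -> np.ndarray:
    """Rasterize (x, y) boundary-region points into cells of width `cell`
    along x; per cell keep max(pt.y) for side='right', min(pt.y) for 'left'."""
    bins = np.floor(region_pts[:, 0] / cell).astype(int)
    seeds = []
    for b in np.unique(bins):
        in_cell = region_pts[bins == b]
        idx = in_cell[:, 1].argmax() if side == "right" else in_cell[:, 1].argmin()
        seeds.append(in_cell[idx])
    return np.array(seeds)
```

The 1 m cell width is an illustrative assumption; the patent does not state the grid resolution in the excerpt shown.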
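Claims 6–8 select the model order by the furthest x extent of the candidate cloud and fit by RANSAC: sample a minimal point set, solve the parameters via the (pseudo-)inverse of the design matrix, count inliers against a residual threshold, and keep the model with the most inliers. A sketch of that loop; the threshold values and names are illustrative, not the patent's.

```python
import numpy as np

def ransac_poly_fit(pts, dist_thr=30.0, resid_thr=0.2, iters=200, rng=None):
    """RANSAC fit of y = f(x) over (x, y) points: first-order model when
    the furthest x extent is below dist_thr, second-order otherwise.
    Returns (coefficients, degree), coefficients highest power first."""
    rng = np.random.default_rng(rng)
    x, y = pts[:, 0], pts[:, 1]
    degree = 1 if x.max() < dist_thr else 2
    n_sample = degree + 1                      # 2 points for a line, 3 for a parabola
    best_model, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(pts), size=n_sample, replace=False)
        X = np.vander(x[idx], degree + 1)      # design matrix, columns [x^d ... x 1]
        M = np.linalg.pinv(X) @ y[idx]         # model parameters M = X_inv * Y
        inliers = np.abs(np.polyval(M, x) - y) < resid_thr
        if inliers.sum() > best_inliers:       # keep the model with most inliers
            best_inliers, best_model = inliers.sum(), M
    return best_model, degree
```

Switching between a line and a parabola by range, as claim 6 does, avoids over-fitting short or heavily occluded boundary segments with a single quadratic model.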

Description

Road boundary detection method, computer equipment, readable storage medium and motor vehicle

Technical Field

The invention relates to the technical field of automatic driving, in particular to a road boundary detection method, computer equipment, a readable storage medium and a motor vehicle.

Background

In the field of automatic driving, sensing schemes based on 3D lidar can realize high-precision detection of the environment scene, detecting both dynamic and static objects. Road boundaries are a necessary item of information for an automatically driven vehicle and are important for determining its drivable area and its driving safety. During the running of an automatic driving vehicle, road boundary information often needs to be relied on: it can both generate a drivable area and assist in vehicle positioning. Under normal driving conditions the road boundary is insurmountable for the vehicle and provides a path-planning interval for vehicle positioning and navigation, so accurate road boundary detection is very significant for the automatic and assisted driving of vehicles.

Existing lidar-based road boundary detection methods detect structured road boundaries well, but struggle with unstructured ones, such as vegetation-covered road edges, expressway scenes, and fences in the middle of the road. Meanwhile, existing methods use the X coordinate axis of the lidar as the dividing line between the left and right sides of the road; on curved roads this can produce wrong divisions and reduce accuracy. In addition, existing road boundary fitting algorithms extract road boundary points through a single quadratic parabolic fitting algorithm, and a single curve fit has drawbacks, especially in heavily occluded scenes.

Disclosure of Invention

The present invention aims to solve, to a certain extent, one of the technical problems in the related art. The road boundary detection method provided by the invention therefore achieves high detection precision and a low false detection rate. To this end, the invention adopts the following technical scheme: a road boundary detection method by which a vehicle running automatic driving recognizes and detects the boundary of a road, comprising the steps of: preprocessing the original point cloud to form a two-dimensional array; semantic segmentation, namely classifying the original point cloud according to obstacle categories to obtain different semantic labels, and extracting the point cloud carrying the road-boundary labels; dividing the extracted point cloud into a left point cloud and a right point cloud; filtering the left and right point clouds to obtain left and right road-boundary candidate point clouds; and fitting the left and right road-boundary candidate point clouds to obtain the road boundary.

According to this technical scheme, which is based on lidar point cloud semantics, the method does not depend on the beam (scan-line) information of the lidar and, compared with a camera, can detect the three-dimensional structure of scene objects well. Different categories in the lidar point cloud are identified through a semantic segmentation model trained by deep learning; the road boundary information is then extracted from the category point cloud containing the road boundary, and interference is filtered out rapidly. The method has good anti-interference capability, distinguishes objects occluding the road boundary well, effectively solves the problem of the road boundary being occluded by vehicles and pedestrians, and guarantees the accuracy and stability of road boundary detection. Even when vehicles seriously occlude the road boundary, it can still be distinguished, and the method can be applied to both mechanical and solid-state lidars.

Optionally, the preprocessing step includes the following sub-steps: removing, from the original point cloud, any point whose X, Y or Z coordinate channel contains a NaN invalid value; and converting the remaining point cloud data into an N × 4 two-dimensional array, wherein each row R_i = (x, y, z, intensity) comprises the coordinates and reflectivity of one point, and the number of rows is the number of points in each point cloud frame. In the prior art, although preprocessing of point cloud data also exists, NaN value points in the