CN-121982521-A - Real-time point cloud fusion detection method, system, equipment and medium for black vehicle
Abstract
The invention discloses a real-time point cloud fusion detection method, system, equipment and medium for black vehicles. The method comprises the following steps: step 1, obtaining point cloud data through a laser radar; step 2, scanning the point cloud data, marking continuous point cloud clusters with a reflection intensity of 0 with a black-vehicle identifier, and storing the identifier in the point cloud attribute field; step 3, performing a clustering operation with DBSCAN on the point cloud data set carrying the black-vehicle identifier, and performing imaging processing on the clustering result with PointPillars to obtain a three-dimensional image model, where, according to the imaging result, either a black-vehicle detection frame image is forcibly output, or the confidence is raised as a black-vehicle compensation confidence and the model result is preferentially adopted, and, when neither case applies, the two operation results are weighted and fused; and step 4, fusing the results of step 3 into a complete three-dimensional frame image. The problem of missed detection is thereby solved, and the detection rate and detection efficiency are improved.
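The zero-reflectance marking of step 2 can be sketched as a run-length scan over the point stream. The following is a minimal illustrative sketch, not the patent's implementation: the `(x, y, z, intensity)` tuple layout and the helper name are assumptions, and the 20-point minimum run length is taken from claim 3.

```python
def mark_black_vehicle_points(points, min_run=20):
    """Tag runs of at least min_run consecutive points whose reflection
    intensity is 0 as black-vehicle candidates (hypothetical helper; the
    patent stores the tag in a point-cloud attribute field).

    points -- list of (x, y, z, intensity) tuples in scan order
    Returns a list of booleans, one flag per input point.
    """
    flags = [False] * len(points)
    run_start = None
    # A non-zero-intensity sentinel closes any run at the end of the scan.
    for i, (x, y, z, intensity) in enumerate(points + [(0.0, 0.0, 0.0, 1.0)]):
        if intensity == 0:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= min_run:
                for j in range(run_start, i):
                    flags[j] = True
            run_start = None
    return flags
```

Short zero-intensity runs (below the claim-3 threshold) are treated as noise and left unmarked, so isolated dropouts do not trigger the black-vehicle compensation path.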
Inventors
- GUO XINYANG
- WU ENGUANG
- WANG HONGHUI
Assignees
- 城市之光(深圳)无人驾驶有限公司
Dates
- Publication Date
- 20260505
- Application Date
- 20251231
Claims (10)
- 1. A real-time point cloud fusion detection method for black vehicles, characterized by comprising the following steps: step 1, obtaining point cloud data through a laser radar; step 2, scanning the point cloud data, marking continuous point cloud clusters with a reflection intensity of 0 with a black-vehicle identifier, and storing the identifier in the point cloud attribute field; step 3, bilinear compensation, wherein DBSCAN is used to perform a clustering operation on the point cloud data set carrying the black-vehicle identifier, and PointPillars is used to perform imaging processing on the clustering result to obtain a three-dimensional image model; the imaging processing specifically comprises: when the void ratio of a clustered target is greater than 40%, PointPillars forcibly outputs a black-vehicle detection frame image for the clustering result, replacing the broken cluster frame image produced by the clustering operation; or, when the point cloud density of a clustered target is less than 30 points/m³, the PointPillars confidence is raised to serve as the black-vehicle compensation confidence, and the data-trained model result is preferentially adopted; when a clustered target falls into neither of the two cases, the PointPillars detection operation is not adjusted, and the clustering result and the PointPillars detection result are weighted and fused; and step 4, fusing the results of step 3 into a complete three-dimensional frame image.
- 2. The method according to claim 1, further comprising training black-vehicle data, specifically: collecting a black-vehicle data set from a public data set such as the KITTI data set, and setting the reflection intensity channel of the black-vehicle data set to 0, wherein the trained black-vehicle data is used for generating the black-vehicle identifier in step 2 and for the model result adopted by PointPillars in step 3.
- 3. The real-time point cloud fusion detection method for black vehicles according to claim 1, wherein in step 2 a continuous point cloud cluster comprises more than 20 consecutive points.
- 4. The real-time point cloud fusion detection method for black vehicles according to claim 1, wherein the DBSCAN is configured for parallelized deployment on CUDA Cores and the PointPillars is configured for deployment on Tensor Cores.
- 5. The real-time point cloud fusion detection method for black vehicles according to claim 4, wherein FP16 precision is used to optimize DBSCAN so that its epsilon kernel function is adaptive (epsilon multiplied by 1.8 in sparse areas), FP16 precision is used to optimize PointPillars, and the black-vehicle compensation confidence is set to 1.5 times the initial confidence.
- 6. The real-time point cloud fusion detection method for black vehicles according to claim 4, wherein the neighborhood radius epsilon of the DBSCAN clustering operation is set to 0.6 m and the minimum point number minPts is set to 10.
- 7. A real-time point cloud fusion detection system for black vehicles, for implementing the real-time point cloud fusion detection method according to any one of claims 1 to 6, comprising: a black-vehicle identification module for marking continuous point cloud clusters with a reflection intensity of 0 with a black-vehicle identifier according to the scanning result of the point cloud data, and storing the identifier in the point cloud attribute field; a DBSCAN clustering module for performing a clustering operation on the point cloud data set carrying the black-vehicle identifier; and a PointPillars detection module for performing imaging processing on the clustering result to obtain a three-dimensional image model, wherein: when the void ratio of a clustered target is greater than 40%, PointPillars forcibly outputs a black-vehicle detection frame image for the clustering result, replacing the broken cluster frame image produced by the clustering operation; or, when the point cloud density of a clustered target is less than 30 points/m³, the PointPillars confidence is raised to serve as the black-vehicle compensation confidence and the model result is preferentially adopted; when a clustered target falls into neither of the two cases, the PointPillars detection operation is not adjusted, and the clustering result and the PointPillars detection result are weighted and fused; the operation results are then fused into a complete three-dimensional frame image.
- 8. The real-time point cloud fusion detection system for black vehicles according to claim 7, further comprising a black-vehicle data training module for training black-vehicle data by collecting a black-vehicle data set from a public data set such as the KITTI data set and setting the reflection intensity channel of the black-vehicle data set to 0, the trained black-vehicle data being used for generating the black-vehicle identifier and for the model result adopted by PointPillars.
- 9. An electronic device comprising a memory storing executable program code and a processor coupled to the memory, the processor invoking the executable program code stored in the memory to perform the real-time point cloud fusion detection method for black vehicles according to any one of claims 1 to 7.
- 10. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute the real-time point cloud fusion detection method for black vehicles according to any one of claims 1 to 7.
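The bilinear-compensation decision of claims 1 and 5 reduces to a three-way branch on the clustered target's void ratio and point density. The sketch below is illustrative only: the function name, the box tuple layout, the confidence clipping at 1.0, and the equal fusion weight are assumptions not stated in the patent; the 40% void-ratio and 30 points/m³ thresholds and the 1.5x confidence boost come from the claims.

```python
def compensate(void_ratio, point_density, cluster_box, model_box,
               model_conf, w_cluster=0.5):
    """Bilinear-compensation decision (hypothetical helper).

    void_ratio    -- fraction of empty space in the clustered target
    point_density -- points per cubic metre inside the cluster
    cluster_box / model_box -- (cx, cy, cz, l, w, h) detection boxes
    model_conf    -- initial PointPillars confidence
    Returns (box, confidence).
    """
    if void_ratio > 0.40:
        # Severe breakage: force the PointPillars output, discarding
        # the broken cluster frame (claim 1, first branch).
        return model_box, model_conf
    if point_density < 30:
        # Sparse cluster: boost confidence 1.5x as the black-vehicle
        # compensation confidence (claim 5) and prefer the model result.
        # Clipping at 1.0 is an assumption, not from the patent.
        return model_box, min(1.0, model_conf * 1.5)
    # Normal case: weighted fusion of cluster and model boxes
    # (equal weighting here is illustrative).
    fused = tuple(w_cluster * c + (1 - w_cluster) * m
                  for c, m in zip(cluster_box, model_box))
    return fused, model_conf
```

Under this reading, the detector never drops a black-vehicle candidate outright: a badly broken cluster is replaced wholesale, a sparse one defers to the trained model, and only well-populated clusters reach the weighted-fusion path.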
Description
Real-time point cloud fusion detection method, system, equipment and medium for black vehicle
Technical Field
The invention relates to the technical field of automatic-driving laser radar point cloud processing, and in particular to a real-time point cloud fusion detection method, system, equipment and medium for black vehicles.
Background
With the rapid development of autonomous-driving technology, lidar point cloud processing has become a core part of autonomous-driving perception systems, which identify and locate surrounding objects primarily by analyzing the three-dimensional point cloud data generated from reflected laser beams. A typical lidar point cloud processing flow comprises preprocessing (such as denoising and filtering), feature extraction (such as edges and corner points), and final target detection and classification, and a parallel processing architecture combining a DBSCAN clustering algorithm with a PointPillars deep learning model is constructed in this flow. DBSCAN is a density-based algorithm that can find clusters of arbitrary shape by defining a density threshold and can effectively identify noise points; PointPillars is a 3D point cloud target detection model that converts unordered lidar point cloud data into ordered "pseudo-images" for processing, which improves detection speed; target detection is attempted through cooperative computation of DBSCAN clustering and the PointPillars model. However, this prior art still has the following problems:
1. In the preprocessing stage, when an intensity threshold is set to filter out environmental noise points, effective point clouds are mistakenly filtered out because the reflection intensity of a black vehicle is approximately 0, causing targets to disappear; the insufficient density of the remaining point clouds then produces severe breakage (void ratio greater than 60%) or missed black vehicles in the DBSCAN clustering operation.
2. The deep learning model has inherent defects: the point-cloud-based PointPillars detection model requires massive annotated training data, and the annotation cost of black-vehicle samples is high; FP32 inference takes longer than 80 ms on the Orin platform and cannot meet real-time requirements; and the detection rate for obstacles of unknown shape, such as spilled cargo and shrubs, is less than 20%, leaving a static-obstacle blind area.
3. The fusion scheme combining DBSCAN clustering with a deep learning model still has problems: the accumulated delay exceeds 120 ms under serial processing, so the serial architecture suffers severe latency; a dedicated processing chain for reflection intensity approximately equal to 0 is lacking, so black vehicles are not compensated; and a dedicated acceleration chip is required, making vehicle-mounted hardware deployment costly.
Disclosure of Invention
In order to overcome the defects of the prior art, one purpose of the invention is to provide a real-time point cloud fusion detection method for black vehicles.
The real-time point cloud fusion detection method for black vehicles disclosed by the invention is realized by the following technical scheme, the method comprising: step 1, obtaining point cloud data through a laser radar; step 2, scanning the point cloud data, marking continuous point cloud clusters with a reflection intensity of 0 with a black-vehicle identifier, and storing the identifier in the point cloud attribute field; step 3, bilinear compensation, wherein DBSCAN is used to perform a clustering operation on the point cloud data set carrying the black-vehicle identifier, and PointPillars is used to perform imaging processing on the clustering result to obtain a three-dimensional image model; the imaging processing specifically comprises: when the void ratio of a clustered target is greater than 40%, PointPillars forcibly outputs a black-vehicle detection frame image for the clustering result, replacing the broken cluster frame image produced by the clustering operation; or, when the point cloud density of a clustered target is less than 30 points/m³, the PointPillars confidence is raised to serve as the black-vehicle compensation confidence, and the data-trained model result is preferentially adopted; when a clustered target falls into neither of the two cases, the PointPillars detection operation is not adjusted, and the clustering result and the PointPillars detection result are weighted and fused; and step 4, fusing the results of step 3 into a complete three-dimensional frame image. Further, the real-time point cloud fusion detection method for black vehicles further comprises training black v