CN-121982112-A - Automatic calibration method for automatic driving fusion perception

Abstract

The invention provides an automatic calibration method for automatic driving fusion perception, comprising an intrinsic parameter (internal reference) calibration step, a data acquisition step, a data preprocessing step, a feature extraction step, a data fusion step, a calibration algorithm step, and a calibration result verification step. The data fusion step adopts a feature fusion method: features are extracted from each sensor's data to obtain feature vectors, the feature vectors are concatenated into a joint feature vector, and a deep learning model processes and fuses the joint feature vector to obtain a fused feature representation. Because the data fusion step adopts feature fusion, the fused perception data lends itself to automatic calibration.

Inventors

  • QIU JUN

Assignees

  • 盐田港东区国际集装箱码头有限公司 (Yantian Port East Area International Container Terminal Co., Ltd.)

Dates

Publication Date
2026-05-05
Application Date
2025-12-24
Priority Date
2025-07-09

Claims (10)

  1. An automatic calibration method for automatic driving fusion perception, characterized by comprising the following steps: S1, an intrinsic parameter (internal reference) calibration step; S2, a data acquisition step; S3, a data preprocessing step; S4, a feature extraction step; S5, a data fusion step, which adopts a feature fusion method comprising: S5-1, extracting features from each sensor's data to obtain feature vectors; S5-2, concatenating the feature vectors into a joint feature vector; and S5-3, processing and fusing the joint feature vector with a deep learning model to obtain a fused feature representation; S6, a calibration algorithm step; and S7, a calibration result verification step.
  2. The automatic calibration method for automatic driving fusion perception according to claim 1, wherein the S6 calibration algorithm step comprises the following steps: S6-1, point cloud denoising, which dynamically removes outliers of insufficient neighborhood density based on the mean and standard deviation of neighborhood distances; S6-2, image enhancement; and S6-3, a nonlinear iteration step.
  3. The automatic calibration method for automatic driving fusion perception according to claim 2, wherein in the S6-1 point cloud denoising step, a search radius and a minimum point threshold for the neighborhood are set, a threshold range is set according to the point cloud coordinate attributes, and the target region data is extracted directly; if the mean neighborhood distance of a point exceeds the global mean by 1 to 3 standard deviations, the point is judged to be an outlier.
  4. The automatic calibration method for automatic driving fusion perception according to claim 2, wherein the S6-2 image enhancement step comprises the following step: S6-2-1, image defogging, based on the atmospheric scattering model, whose foggy-day imaging process is I(x) = J(x)t(x) + A(1 − t(x)), where I(x) is the observed hazy image, J(x) is the clear fog-free image to be recovered, t(x) is the transmittance, and A is the atmospheric light value; the defogging algorithm recovers the original fog-free image J(x) by estimating the transmittance t(x) and the atmospheric light value A. The image enhancement methods in this step include: the Retinex algorithm, which separates the illumination and reflectance components of the image, enhances local contrast, and indirectly achieves a defogging effect; the dark channel prior, which directly estimates the transmittance and atmospheric light using the statistical regularity that in natural scenes at least one color channel has low intensity; and physical model optimization, which constructs an energy function from constraint conditions and minimizes the error to recover a clear image.
  5. The automatic calibration method for automatic driving fusion perception according to claim 4, wherein the S6-2 image enhancement step further comprises the following step: S6-2-2, image rain removal, which builds the model I(x) = B(x) + R(x), where I(x) is the rain-contaminated image, B(x) is the background image to be restored, and R(x) is the rain streak component. The image enhancement methods in this step include: a frequency-domain method, which designs a band-stop filter to suppress rain-line signals using the high-frequency characteristics of rain streaks in the frequency domain; a sparse representation method, based on the sparsity difference between rain streaks and the background under a transform; and a motion prior method, which targets the trajectory characteristics of dynamic raindrops and removes rain-line motion blur in combination with an optical flow method.
  6. The automatic calibration method for automatic driving fusion perception according to claim 2, wherein the S6-3 nonlinear iteration step comprises the following steps: S6-3-1, computing the Jacobian matrix of the error function; S6-3-2, solving, in which the parameters are updated through a linear system of equations formed from the Jacobian matrix of the error function and the parameter update quantity; and S6-3-3, evaluating the new error value and adjusting the iteration step according to the decrease in the error.
  7. The automatic calibration method for automatic driving fusion perception according to any one of claims 1 to 6, wherein in the S1 intrinsic parameter calibration step, a 1 m × 1 m calibration plate with a white light surface is used on a calibration site 20 m long and 10 m wide, and each sensor undergoes independent intrinsic parameter calibration to obtain accurate reference parameters.
  8. The automatic calibration method for automatic driving fusion perception according to any one of claims 1 to 6, wherein time synchronization and spatial synchronization between the sensors must be ensured in the S2 data acquisition step; time synchronization is achieved through a unified timestamp, and spatial synchronization converts the coordinate systems of the different sensors into a unified coordinate system.
  9. The automatic calibration method for automatic driving fusion perception according to any one of claims 1 to 6, wherein the S3 data preprocessing step comprises the following steps: S3-1, data denoising, which removes noise and abnormal values from the data; S3-2, data filtering, which smooths the data; and S3-3, data augmentation, which enhances the adaptive capability of the model by increasing the quantity and diversity of the data.
  10. The automatic calibration method for automatic driving fusion perception according to any one of claims 1 to 6, wherein the S4 feature extraction step includes: camera data extraction, which uses image processing and computer vision techniques to extract features such as edges, corner points, and textures; lidar data extraction, which uses point cloud data processing and three-dimensional vision techniques to extract features such as point cloud density, normals, and curvature; and millimeter-wave radar data extraction, which uses signal processing and target detection techniques to extract features such as speed, distance, and azimuth.
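The claims above describe the method without reference code. As an illustration only, the feature fusion of steps S5-1 to S5-3 can be sketched as follows: per-sensor feature vectors are concatenated into a joint vector, then passed through a small network. The feature sizes and the one-hidden-layer MLP standing in for the unspecified deep learning model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sensor feature vectors (sizes are illustrative).
camera_feat = rng.standard_normal(8)   # e.g. edge/corner/texture statistics (claim 10)
lidar_feat = rng.standard_normal(6)    # e.g. density/normal/curvature statistics
radar_feat = rng.standard_normal(4)    # e.g. speed/distance/azimuth statistics

# S5-2: concatenate into a joint feature vector.
joint = np.concatenate([camera_feat, lidar_feat, radar_feat])  # shape (18,)

# S5-3: a tiny one-hidden-layer MLP stands in for the unspecified deep model.
W1 = rng.standard_normal((16, joint.size)) * 0.1
b1 = np.zeros(16)
W2 = rng.standard_normal((8, 16)) * 0.1
b2 = np.zeros(8)

hidden = np.tanh(W1 @ joint + b1)
fused = W2 @ hidden + b2  # fused feature representation, shape (8,)
print(fused.shape)  # (8,)
```

In practice the weights would be learned, and the fused representation would feed the S6 calibration algorithm; this sketch only shows the data flow.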
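The point cloud denoising rule of claims 2 and 3 (flag a point whose mean neighborhood distance exceeds the global mean by 1 to 3 standard deviations) can be sketched as a brute-force statistical outlier removal. The neighborhood size `k` and the multiplier `n_std` are illustrative choices, with `n_std` inside the claimed 1-3 range.

```python
import numpy as np

def remove_statistical_outliers(points, k=8, n_std=2.0):
    """Drop points whose mean k-NN distance exceeds mean + n_std * std."""
    # Brute-force pairwise distances; fine for small clouds.
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    # Mean distance to the k nearest neighbours (column 0 is the point itself).
    knn = np.sort(dists, axis=1)[:, 1:k + 1]
    mean_knn = knn.mean(axis=1)
    # Global statistic over all points, as in claim 3.
    thresh = mean_knn.mean() + n_std * mean_knn.std()
    keep = mean_knn <= thresh
    return points[keep], keep

rng = np.random.default_rng(1)
cloud = rng.standard_normal((200, 3))
cloud = np.vstack([cloud, [[50.0, 50.0, 50.0]]])  # one obvious outlier
clean, mask = remove_statistical_outliers(cloud)
print(mask[-1])  # False: the far point is flagged as an outlier
```

A production version would use a spatial index (e.g. a k-d tree) instead of the O(n²) distance matrix, and would add the search radius and minimum point threshold from claim 3.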
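Claim 4's inversion of I(x) = J(x)t(x) + A(1 − t(x)) can be sketched with a dark-channel estimate of t(x) and A, as J = (I − A) / max(t, t0) + A. The patch size, `omega`, and `t0` are conventional values from the dark channel prior literature, not values specified in this patent.

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel minimum over colour channels, then a local minimum filter."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(I, omega=0.95, t0=0.1):
    """Invert I = J*t + A*(1 - t) using a dark-channel estimate of t and A."""
    dc = dark_channel(I)
    # Atmospheric light A: mean colour of the brightest dark-channel pixels.
    n = max(1, int(dc.size * 0.001))
    idx = np.argsort(dc.ravel())[-n:]
    A = I.reshape(-1, 3)[idx].mean(axis=0)
    # Transmittance estimate; t0 avoids division blow-up in dense fog.
    t = np.clip(1.0 - omega * dark_channel(I / A), t0, 1.0)[..., None]
    return np.clip((I - A) / t + A, 0.0, 1.0)

hazy = np.clip(np.random.default_rng(2).random((32, 32, 3)) * 0.5 + 0.4, 0, 1)
clear = dehaze(hazy)
print(clear.shape)  # (32, 32, 3)
```

The Retinex and energy-minimization alternatives named in claim 4 follow the same outer structure but replace the transmittance estimate.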
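Claim 5's rain model I(x) = B(x) + R(x) with a frequency-domain filter can be illustrated with a crude isotropic low-pass stand-in for the claimed band-stop filter; a real implementation would shape the filter to the orientation and frequency band of the rain streaks. The cutoff value is arbitrary.

```python
import numpy as np

def split_rain_streaks(I, cutoff=0.25):
    """Decompose I = B + R by keeping low spatial frequencies as the
    background B; the high-frequency residual R approximates rain lines."""
    F = np.fft.fftshift(np.fft.fft2(I))
    h, w = I.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    mask = radius <= cutoff  # pass band: low frequencies only
    B = np.fft.ifft2(np.fft.ifftshift(F * mask)).real
    R = I - B                # residual rain-line component
    return B, R

rainy = np.random.default_rng(3).random((64, 64))
B, R = split_rain_streaks(rainy)
print(np.allclose(B + R, rainy))  # True: the decomposition is exact
```

The sparse representation and motion prior methods named in the claim target the same B/R split with different priors on R.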
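Claim 6's nonlinear iteration (S6-3-1 Jacobian, S6-3-2 linear update system, S6-3-3 error-driven step adjustment) matches the structure of a Levenberg-Marquardt loop. The sketch below applies it to a hypothetical exponential curve fit; the actual calibration error function is not specified in this document.

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, iters=50, lam=1e-3):
    """S6-3 sketch: each step computes J, solves (J^T J + lam I) dx = -J^T r,
    then accepts or rejects dx based on whether the error decreased."""
    x = x0.astype(float)
    err = 0.5 * np.sum(residual(x) ** 2)
    for _ in range(iters):
        r = residual(x)
        J = jac(x)                                   # S6-3-1: Jacobian
        A = J.T @ J + lam * np.eye(x.size)
        dx = np.linalg.solve(A, -J.T @ r)            # S6-3-2: linear system
        new_err = 0.5 * np.sum(residual(x + dx) ** 2)
        if new_err < err:                            # S6-3-3: error decreased
            x, err, lam = x + dx, new_err, lam * 0.5 # accept, trust the model more
        else:
            lam *= 2.0                               # reject, damp the step
    return x

# Hypothetical toy problem: fit y = a * exp(b * t) to noiseless samples (a=2, b=-1).
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jacf = lambda p: np.stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)], axis=1)
p = levenberg_marquardt(res, jacf, np.array([1.0, 0.0]))
print(np.round(p, 3))  # approximately [2, -1]
```

In the calibration context, `x` would hold the extrinsic/intrinsic parameters being refined and `residual` the reprojection or alignment error between fused sensor features.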

Description

Automatic calibration method for automatic driving fusion perception

Technical Field

The invention relates to an automatic calibration method for automatic driving fusion perception.

Background

In an autonomous driving system, the accuracy of sensor calibration directly determines the perception capability and decision quality of the vehicle. Traditional calibration methods rely mainly on manual operations using dedicated sites, purpose-built markers, measuring instruments, and tools. These methods are not only time-consuming and laborious but also susceptible to environmental disturbances and to the experience of the personnel, leading to inaccurate calibration results. With the continued development of computer vision, lidar, millimeter-wave radar, and related technologies, fusion perception across multiple sensors has become practical, and an automatic calibration method based on fusion perception can achieve high-precision automatic calibration by fusing the data of multiple sensors. Such a method can significantly improve calibration efficiency, reduce human error, and provide a strong guarantee for automatic driving.

The invention with grant publication number CN 114755662B discloses a calibration method and device for road-vehicle fusion perception lidar and GPS (Global Positioning System), which addresses the problem that, in the prior art, an autonomous vehicle cannot obtain accurate geographic environment data and target position information. Based on the mapping of vehicle-mounted lidar point cloud coordinates and of roadside lidar point cloud coordinates into the GPS global coordinate system, the method maps vehicle-end sensing data and roadside sensing data into the GPS global coordinate system simultaneously and thereby calibrates the road-vehicle fusion sensing data. This is complicated to carry out.
Disclosure of Invention

The invention aims to provide a concise automatic calibration method for automatic driving fusion perception. To achieve this aim, the invention adopts the following technical scheme. The automatic calibration method for automatic driving fusion perception comprises the following steps: S1, an intrinsic parameter (internal reference) calibration step; S2, a data acquisition step; S3, a data preprocessing step; S4, a feature extraction step; S5, a data fusion step, which adopts a feature fusion method comprising: S5-1, extracting features from each sensor's data to obtain feature vectors; S5-2, concatenating the feature vectors into a joint feature vector; and S5-3, processing and fusing the joint feature vector with a deep learning model to obtain a fused feature representation; S6, a calibration algorithm step; and S7, a calibration result verification step.

In this method, the S6 calibration algorithm step comprises the following steps: S6-1, point cloud denoising, which dynamically removes outliers of insufficient neighborhood density based on the mean and standard deviation of neighborhood distances; S6-2, image enhancement; and S6-3, a nonlinear iteration step.

In this method, in the S6-1 point cloud denoising step, a search radius and a minimum point threshold for the neighborhood are set, a threshold range is set according to the point cloud coordinate attributes, and the target region data is extracted directly; if the mean neighborhood distance of a point exceeds the global mean by 1 to 3 standard deviations, the point is judged to be an outlier.
Further, in this method, the S6-2 image enhancement step comprises: S6-2-1, image defogging, based on the atmospheric scattering model, whose foggy-day imaging process is I(x) = J(x)t(x) + A(1 − t(x)), where I(x) is the observed hazy image, J(x) is the clear fog-free image to be recovered, t(x) is the transmittance, and A is the atmospheric light value; the defogging algorithm recovers the original fog-free image J(x) by estimating the transmittance t(x) and the atmospheric light value A. The image enhancement methods in this step include: the Retinex algorithm, which separates the illumination and reflectance components of the image, enhances local contrast, and indirectly achieves a defogging effect; the dark channel prior, which directly estimates the transmittance and atmospheric light using the statistical regularity that in natural scenes at least one color channel has low intensity; and physical model optimization, which constructs an energy function from constraint conditions and minimizes the error to recover a clear image. Further, in the