CN-122023530-A - Off-road gradient prediction method and device based on multi-modal data fusion
Abstract
The application provides an off-road gradient prediction method and device based on multi-modal data fusion. The method comprises: segmenting an RGB image with an image segmentation model to obtain a binary mask map of the road area; projecting point cloud data into the RGB image coordinate system to obtain a point cloud image and computing a spatial feature vector for each of its pixels; determining an effective area along the ego vehicle's driving direction from the binary mask map and the per-pixel spatial feature vectors; computing, for each row of the point cloud image and within the effective area, the average driving-direction coordinate and the average height; fitting a height-average curve that takes the driving-direction average as its independent variable; uniformly sampling the curve to obtain a plurality of sampling points; and computing gradient values along the driving direction from pairs of adjacent sampling points. The method improves the accuracy of gradient calculation and the stability of its error in complex off-road scenes.
Inventors
- LI ZHIWEI
- ZHOU YANG
- ZHANG YUQIAN
- TAN QIFAN
- WU ZIHAO
- ZHANG WEIZHENG
- SHEN TIANYU
- WANG YADONG
- WANG LI
Assignees
- 北京化工大学 (Beijing University of Chemical Technology)
Dates
- Publication Date
- 20260512
- Application Date
- 20260205
Claims (10)
- 1. An off-road gradient prediction method based on multi-modal data fusion, characterized by comprising the following steps: acquiring an RGB image and point cloud data collected by the ego vehicle at the current moment on off-road terrain; segmenting the RGB image with a pre-trained image segmentation model to obtain a binary mask map of the road area; projecting the point cloud data into the RGB image coordinate system to obtain a point cloud image, and computing a spatial feature vector for each pixel of the point cloud image; determining an effective area of the ego vehicle's driving direction based on the binary mask map of the road area and the spatial feature vector of each pixel of the point cloud image; determining, for each row of the point cloud image, the average driving-direction coordinate and the average height based on the effective area; fitting, from the per-row averages, a height-average curve that takes the driving-direction average as its independent variable; uniformly sampling the height-average curve to obtain a plurality of sampling points; and computing gradient values of the driving direction based on pairs of adjacent sampling points.
- 2. The method of claim 1, wherein the binary mask map of the road area is represented as: the mask value M(u, v) is 1 when pixel (u, v) of the binary mask map belongs to the drivable region, and M(u, v) is 0 otherwise.
- 3. The method of claim 2, wherein projecting the point cloud data into the RGB image coordinate system to obtain a point cloud image and computing a spatial feature vector for each pixel of the point cloud image comprises: homogenizing each point (x_i, y_i, z_i) of the point cloud data to obtain the homogeneous point (x_i, y_i, z_i, 1)^T, and stacking all homogeneous points column-wise to obtain a matrix P; forming the homogeneous extrinsic matrix T = [R t; 0 1] from the radar coordinate system to the camera coordinate system, where R is the rotation matrix and t the translation vector from the radar coordinate system to the camera coordinate system; applying the extrinsic transform with T to obtain the matrix P_cam = T·P in the camera coordinate system; applying the rectification matrix R_rect to obtain P_rect = R_rect·P_cam; projecting with the projection matrix K into the RGB image coordinate system to obtain Y = K·P_rect, where the i-th column of Y, (a_i, b_i, c_i)^T, gives the homogeneous components of the i-th projected point in the RGB coordinate system; computing the pixel coordinates of the i-th point as u_i = round(a_i / c_i), v_i = round(b_i / c_i); encoding the two-dimensional pixel coordinates (u_i, v_i) as the one-dimensional index d_i = v_i·W + u_i, where W is the width of the point cloud image; for each one-dimensional index, retaining only the point with the minimum depth, thereby obtaining the point cloud image; and setting the spatial feature vector of pixel (u, v) of the point cloud image to F(u, v) = (x, y, z), where x, y, and z are respectively the x, y, and z components of the retained projected point in the radar coordinate system.
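The projection step of claim 3 can be sketched in Python with NumPy. The function name, the loop-based z-buffer, and the calibration-matrix values are illustrative assumptions; only the chain extrinsic transform → rectification → projection → per-pixel minimum-depth selection comes from the claim.

```python
import numpy as np

def project_point_cloud(points, T, R_rect, K, width, height):
    """Project N lidar points (N, 3) into the image plane, keep the
    nearest point per pixel, and return an (H, W, 3) feature image
    holding each pixel's (x, y, z) in the lidar (radar) frame."""
    n = points.shape[0]
    # Homogenize and stack column-wise: X is 4 x N
    X = np.vstack([points.T, np.ones((1, n))])
    X_cam = T @ X            # extrinsic transform to camera frame
    X_rect = R_rect @ X_cam  # rectification
    Y = K @ X_rect           # 3 x N homogeneous image points (a, b, c)
    u = np.round(Y[0] / Y[2]).astype(int)
    v = np.round(Y[1] / Y[2]).astype(int)
    depth = Y[2]
    feat = np.full((height, width, 3), np.nan)
    best = np.full((height, width), np.inf)
    for i in range(n):
        if 0 <= u[i] < width and 0 <= v[i] < height and depth[i] > 0:
            # the one-dimensional index would be v*width + u;
            # keep only the minimum-depth point per pixel
            if depth[i] < best[v[i], u[i]]:
                best[v[i], u[i]] = depth[i]
                feat[v[i], u[i]] = points[i]  # (x, y, z) in lidar frame
    return feat
```

Pixels hit by no point remain NaN, which marks them as carrying no spatial feature vector.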
- 4. The method of claim 3, wherein determining the effective area of the ego vehicle's driving direction based on the binary mask map of the road area and the spatial feature vector of each pixel comprises: taking as the effective area Ω the pixels (u, v) for which M(u, v) = 1 and a spatial feature vector F(u, v) exists; extracting the height values of all pixels in Ω and determining the minimum height value h_min among them; and normalizing the height value z of each spatial feature vector in Ω to obtain the normalized height value h̃ = z − h_min.
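A minimal sketch of the effective-area selection and height normalization, assuming (since the original formula images were lost) that the effective area is the set of pixels both inside the road mask and carrying a projected point, and that normalization subtracts the minimum height; all names are illustrative:

```python
import numpy as np

def normalize_heights(feat, mask):
    """feat: (H, W, 3) feature image with z in channel 2 (NaN where no
    point projected); mask: (H, W) binary road mask. Returns the
    min-shifted heights over the effective area and the area itself."""
    valid = (mask == 1) & ~np.isnan(feat[..., 2])  # effective area
    h = feat[..., 2]
    h_min = h[valid].min()                          # minimum height in the area
    h_norm = np.where(valid, h - h_min, np.nan)     # shift so the lowest point is 0
    return h_norm, valid
```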
- 5. The method of claim 4, wherein determining the per-row average driving-direction coordinate and average height of the point cloud image based on the effective area comprises: obtaining, for each row r of the point cloud image, the set S_r of valid points of that row lying in the effective area; computing the average driving-direction coordinate of row r as x̄_r = (1/|S_r|) Σ_{(u,v)∈S_r} x(u, v), where |S_r| denotes the number of points in S_r; and computing the average height of row r as h̄_r = (1/|S_r|) Σ_{(u,v)∈S_r} h̃(u, v).
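The per-row averaging can be sketched as follows, assuming the driving-direction coordinate is the x channel of the feature image; function and variable names are illustrative:

```python
import numpy as np

def row_averages(x_img, h_norm, valid):
    """x_img: (H, W) driving-direction coordinates; h_norm: (H, W)
    normalized heights; valid: (H, W) boolean effective area.
    Returns per-row means over valid pixels, skipping empty rows."""
    x_means, h_means = [], []
    for row in range(valid.shape[0]):
        idx = valid[row]
        if idx.any():  # only rows that intersect the effective area
            x_means.append(x_img[row, idx].mean())
            h_means.append(h_norm[row, idx].mean())
    return np.array(x_means), np.array(h_means)
```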
- 6. The method of claim 1, wherein fitting, from the per-row averages, the height-average curve that takes the driving-direction average as its independent variable comprises: setting the height-average curve as a polynomial h(x̄) with coefficients to be determined, where x̄ is the average driving-direction coordinate and h the average height; and fitting the coefficients from the per-row pairs (x̄_r, h̄_r).
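Assuming the height-average curve is an ordinary polynomial (the claim names only "coefficients", so the degree used here is a placeholder), the fit, the uniform sampling, and the adjacent-sample slope of claims 1 and 6 can be sketched as:

```python
import numpy as np

def slope_from_curve(x_means, h_means, n_samples=20, degree=3):
    """Fit h(x) as a polynomial of the given degree, sample it uniformly
    over the observed x range, and return the slope between each pair of
    adjacent samples as dh/dx."""
    coeffs = np.polyfit(x_means, h_means, degree)       # least-squares fit
    xs = np.linspace(x_means.min(), x_means.max(), n_samples)
    hs = np.polyval(coeffs, xs)                         # curve at the samples
    return (hs[1:] - hs[:-1]) / (xs[1:] - xs[:-1])      # adjacent-sample slopes
```

On a planar slope the returned values are constant; on undulating terrain they trace the gradient profile along the driving direction.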
- 7. The method of claim 1, further comprising: computing the gradient angle corresponding to each gradient value to obtain a gradient angle sequence; constructing, with the k-th gradient angle of the sequence as starting point, a window W_k of a preset window length; determining a statistic of the window W_k as the k-th candidate statistic; sorting the candidate statistics from largest to smallest to obtain a candidate statistic sequence; and selecting a preset number of leading candidate statistics from the sequence and computing their average as the gradient angle estimate.
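The word naming the per-window statistic was lost in translation; taking it to be the median (a plausible robust choice, not confirmed by the source), the filtering step can be sketched as follows, with illustrative names:

```python
import numpy as np

def estimate_grade_angle(angles, window_len=5, top_n=3):
    """Slide a window over the gradient-angle sequence, take one
    statistic per window (here: the median, an assumption), sort the
    candidates from big to small, and average the leading top_n."""
    angles = np.asarray(angles, dtype=float)
    cands = [np.median(angles[k:k + window_len])
             for k in range(len(angles) - window_len + 1)]
    cands = sorted(cands, reverse=True)   # big to small
    return float(np.mean(cands[:top_n]))  # average of the leading candidates
```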
- 8. The method of claim 1, further comprising: acquiring the attitude pitch angle θ_pitch of the vehicle at the current moment from an on-board IMU (inertial measurement unit); and compensating the gradient angle estimate with θ_pitch to obtain a compensated gradient angle.
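The compensation formula itself was lost in extraction; assuming the pitch is simply subtracted from the estimate (the sign convention is a guess), a one-line sketch:

```python
def compensate_grade(grade_angle_deg, imu_pitch_deg):
    """Remove the ego vehicle's own pitch from the estimated gradient
    angle. Subtraction is one plausible reading of the claim; the
    original sign convention is not recoverable from the source."""
    return grade_angle_deg - imu_pitch_deg
```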
- 9. An off-road gradient prediction device based on multi-modal data fusion, comprising: an acquisition unit for acquiring the RGB image and point cloud data collected by the ego vehicle at the current moment on off-road terrain; an image segmentation unit for segmenting the RGB image with a pre-trained image segmentation model to obtain a binary mask map of the road area; a first computing unit for projecting the point cloud data into the RGB image coordinate system to obtain a point cloud image and computing the spatial feature vector of each pixel of the point cloud image; a determining unit for determining the effective area of the ego vehicle's driving direction based on the binary mask map of the road area and the spatial feature vector of each pixel of the point cloud image; a fitting unit for fitting, from the per-row average driving-direction coordinate and average height, a height-average curve that takes the driving-direction average as its independent variable; and a second computing unit for uniformly sampling the height-average curve to obtain a plurality of sampling points and computing gradient values of the driving direction based on pairs of adjacent sampling points.
- 10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method according to any one of claims 1-8 when executing the computer program.
Description
Off-road gradient prediction method and device based on multi-modal data fusion

Technical Field

The application relates to the technical field of artificial intelligence, and in particular to a method and device for predicting the gradient of off-road terrain based on multi-modal data fusion.

Background

Most existing road gradient prediction methods target regular road surfaces and compute the gradient with traditional sensors such as accelerometers; no multi-modal information fusion strategy yet fully meets the requirements of real-time, accurate perception and prediction in complex environments. In practice, off-road terrain presents complex surface types, abrupt slope changes, and dense natural obstacles, which place extremely high demands on the gradient perception task. Traditional single-sensor schemes fail easily under dust occlusion, sudden illumination changes, and similar conditions, and gradient estimation based on pre-stored maps cannot cope with the map-free, unstructured terrain of off-road environments. As a result, perception models show low prediction accuracy and limited recognition capability in complex dynamic scenes and fail to meet practical requirements. How to accurately perceive the road surface state and recognize gradient changes in real time on complex off-road terrain has therefore become a difficult problem for current off-road intelligent equipment.

Disclosure of Invention

In view of the above, the present application provides a method and device for predicting off-road gradient based on multi-modal data fusion, so as to solve the above technical problems.
In a first aspect, an embodiment of the present application provides a method for predicting an off-road gradient based on multi-modal data fusion, including: acquiring an RGB image and point cloud data collected by the ego vehicle at the current moment on off-road terrain; segmenting the RGB image with a pre-trained image segmentation model to obtain a binary mask map of the road area; projecting the point cloud data into the RGB image coordinate system to obtain a point cloud image, and computing a spatial feature vector for each pixel of the point cloud image; determining an effective area of the ego vehicle's driving direction based on the binary mask map of the road area and the spatial feature vector of each pixel of the point cloud image; determining, for each row of the point cloud image, the average driving-direction coordinate and the average height based on the effective area; fitting, from the per-row averages, a height-average curve that takes the driving-direction average as its independent variable; uniformly sampling the height-average curve to obtain a plurality of sampling points; and computing gradient values of the driving direction based on pairs of adjacent sampling points.
In one possible implementation, the binary mask map of the road area is represented as: the mask value M(u, v) is 1 when pixel (u, v) of the binary mask map belongs to the drivable region, and M(u, v) is 0 otherwise.
In one possible implementation, projecting the point cloud data into the RGB image coordinate system to obtain a point cloud image and computing the spatial feature vector of each pixel of the point cloud image includes: homogenizing each point (x_i, y_i, z_i) of the point cloud data to obtain the homogeneous point (x_i, y_i, z_i, 1)^T, and stacking all homogeneous points column-wise to obtain a matrix P; forming the homogeneous extrinsic matrix T = [R t; 0 1] from the radar coordinate system to the camera coordinate system, where R is the rotation matrix and t the translation vector from the radar coordinate system to the camera coordinate system; applying the extrinsic transform with T to obtain the matrix P_cam = T·P in the camera coordinate system; applying the rectification matrix R_rect to obtain P_rect = R_rect·P_cam; projecting with the projection matrix K into the RGB image coordinate system to obtain Y = K·P_rect, where the i-th column of Y, (a_i, b_i, c_i)^T, gives the homogeneous components of the i-th projected point in the RGB coordinate system; computing the pixel coordinates of the i-th point as u_i = round(a_i / c_i), v_i = round(b_i / c_i); encoding the two-dimensional pixel coordinates (u_i, v_i) as the one-dimensional index d_i = v_i·W + u_i, where W is the width of the point cloud image; for each one-dimensional index, retaining only the point with the minimum depth, thereby obtaining the point cloud image; and setting the spatial feature vector of pixel (u, v) of the point cloud image to F(u, v) = (x, y, z), where x, y, and z are respectively the x, y, and z components of the retained projected point in the radar coordinate system.