CN-121982931-A - Intelligent rear cross-traffic early warning method, device, equipment and storage medium
Abstract
The application provides an intelligent rear cross-traffic early warning method, device, equipment and storage medium. Vehicle-end multi-modal sensing data, roadside blind-zone sensing data and cloud traffic information are acquired. Data-level pre-fusion (early fusion) is performed on the vehicle-end multi-modal sensing data to generate three-dimensional targets with semantic tags and their motion tracks. Based on the motion tracks and scene context information, a spatio-temporal graph neural network model predicts a plurality of future probabilistic trajectories of each three-dimensional target and the corresponding behaviour intentions. An initial risk matrix is calculated from the host vehicle's preset travel path and the probabilistic trajectories; a comprehensive collision risk level is then determined from the initial risk matrix, combining the intention-based correction of the collision probability with the roadside blind-zone sensing data and the cloud traffic information; and the corresponding active safety response operation is executed according to that level. The method comprehensively improves the reliability, accuracy and timeliness of rear cross-traffic early warning.
Inventors
- Chen Yuqiao
- Wang Qiuxiang
- Du Xiuda
Assignees
- China FAW Co., Ltd. (中国第一汽车股份有限公司)
Dates
- Publication Date: 2026-05-05
- Application Date: 2026-02-02
Claims (10)
- 1. An intelligent rear cross-traffic early warning method, characterized by comprising the following steps: acquiring vehicle-end multi-modal sensing data, roadside blind-zone sensing data and cloud traffic information; performing data-level pre-fusion on the vehicle-end multi-modal sensing data to generate a three-dimensional target with a semantic tag and its motion track; based on the motion track and scene context information, predicting a plurality of future probabilistic trajectories of the three-dimensional target and their corresponding behaviour intentions by using a spatio-temporal graph neural network model; calculating an initial risk matrix based on a preset travel path of the host vehicle and the plurality of probabilistic trajectories, wherein the initial risk matrix represents the collision risk corresponding to each probabilistic trajectory; determining a comprehensive collision risk level based on the initial risk matrix, in combination with the correction of the collision probability by the behaviour intentions, the roadside blind-zone sensing data and the cloud traffic information; and executing a corresponding active safety response operation based on the comprehensive collision risk level.
- 2. The method of claim 1, wherein the acquiring of the vehicle-end multi-modal sensing data, the roadside blind-zone sensing data and the cloud traffic information comprises: acquiring the vehicle-end multi-modal sensing data through solid-state LiDARs, wide-angle cameras and short-range millimetre-wave radars arranged at the rear and on both sides of the vehicle; receiving the blind-zone sensing data collected by a roadside unit through V2I communication; and acquiring, from a cloud traffic platform through the vehicle-mounted T-Box, cloud traffic information comprising real-time traffic flow, historical accident data and a high-precision map.
- 3. The method of claim 1, wherein the performing of data-level pre-fusion on the vehicle-end multi-modal sensing data to generate the three-dimensional target with the semantic tag and its motion track comprises: projecting three-dimensional point-cloud data collected by the solid-state LiDAR into the image coordinate system of the wide-angle camera to achieve spatio-temporal alignment between the point-cloud data and the image pixels; processing the spatio-temporally aligned fusion data with a deep learning network based on the PointPainting architecture to achieve joint detection and semantic segmentation of targets; and performing multi-frame tracking on the detected targets to generate the three-dimensional target with the semantic tag and its motion track.
- 4. The method of claim 1, wherein the predicting of a plurality of future probabilistic trajectories of the three-dimensional target and their corresponding behaviour intentions by using a spatio-temporal graph neural network model, based on the motion track and scene context information, comprises: constructing a spatio-temporal graph model from the motion track and scene context information, wherein the traffic participants are nodes and the interaction relationships among the participants are edges; taking the historical trajectory data, motion states and scene context information in the motion track as input features; and processing the input features with the spatio-temporal graph neural network model to output, for each target, the plurality of probabilistic trajectories and the corresponding behaviour-intention probability distribution over a preset future period.
- 5. The method of claim 1, wherein the calculating of an initial risk matrix based on the preset travel path of the host vehicle and the plurality of probabilistic trajectories comprises: for each of the plurality of probabilistic trajectories, calculating the time-to-collision and the collision probability between the trajectory and the preset travel path of the host vehicle; and constructing, from the correspondence between the time-to-collision and the collision probability, the initial risk matrix containing the risk scores of all trajectories.
- 6. The method of claim 1, wherein the determining of the comprehensive collision risk level based on the initial risk matrix, in combination with the correction of the collision probability by the behaviour intentions, the roadside blind-zone sensing data and the cloud traffic information, comprises: weighting and correcting the collision probabilities in the initial risk matrix according to the behaviour-intention probability distribution; performing weighted fusion of the global decision advice provided by the roadside blind-zone sensing data, the regional risk coefficient provided by the cloud traffic information and the corrected risk matrix; and determining the comprehensive collision risk level from the weighted fusion result through a preset risk-level mapping rule.
- 7. The method of claim 1, wherein the executing of a corresponding active safety response operation based on the comprehensive collision risk level comprises executing a corresponding active safety response operation according to the interval in which the comprehensive collision risk level falls: when the comprehensive collision risk level is in a first interval, giving a targeted visual prompt through an AR-HUD; when it is in a second interval, activating a three-level audio, visual and haptic alarm; and when it is in a third interval, triggering autonomous emergency braking and linking roadside warning equipment.
- 8. An intelligent rear cross-traffic early warning device, characterized in that the device comprises: a data acquisition module for acquiring vehicle-end multi-modal sensing data, roadside blind-zone sensing data and cloud traffic information; a data-level pre-fusion module for performing data-level pre-fusion on the vehicle-end multi-modal sensing data to generate a three-dimensional target with a semantic tag and its motion track; a trajectory-intention prediction module for predicting a plurality of future probabilistic trajectories of the three-dimensional target and their corresponding behaviour intentions by using a spatio-temporal graph neural network model, based on the motion track and scene context information; a risk matrix calculation module for calculating an initial risk matrix based on a preset travel path of the host vehicle and the plurality of probabilistic trajectories, wherein the initial risk matrix represents the collision risk corresponding to each probabilistic trajectory; a risk level determination module for determining a comprehensive collision risk level based on the initial risk matrix, in combination with the correction of the collision probability by the behaviour intentions, the roadside blind-zone sensing data and the cloud traffic information; and a safety response execution module for executing a corresponding active safety response operation based on the comprehensive collision risk level.
- 9. A computer device comprising a processor, a memory and a bus, the memory storing machine-readable instructions executable by the processor, the processor and the memory communicating via the bus when the computer device is in operation, the machine-readable instructions, when executed by the processor, performing the steps of the intelligent rear cross-traffic early warning method according to any one of claims 1 to 7.
- 10. A computer-readable storage medium, characterized in that a computer program is stored thereon which, when executed by a processor, performs the steps of the intelligent rear cross-traffic early warning method according to any one of claims 1 to 7.
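The point-cloud-to-image projection described in claim 3 can be sketched concretely. The following is a minimal, hypothetical illustration assuming a pin-hole camera model with a known intrinsic matrix `K` and a LiDAR-to-camera extrinsic transform `T_cam_from_lidar`; the patent itself does not fix these calibration details or function names.

```python
import numpy as np

def project_points_to_image(points_lidar, T_cam_from_lidar, K):
    """Project 3-D LiDAR points into the camera image plane.

    points_lidar: (N, 3) array of XYZ points in the LiDAR frame.
    T_cam_from_lidar: (4, 4) homogeneous extrinsic transform.
    K: (3, 3) pin-hole camera intrinsic matrix.
    Returns (uv, in_front): pixel coordinates of the points that lie
    in front of the camera, and the boolean mask selecting them.
    """
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords (N, 4)
    cam = (T_cam_from_lidar @ homo.T).T[:, :3]          # points in the camera frame
    in_front = cam[:, 2] > 0                            # drop points behind the image plane
    uvw = (K @ cam[in_front].T).T                       # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]                       # normalise by depth
    return uv, in_front
```

Each projected pixel can then be painted with the camera's semantic class scores, which is the basic idea of the PointPainting-style early fusion the claim refers to.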
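Claim 5 leaves the exact time-to-collision and collision-probability computation open. One simple discretised reading, offered only as an illustrative sketch (the collision radius, sampling step and risk formula are assumptions, not the claimed method), is to sample the ego path and each predicted trajectory on a common time grid and take the first near-miss as the time-to-collision:

```python
import numpy as np

def ttc_and_risk(ego_path, track, track_prob, dt=0.1, collision_radius=2.0):
    """Earliest time-to-collision of one predicted trajectory with the ego path.

    ego_path, track: (T, 2) position arrays sampled every `dt` seconds.
    track_prob: probability mass of this trajectory hypothesis.
    Returns (ttc, risk); ttc is np.inf when no near-collision occurs.
    """
    dists = np.linalg.norm(ego_path - track, axis=1)
    hits = np.nonzero(dists < collision_radius)[0]
    if hits.size == 0:
        return np.inf, 0.0
    ttc = hits[0] * dt
    # Assumed scoring: risk grows as TTC shrinks, scaled by trajectory probability.
    risk = track_prob / (1.0 + ttc)
    return ttc, risk

def initial_risk_matrix(ego_path, tracks, probs):
    """Stack per-trajectory (ttc, risk) rows into the initial risk matrix."""
    return np.array([ttc_and_risk(ego_path, tr, p) for tr, p in zip(tracks, probs)])
```

Each row of the resulting matrix then corresponds to one probabilistic trajectory, matching the claim's statement that the matrix "contains the risk scores of all trajectories".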
Description
Intelligent rear cross-traffic early warning method, device, equipment and storage medium

Technical Field

The application relates to the field of traffic management, and in particular to an intelligent rear cross-traffic early warning method, device, equipment and storage medium.

Background

Rear cross traffic alert (RCTA) is one of the core safety functions in the intelligent driving field. It is mainly applied in scenarios such as reversing, exiting a parking space, merging and turning at intersections: by sensing the traffic participants behind and to the rear sides of the host vehicle, it pre-judges collision risks and reminds the driver, so as to reduce the rate of collision accidents. The currently mainstream RCTA technology adopts a combined scheme of millimetre-wave radar and camera, in which the millimetre-wave radar is responsible for accurate ranging and speed measurement while the camera provides target texture and semantic information. The two sensors cooperate through 'post-fusion' or 'feature-level fusion', that is, each sensor identifies targets independently and the recognition results are then compared and merged. The obvious defect of this prior art is that the fusion level is shallow, so the complementary advantages of the millimetre-wave radar data and the camera data cannot be fully exploited. When the performance of one sensor degrades or fails due to illumination, weather or other environmental factors, the overall sensing reliability of the system degrades rapidly, false alarms or missed alarms easily occur, and the safety warning requirements of complex traffic scenes are difficult to meet.
Disclosure of Invention

In view of the above, the present application aims to provide an intelligent rear cross-traffic early warning method, device, equipment and storage medium that can comprehensively improve the reliability, accuracy and timeliness of rear cross-traffic early warning.

In a first aspect, an embodiment of the present application provides an intelligent rear cross-traffic early warning method, the method comprising: acquiring vehicle-end multi-modal sensing data, roadside blind-zone sensing data and cloud traffic information; performing data-level pre-fusion on the vehicle-end multi-modal sensing data to generate a three-dimensional target with a semantic tag and its motion track; based on the motion track and scene context information, predicting a plurality of future probabilistic trajectories of the three-dimensional target and their corresponding behaviour intentions by using a spatio-temporal graph neural network model; calculating an initial risk matrix based on a preset travel path of the host vehicle and the plurality of probabilistic trajectories, wherein the initial risk matrix represents the collision risk corresponding to each probabilistic trajectory; determining a comprehensive collision risk level based on the initial risk matrix, in combination with the correction of the collision probability by the behaviour intentions, the roadside blind-zone sensing data and the cloud traffic information; and executing a corresponding active safety response operation based on the comprehensive collision risk level.
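In the prediction step above, the spatio-temporal graph treats traffic participants as nodes and their interactions as edges. The patent does not fix the interaction criterion; one common, simple choice (assumed here purely for illustration) is a distance threshold over participant positions at each time step:

```python
import numpy as np

def interaction_edges(positions, radius=10.0):
    """Connect every pair of traffic participants closer than `radius` metres.

    positions: (N, 2) array of participant positions at one time step.
    Returns a list of undirected edges (i, j) with i < j.
    """
    positions = np.asarray(positions, dtype=float)
    n = positions.shape[0]
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < radius:
                edges.append((i, j))
    return edges
```

Stacking one such edge set per frame, together with per-node history features (past positions and motion states), yields the graph sequence that a spatio-temporal graph neural network consumes.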
Optionally, the acquiring of the vehicle-end multi-modal sensing data, the roadside blind-zone sensing data and the cloud traffic information includes: acquiring the vehicle-end multi-modal sensing data through solid-state LiDARs, wide-angle cameras and short-range millimetre-wave radars arranged at the rear and on both sides of the vehicle; receiving the blind-zone sensing data collected by a roadside unit through V2I communication; and acquiring, from a cloud traffic platform through the vehicle-mounted T-Box, cloud traffic information comprising real-time traffic flow, historical accident data and a high-precision map.

Optionally, the performing of data-level pre-fusion on the vehicle-end multi-modal sensing data to generate a three-dimensional target with a semantic tag and its motion track includes: projecting three-dimensional point-cloud data collected by the solid-state LiDAR into the image coordinate system of the wide-angle camera to achieve spatio-temporal alignment between the point-cloud data and the image pixels; processing the spatio-temporally aligned fusion data with a deep learning network based on the PointPainting architecture to achieve joint detection and semantic segmentation of targets; and performing multi-frame tracking on the detected targets to generate the three-dimensional target with the semantic tag and its motion track.

Optionally, based on the motion track and the scene context information, predicting a plurality of future probabilistic trajectories of the three-dimensional target and their corresponding behaviour intentions by using a spatio-temporal