CN-122009247-A - Night scene-oriented automatic driving perception planning collaboration method and system
Abstract
The invention belongs to the technical field of automatic driving and provides an automatic driving perception planning collaboration method and system for night scenes, which solve the problem that existing path planning methods reduce system-level safety and smoothness. The method comprises: using a trained automatic driving multi-task perception model to perform multi-task perception on input night near-infrared images and laser point cloud data; generating a plurality of candidate trajectories based on the current motion state of the vehicle and the navigation target path, and sampling the candidate trajectories to form a candidate trajectory set; selecting the optimal trajectory through evaluation and issuing it as a vehicle control instruction; and realizing closed-loop decision-making based on uncertainty evaluation, thereby enhancing the overall safety, smoothness and reliability of an automatic driving vehicle in complex night scenes.
Inventors
- ZHANG QIANG
- LIU CHENGHAO
- ZHANG DAIKAIRUI
Assignees
- Shandong University (山东大学)
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-04-14
Claims (10)
- 1. An automatic driving perception planning collaboration method oriented to night scenes, characterized by comprising the following steps: acquiring a night near-infrared image and laser point cloud data, performing multi-task perception by using a trained automatic driving multi-task perception model, and uniformly converting the multi-task perception result into a vehicle coordinate system; generating a plurality of candidate trajectories based on the current motion state of the vehicle and the navigation target path, and sampling the candidate trajectories to form a candidate trajectory set; carrying out safety evaluation, smoothness evaluation and efficiency evaluation on each candidate trajectory in the candidate trajectory set, carrying out weighted summation of the normalized evaluation cost values to obtain a comprehensive cost value, and selecting the trajectory with the minimum comprehensive cost value as the optimal trajectory; and issuing the selected optimal trajectory as a vehicle control instruction to a vehicle control executor, so as to form a complete perception-planning-control automatic driving decision link.
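The evaluation-and-selection step of claim 1 can be sketched as follows. This is a minimal illustration: the cost weights and the per-candidate cost values are hypothetical, since the patent does not disclose concrete numbers.

```python
import numpy as np

# Hypothetical weights; the patent does not disclose specific values.
WEIGHTS = {"safety": 0.5, "smoothness": 0.3, "efficiency": 0.2}

def select_optimal_trajectory(safety, smoothness, efficiency, weights=WEIGHTS):
    """Min-max normalize each per-candidate cost array, weight-sum them into a
    comprehensive cost, and return the index of the minimum-cost candidate."""
    def normalize(c):
        c = np.asarray(c, dtype=float)
        span = c.max() - c.min()
        return (c - c.min()) / span if span > 0 else np.zeros_like(c)

    total = (weights["safety"] * normalize(safety)
             + weights["smoothness"] * normalize(smoothness)
             + weights["efficiency"] * normalize(efficiency))
    return int(np.argmin(total)), total

# Three candidate trajectories with illustrative raw costs.
idx, total = select_optimal_trajectory([5.0, 1.0, 3.0], [0.2, 0.8, 0.1], [2.0, 2.0, 2.0])
```

Normalizing before the weighted sum keeps any one cost term (e.g. a large raw safety cost) from dominating purely because of its scale.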
- 2. The night scene oriented automatic driving perception planning collaboration method of claim 1, wherein the multi-task perception results comprise a target detection result, a drivable region segmentation mask, a lane line detection result and a material classification result, wherein the target detection result is converted into obstacle information, the drivable region segmentation mask is converted into a feasible region boundary parameter, the lane line detection result is converted into a parameterized lane center line, and the material classification result is converted into a road friction coefficient.
- 3. The night scene oriented automatic driving perception planning collaboration method of claim 1, wherein the candidate trajectory set is characterized as: T = {τ_i | i = 1, 2, …, N}; τ_i = (P_i, V_i, K_i, A_i); wherein τ_i is the i-th candidate trajectory, N is the number of candidate trajectories, P_i is the position sequence of the i-th candidate trajectory, V_i is the speed sequence, K_i is the curvature sequence, and A_i is the acceleration sequence; the position sequence is parameterized by s, the arc length along the reference line, and l, the planned lateral offset.
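The trajectory sampling of claim 3 can be sketched as follows, assuming candidates are lateral-offset profiles in the (s, l) frame along the reference line; the cubic blending curve and the sampling counts are illustrative choices, not disclosed in the patent.

```python
import numpy as np

def sample_candidate_set(s_max, l_max, n_lateral=7, n_points=20):
    """Sample candidate trajectories as lateral-offset profiles along the
    reference line: arc length s in [0, s_max], terminal lateral offsets in
    [-l_max, l_max]. Each candidate is an (n_points, 2) array of (s, l)."""
    s = np.linspace(0.0, s_max, n_points)
    # Cubic ease from 0 to 1 with zero slope at both ends, so every candidate
    # departs from and rejoins the reference direction smoothly.
    u = s / s_max
    blend = 3 * u**2 - 2 * u**3
    candidates = []
    for l_end in np.linspace(-l_max, l_max, n_lateral):
        l = l_end * blend
        candidates.append(np.stack([s, l], axis=1))
    return candidates

cands = sample_candidate_set(s_max=50.0, l_max=3.0)
```

Each profile starts at zero lateral offset and smoothly reaches its sampled terminal offset, giving the evaluator a spread of swerve-left/keep/swerve-right options.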
- 4. The night scene oriented automatic driving perception planning collaboration method of claim 1, wherein the process of training the automatic driving multi-task perception model is: enhancing the original night near-infrared image and splicing the enhanced result with the original night near-infrared image to form a dimming image group; spatially aligning the laser point cloud data with the dimming image group, and then extracting point cloud features and image features; taking the point cloud features as queries and the image features as keys and values, performing cross-modal attention fusion on the point cloud features and the image features to obtain enhanced fusion features, and extracting shared features from the enhanced fusion features; extracting the initial semantic features corresponding to each parallel decoding task, and fusing the shared features with each initial semantic feature through a gating mechanism to obtain the final fusion features corresponding to each parallel decoding task; and performing parallel decoding on each final fusion feature to obtain the corresponding decoding results, and training the pre-constructed automatic driving multi-task perception model with a multi-task loss function.
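The cross-modal fusion step of claim 4 (point cloud features as queries, image features as keys and values) can be sketched with single-head scaled dot-product attention; the residual connection and the feature dimensions are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(point_feat, image_feat):
    """Single-head scaled dot-product attention: point-cloud features are the
    queries, image features supply the keys and values; a residual connection
    (an assumption here) preserves the original geometric features.
    Shapes: point_feat (Np, C), image_feat (Ni, C) -> output (Np, C)."""
    d = point_feat.shape[-1]
    scores = point_feat @ image_feat.T / np.sqrt(d)   # (Np, Ni) similarity
    attended = softmax(scores, axis=-1) @ image_feat  # image context per point
    return point_feat + attended
```

Each point feature is enriched with a weighted mixture of image features it attends to, which is how sparse geometry can borrow semantic texture from the image branch.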
- 5. The night scene oriented automatic driving perception planning collaboration method of claim 4, wherein the original night near-infrared image is enhanced by using a depth curve estimation network, and the generated multi-exposure images are spliced with the original night near-infrared image in the channel dimension to form the dimming image group; the process of generating the multi-exposure images is: I_k = f_k(I; α_k); wherein I_k is the k-th multi-exposure image; f_k is the k-th exposure transformation function; I is the original night near-infrared image; and α_k is a network-learned parameter controlling the brightness stretching and contrast balance at different exposures.
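The curve-estimation enhancement of claim 5 resembles Zero-DCE-style quadratic light-enhancement curves; the sketch below assumes that form (the patent does not publish its exact transformation), with hypothetical α values and iteration count.

```python
import numpy as np

def exposure_transform(image, alpha, n_iter=4):
    """Apply the quadratic light-enhancement curve LE(x) = x + a*x*(1-x)
    iteratively (a Zero-DCE-style curve, assumed here). Pixel values are
    expected in [0, 1]; the curve maps [0, 1] back into [0, 1]."""
    x = np.clip(image, 0.0, 1.0)
    for _ in range(n_iter):
        x = x + alpha * x * (1.0 - x)
    return x

def multi_exposure_group(image, alphas=(-0.2, 0.2, 0.5)):
    """Stack the original image with K exposure-transformed versions along the
    channel dimension to form the 'dimming image group' of claim 4/5.
    The alpha values are illustrative, not learned here."""
    exposures = [exposure_transform(image, a) for a in alphas]
    return np.concatenate([image] + exposures, axis=-1)
```

Positive α brightens dark regions while negative α dims them, so the group spans several virtual exposures of the same night scene.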
- 6. The night scene oriented automatic driving perception planning collaboration method of claim 4, wherein in the process of fusing the shared features and the initial semantic features through the gating mechanism, the fusion weight of the shared features and the initial semantic features is generated through the gating mechanism: g = σ(W_g[F_s; F_t] + b_g); wherein g is the fusion weight; W_g is the weight matrix of the gated linear transformation; b_g is the bias term of the gating mechanism, both of which are learnable parameters; F_s is the shared feature; F_t is each initial semantic feature; [·; ·] represents feature concatenation; and σ(·) represents the Sigmoid function.
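The gating mechanism of claim 6 can be sketched as follows. The combination rule g·shared + (1−g)·semantic is an assumption, since the claim specifies only how the weight is generated; W and b are learnable in the real model and are passed explicitly here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(shared, semantic, W, b):
    """Gate: g = sigmoid([shared; semantic] @ W + b).
    Output (assumed combination): g * shared + (1 - g) * semantic,
    i.e. the gate decides per-channel how much shared vs. task-specific
    feature to keep. shared, semantic: (C,); W: (2C, C); b: (C,)."""
    concat = np.concatenate([shared, semantic], axis=-1)
    g = sigmoid(concat @ W + b)
    return g * shared + (1.0 - g) * semantic
```

With zero weights and bias the gate is exactly 0.5, so the output is the plain average of both feature vectors, which makes the mechanism easy to sanity-check.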
- 7. The night scene oriented automatic driving perception planning collaboration method of claim 4, wherein the multi-task loss function is: L_total = (1/(2σ_1²))L_det + (1/(2σ_2²))L_dice + (1/(2σ_3²))L_cls + log σ_1 + log σ_2 + log σ_3; wherein L_total is the total loss function; L_det is the target detection loss; L_dice is the segmentation Dice loss; L_cls is the material classification cross-entropy loss; and σ_1, σ_2, σ_3 are learnable task uncertainty parameters.
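The uncertainty-weighted multi-task loss of claim 7 can be sketched in the homoscedastic-uncertainty style of Kendall et al.; the exact parameterization used below (optimizing s_k = log σ_k² for numerical stability) is an assumption, not the patent's stated form.

```python
import numpy as np

def multitask_loss(l_det, l_dice, l_cls, log_sigmas):
    """Uncertainty-weighted total loss (assumed Kendall-style form):
    L = sum_k exp(-s_k) * L_k + s_k, where s_k = log(sigma_k^2) is a
    learnable per-task uncertainty parameter. A large s_k down-weights
    a noisy task's loss, and the +s_k term penalizes ignoring the task."""
    losses = np.array([l_det, l_dice, l_cls], dtype=float)
    s = np.asarray(log_sigmas, dtype=float)
    return float(np.sum(np.exp(-s) * losses + s))
```

At s_k = 0 for all tasks, each loss enters with unit weight, so the total reduces to a plain sum of the three task losses.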
- 8. A night scene oriented automatic driving perception planning collaboration system, characterized in that it is based on the night scene oriented automatic driving perception planning collaboration method as claimed in any one of claims 1-7, and comprises: a multi-task perception and conversion module, used for acquiring night near-infrared images and laser point cloud data, performing multi-task perception by using the trained automatic driving multi-task perception model, and uniformly converting the multi-task perception result into a vehicle coordinate system; a candidate trajectory set forming module, used for generating a plurality of candidate trajectories based on the current motion state of the vehicle and the navigation target path, and sampling the plurality of candidate trajectories to form a candidate trajectory set; an optimal trajectory screening module, used for carrying out safety evaluation, smoothness evaluation and efficiency evaluation on each candidate trajectory in the candidate trajectory set, carrying out weighted summation of the normalized evaluation cost values to obtain a comprehensive cost value, and selecting the trajectory with the minimum comprehensive cost value as the optimal trajectory; and an automatic driving decision link forming module, used for issuing the selected optimal trajectory as a vehicle control instruction to a vehicle control executor, so as to form a complete perception-planning-control automatic driving decision link.
- 9. A computer readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the steps of the night scene oriented automatic driving perception planning collaboration method as claimed in any one of claims 1-7.
- 10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the steps of the night scene oriented automatic driving perception planning collaboration method as claimed in any one of claims 1-7.
Description
Night scene-oriented automatic driving perception planning collaboration method and system

Technical Field

The invention belongs to the technical field of automatic driving, and particularly relates to an automatic driving perception planning collaboration method and system for night scenes.

Background

The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.

Reliable operation of existing automatic driving systems at night or in low-light environments faces serious challenges, the core reasons for which can be summarized as the inherent technical limitations of the perception module at night and the lack of effective collaborative architecture design between the perception and planning modules. At the perception level, mainstream multi-modal schemes relying on a visible light camera and a lidar have fundamental defects at night. Under complex interference such as low illumination, glare and reflection, the image quality of the visible light camera degrades severely, so that texture-dependent semantic perception capability is significantly reduced. Although the lidar is not affected by illumination, the sparsity of the point cloud limits the detection of distant small targets, and semantic understanding cannot be completed by the lidar alone. Existing multi-modal fusion algorithms are designed for daytime data, and thus struggle to balance the feature imbalance between night-time image-quality degradation and point-cloud sparsity. In addition, in night scenes, multi-task networks face aggravated conflicts between the feature demands of different tasks, often causing severe negative transfer, so that overall model performance is unstable and insufficiently reliable.
The degradation of perception capability is further amplified by the split between the perception and planning modules in existing architectures. Current systems typically pass the results of each perception task to the planning module as independent, discrete data structures. This lack of a unified environment-model representation makes it difficult for the planning layer to accurately understand and exploit the uncertainty inherent in the perception results. Because these perceived uncertainties cannot be quantitatively evaluated, the planning algorithm may make overly aggressive or overly conservative decisions at night, ultimately reducing system-level safety and smoothness.

Disclosure of Invention

In order to solve the above technical problems, the invention provides an automatic driving perception planning collaboration method and system for night scenes, which realize closed-loop decision-making based on uncertainty evaluation and ultimately enhance the overall safety, smoothness and reliability of an automatic driving vehicle in complex night scenes.

In order to achieve the above purpose, the present invention adopts the following technical scheme. The first aspect of the invention provides an automatic driving perception planning collaboration method for night scenes.
In one or more embodiments, an automatic driving perception planning collaboration method for night scenes is provided, including: acquiring a night near-infrared image and laser point cloud data, performing multi-task perception by using a trained automatic driving multi-task perception model, and uniformly converting the multi-task perception result into a vehicle coordinate system; generating a plurality of candidate trajectories based on the current motion state of the vehicle and the navigation target path, and sampling the candidate trajectories to form a candidate trajectory set; carrying out safety evaluation, smoothness evaluation and efficiency evaluation on each candidate trajectory in the candidate trajectory set, carrying out weighted summation of the normalized evaluation cost values to obtain a comprehensive cost value, and selecting the trajectory with the minimum comprehensive cost value as the optimal trajectory; and issuing the selected optimal trajectory as a vehicle control instruction to a vehicle control executor, so as to form a complete perception-planning-control automatic driving decision link. The multi-task perception result comprises a target detection result, a drivable region segmentation mask, a lane line detection result and a material classification result, wherein the target detection result is converted into obstacle information, the drivable region segmentation mask is converted into feasible region boundary parameters, the lane line detection result is converted into a parameterized lane center line, and the material classification result is converted into a road friction coefficient. As one embodiment, the candidate t