CN-122002523-A - Collaborative perception delay alignment method based on space-time co-sense calibration
Abstract
The invention provides a collaborative sensing delay alignment method based on spatio-temporal co-sensing calibration. The method comprises: receiving time-stamped delayed sensing data from at least one cooperating vehicle; using historical sensing data and historical alignment results to align the delayed sensing data to the current moment through a time alignment model that includes a historical-alignment attention mechanism; calculating the motion-state curvature of a target vehicle; quantifying, based on the aligned data and the motion-state curvature, the spatial inconsistency caused by differences in delay and motion state, and generating an inconsistency factor; constructing and adjusting the information matrix of a spatial co-sensing calibration network based on the inconsistency factor; and optimizing the spatial co-sensing calibration network through a graph optimization algorithm with low-delay observation data as a fixed constraint, thereby achieving spatial-consistency calibration of multi-view sensing data. The method realizes collaborative optimization in both the time and space domains.
Inventors
- LIU JIANHANG
- ZHANG DIANZHENG
- WU JIANGWAN
Assignees
- 中国石油大学(华东) (China University of Petroleum (East China))
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2026-02-11
Claims (10)
- 1. A collaborative perception delay alignment method based on space-time co-sensing calibration, comprising: time co-sensing calibration, namely receiving time-stamped delayed sensing data from at least one cooperating vehicle, aligning the delayed sensing data to the current moment through a time alignment model containing a historical-alignment attention mechanism by utilizing historical sensing data and historical alignment results, and calculating the motion-state curvature of a target vehicle; and spatial co-sensing calibration, comprising: quantifying the spatial inconsistency caused by differences in delay and motion state based on the aligned data and the motion-state curvature, generating an inconsistency factor, constructing and adjusting an information matrix of a spatial co-sensing calibration network based on the inconsistency factor, and optimizing the spatial co-sensing calibration network through a graph optimization algorithm with low-delay observation data as a fixed constraint, thereby achieving spatial-consistency calibration of multi-view sensing data.
- 2. The method of claim 1, wherein, in the time co-sensing calibration, aligning the delayed sensing data to the current moment through the time alignment model including the historical-alignment attention mechanism, using the historical sensing data and the historical alignment results, comprises: generating a preliminary aligned detection frame at the current moment through time coding, based on the historical detection-frame sequence; and taking the preliminary aligned detection frame as the query vector and the historical aligned detection frame at the previous moment as the key and value vectors, and executing a multi-head attention calculation to generate the final aligned detection frame at the current moment.
- 3. The method of claim 1, wherein calculating the motion-state curvature of the target vehicle comprises: calculating an instantaneous path curvature of the target vehicle based on the aligned position data; and comparing the instantaneous path curvature with a preset threshold to determine whether the target vehicle is in a straight-driving state or a turning state.
- 4. The method of claim 3, wherein quantifying the spatial inconsistency caused by differences in delay and motion state and generating the inconsistency factor comprises: calculating a position-coordinate inconsistency factor from the delay time of the delayed sensing data and the instantaneous path curvature, using a position-coordinate adjustment factor; and calculating a yaw-angle inconsistency factor from the delay time and the instantaneous path curvature, using an angle adjustment factor.
- 5. The method of claim 4, wherein quantifying the spatial inconsistency caused by differences in delay and motion state and generating the inconsistency factor further comprises: when the instantaneous path curvature is smaller than the threshold, judging the motion state to be straight driving; and when the instantaneous path curvature is greater than the threshold, judging the motion state to be dynamic turning.
- 6. The method of claim 1, wherein constructing and adjusting the information matrix of the spatial co-sensing calibration network based on the inconsistency factor comprises: constructing a bipartite graph structure comprising a set of detection-unit nodes and a set of target-unit nodes, wherein the detection-unit nodes correspond to the vehicles sending perception data, and the target-unit nodes are generated by clustering similar detection frames from different detection units; edges in the bipartite graph represent the observation relation of a detection-unit node to a target-unit node, and each edge is associated with relative pose information obtained from the observation.
- 7. The method of claim 6, wherein using low-latency observation data as a fixed constraint comprises: during generation of the target-unit nodes, evaluating the delay weight of each detection frame; if a detection frame with a delay weight larger than a preset threshold exists, marking the corresponding target-unit node as a fixed node; and, in the graph optimization algorithm, keeping the pose parameters of the fixed nodes unchanged so that they serve as constraint conditions.
- 8. The method of claim 1, wherein constructing and adjusting the information matrix of the spatial co-sensing calibration network based on the inconsistency factor further comprises introducing a delay-information weighting mechanism in the aggregation of the target-unit detection frames, specifically: when the delay weight of a detection frame is greater than a preset value, outputting that detection frame directly as the aggregation result, so that it participates unmodified in aggregation and graph optimization as a hard-constraint node; and when the delay weight of a detection frame is smaller than or equal to the preset value, performing spatial aggregation according to the weight distribution.
- 9. The method of claim 1, further comprising, in the time co-sensing calibration step: during processing of the detection frames, dynamically adjusting the delay-weight attribute of all detection frames according to the delay time of the delayed sensing data, wherein the longer the delay time, the lower the delay weight.
- 10. The method of claim 6, wherein constructing and adjusting the information matrix of the spatial co-sensing calibration network based on the inconsistency factor further comprises: for each edge in the bipartite graph, calculating the position confidence and yaw-angle confidence of the observation represented by that edge, based on the inconsistency factor corresponding to the edge and the speed change and azimuth change of the observed vehicle; and updating the information matrix corresponding to the edge using the position confidence and the yaw-angle confidence.
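The curvature test and inconsistency factors of claims 3 through 5 can be illustrated with a minimal Python sketch. The Menger-curvature formula, the adjustment factors `k_pos` and `k_yaw`, and the turn threshold are illustrative assumptions for this sketch; the patent does not fix the exact formulas or values.

```python
import math

def menger_curvature(p1, p2, p3):
    """Instantaneous path curvature from three consecutive aligned positions
    (Menger curvature: 4 * triangle area / product of side lengths)."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    if a * b * c == 0:
        return 0.0
    # Twice the signed triangle area, taken as an absolute value.
    area2 = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                - (p3[0] - p1[0]) * (p2[1] - p1[1]))
    return 2.0 * area2 / (a * b * c)

def inconsistency_factors(delay_s, curvature, k_pos=1.0, k_yaw=1.0,
                          turn_threshold=0.05):
    """Quantify spatial inconsistency growing with delay and motion state.
    k_pos / k_yaw stand in for the position and angle adjustment factors
    of claim 4 (hypothetical defaults)."""
    turning = curvature > turn_threshold                # claims 3 and 5
    pos_factor = k_pos * delay_s * (1.0 + curvature)    # position coordinates
    yaw_factor = k_yaw * delay_s * curvature            # yaw angle
    return {"turning": turning,
            "position_inconsistency": pos_factor,
            "yaw_inconsistency": yaw_factor}
```

On a straight path the curvature is zero, so the yaw-angle inconsistency vanishes and only the delay-proportional position term remains, matching the straight-driving case of claim 5.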
Description
Collaborative perception delay alignment method based on space-time co-sense calibration
Technical Field
The invention relates to the technical field of computer network communication, and in particular provides a collaborative sensing delay alignment method based on space-time co-sensing calibration.
Background
Inter-vehicle cooperative sensing technology expands the sensing range of a vehicle through information sharing and can effectively improve road safety. However, communication delays between vehicles may reduce cooperative sensing accuracy and thereby affect traffic safety. Mitigating the effect of delay has therefore become a major problem for collaborative perception technology. Delay alignment is an effective method for reducing the influence of delay; however, because of errors generated in the alignment process and the differences among multiple viewing angles, when data with different delays are aligned to the same time point, the position information and yaw angle of the same observed target deviate across views. Such inconsistencies may accumulate and amplify during data fusion, ultimately causing a substantial reduction in perception accuracy.
Disclosure of Invention
The invention provides a collaborative sensing delay alignment method based on space-time co-sensing calibration. Time co-sensing calibration comprises: receiving time-stamped delayed sensing data from at least one cooperating vehicle; using historical sensing data and historical alignment results to align the delayed sensing data to the current moment through a time alignment model comprising a historical-alignment attention mechanism; and calculating the motion-state curvature of a target vehicle. The delayed sensing data comprises at least one of an intermediate feature map extracted from raw sensor data, a target detection frame, and the self-pose information of the transmitting vehicle. Spatial co-sensing calibration comprises: quantifying the spatial inconsistency caused by differences in delay and motion state based on the aligned data and the motion-state curvature, and generating an inconsistency factor; constructing and adjusting an information matrix of a spatial co-sensing calibration network based on the inconsistency factor; and optimizing the spatial co-sensing calibration network through a graph optimization algorithm with low-delay observation data as a fixed constraint, thereby realizing spatial-consistency calibration of multi-view sensing data.
Preferably, in the time co-sensing calibration, aligning the delayed sensing data to the current moment through the time alignment model including the historical-alignment attention mechanism, using the historical sensing data and the historical alignment results, includes: generating a preliminary aligned detection frame at the current moment through time coding, based on the historical detection-frame sequence; and taking the preliminary aligned detection frame as the query vector and the historical aligned detection frame at the previous moment as the key and value vectors, and executing a multi-head attention calculation to generate the final aligned detection frame at the current moment.
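The attention-based temporal alignment described above can be sketched in NumPy. Here a box state is a 4-d vector (x, y, yaw, speed), the learned time coding is replaced by a simple constant-velocity extrapolation, and a single attention head is used for brevity; all of these are illustrative simplifications of the patent's model.

```python
import numpy as np

def time_encode(history):
    """Preliminary alignment: extrapolate the current box from the historical
    detection-frame sequence. A constant-velocity step stands in for the
    patent's learned time coding (illustrative assumption)."""
    h = np.asarray(history, dtype=float)        # (T, d) sequence of box states
    return h[-1] + (h[-1] - h[-2])              # linear extrapolation

def attention_align(history, prev_aligned):
    """Final alignment via attention: the preliminary aligned box is the
    query; previously aligned boxes are the keys and values."""
    q = time_encode(history)                    # (d,) query vector
    kv = np.asarray(prev_aligned, dtype=float)  # (N, d) keys = values
    scores = kv @ q / np.sqrt(q.size)           # scaled dot-product scores
    w = np.exp(scores - scores.max())
    w /= w.sum()                                # softmax attention weights
    return w @ kv                               # attended final aligned box
```

With a single historical candidate the attention weight is 1 and the output equals that candidate; with several candidates the output is a softmax-weighted blend, which is how the historical alignment result smooths the extrapolated box.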
Preferably, calculating the motion-state curvature of the target vehicle includes: calculating an instantaneous path curvature of the target vehicle based on the aligned position data; and comparing the instantaneous path curvature with a preset threshold to determine whether the target vehicle is in a straight-driving state or a turning state.
Preferably, quantifying the spatial inconsistency caused by differences in delay and motion state and generating the inconsistency factor comprises: calculating a position-coordinate inconsistency factor from the delay time of the delayed sensing data and the motion-state curvature, using a position-coordinate adjustment factor; and calculating a yaw-angle inconsistency factor from the delay time and the motion-state curvature, using an angle adjustment factor.
Preferably, quantifying the spatial inconsistency further comprises: when the motion-state curvature is smaller than the threshold, judging the motion state to be straight driving; and when the motion-state curvature is larger than the threshold, judging the motion state to be dynamic turning.
Preferably, constructing and adjusting the information matrix of the spatial co-sensing calibration network based on the inconsistency factor includes: constructing a bipartite graph structure comprising a set of detection-unit nodes and a set of target-unit nodes, wherein the detection-unit nodes correspond to the vehicles sending perception data, and the target-unit nodes are generated by clustering similar detection frames from different detection units; edges in the bipartite graph represent the observation relation of a detection-unit node to a target-unit node, and each edge is associated with relative pose information obtained from the observation.
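The delay-weighted aggregation of detection frames into a target-unit node, with low-delay detections promoted to fixed hard-constraint nodes, can be sketched as follows. The exponential decay for the delay weight, the decay constant `tau`, and the hard-constraint threshold are illustrative choices not specified by the patent.

```python
import math

def delay_weight(delay_s, tau=0.3):
    """Delay-weight attribute of a detection frame: the longer the delay,
    the lower the weight. Exponential decay is an illustrative choice."""
    return math.exp(-delay_s / tau)

def aggregate_target_node(detections, hard_threshold=0.9):
    """Aggregate clustered detection frames into one target-unit node.
    detections: list of (pose, delay_s), where pose is (x, y, yaw).
    A detection whose delay weight exceeds the threshold becomes a fixed
    (hard-constraint) node and is output unchanged; otherwise the poses
    are spatially aggregated according to the weight distribution."""
    weighted = [(pose, delay_weight(d)) for pose, d in detections]
    hard = [pose for pose, w in weighted if w > hard_threshold]
    if hard:
        # Fixed node: its pose is held constant during graph optimization.
        return hard[0], True
    total = sum(w for _, w in weighted)
    fused = tuple(sum(w * pose[i] for pose, w in weighted) / total
                  for i in range(3))
    return fused, False   # soft node: weighted spatial aggregation
```

In a full implementation, the `True` flag would mark the node as fixed in the graph optimizer, and each edge's information matrix would additionally be scaled by the position and yaw-angle confidences derived from the inconsistency factors.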