CN-122020571-A - Aircraft three-dimensional obstacle avoidance sensing method and system based on radar vision fusion
Abstract
The invention provides an aircraft three-dimensional obstacle avoidance sensing method and system based on radar-vision fusion, relates to the technical field of autonomous obstacle avoidance control of aircraft, and addresses the low accuracy, slow response, and high miss rate of existing obstacle avoidance schemes. The method first acquires guiding radar data and path image data and performs time-synchronization alignment; it then performs feature extraction and cross-correlation with a pre-constructed lightweight feature fusion network to obtain a fused multi-modal feature map; from the multi-modal feature map and the aircraft's current flight state data it generates a dynamic space grid containing the predicted flight path and detected obstacles; finally, it optimizes the precision of the dynamic space grid with a preset loss function and, based on the optimized grid, outputs a decision result containing aircraft maneuvering instructions. The method improves pose sensing precision, obstacle detection rate, real-time performance, and stability, ensuring the aircraft's three-dimensional obstacle avoidance sensing effect.
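The patent discloses no implementation, but the time-synchronization step of the abstract (and claim 2) can be illustrated with a minimal sketch: the radar clock is the reference, and a delayed sensor stream (camera frames or attitude samples) is compensated by linear interpolation onto the radar timestamps. All names and the sample values below are hypothetical.

```python
def align_to_radar(radar_ts, sensor_ts, sensor_vals):
    """Linearly interpolate a delayed sensor stream onto radar timestamps.

    radar_ts    -- radar timestamps, used as the reference clock
    sensor_ts   -- timestamps of the delayed stream (camera or attitude)
    sensor_vals -- one sample per timestamp of that stream
    Values outside the sensor's time span are clamped to the end samples.
    """
    out = []
    for t in radar_ts:
        if t <= sensor_ts[0]:
            out.append(sensor_vals[0])
        elif t >= sensor_ts[-1]:
            out.append(sensor_vals[-1])
        else:
            # find the bracketing sample pair and interpolate linearly
            i = max(j for j in range(len(sensor_ts)) if sensor_ts[j] <= t)
            f = (t - sensor_ts[i]) / (sensor_ts[i + 1] - sensor_ts[i])
            out.append(sensor_vals[i] + f * (sensor_vals[i + 1] - sensor_vals[i]))
    return out

# Hypothetical example: camera samples lag the radar clock by about 20 ms.
radar_ts = [0.00, 0.10, 0.20, 0.30]
cam_ts   = [0.02, 0.12, 0.22, 0.32]
cam_val  = [1.0, 2.0, 3.0, 4.0]   # e.g. a scalar feature per camera frame
aligned = align_to_radar(radar_ts, cam_ts, cam_val)
```

In practice each aligned value would be a full image feature or attitude quaternion rather than a scalar, but the clamping-plus-interpolation pattern is the same.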
Inventors
- CAO KUN
- YANG LICHUAN
- ZHOU TAO
- SHAO JINYI
- NING WENHUI
- CHEN SENLIN
Assignees
- Sichuan Tengden Technology Co., Ltd. (四川腾盾科技有限公司)
- Sichuan Tengden Liangyuan Intelligent Technology Co., Ltd. (四川腾盾良远智能科技有限公司)
Dates
- Publication Date: 2026-05-12
- Application Date: 2026-04-13
Claims (10)
- 1. An aircraft three-dimensional obstacle avoidance sensing method based on radar-vision fusion, characterized by comprising the following steps: S1, while the aircraft flies toward a target, acquiring guiding radar data from an external radar system and path image data from a guiding camera on the aircraft, and performing time-synchronization alignment; S2, performing feature extraction and cross-correlation on the synchronized guiding radar data and path image data with a pre-constructed lightweight feature fusion network, and obtaining a fused multi-modal feature map; S3, generating a dynamic space grid comprising a predicted flight path and detected obstacles according to the multi-modal feature map and current flight state data of the aircraft; and S4, optimizing the precision of the dynamic space grid with a preset loss function, and outputting a decision result containing aircraft maneuvering instructions based on the optimized dynamic space grid, thereby completing the aircraft's three-dimensional obstacle avoidance perception.
- 2. The aircraft three-dimensional obstacle avoidance sensing method of claim 1, wherein in step S1 the relative angle between the guiding camera and the aircraft is also acquired, to characterize the pointing deviation of the guiding camera during the flight toward the target; when time-synchronization alignment is performed, the timestamps of the guiding radar data are taken as the reference, the respective delays of the path image data and the aircraft attitude information are compensated by interpolation, and the geodetic pose of the guiding camera is then calculated, this geodetic pose being used to assist in generating the dynamic space grid in step S3.
- 3. The aircraft three-dimensional obstacle avoidance sensing method of claim 2, wherein when calculating the geodetic pose of the guiding camera, a proportional guidance method is used to determine the predicted flight path of the aircraft, and a direct calculation combines the camera-to-aircraft relative angle with the aircraft attitude information as follows: R_cam_geo = R_att · R_rel, where R_cam_geo is the pose of the guiding camera in the geodetic coordinate system; R_att is the aircraft attitude rotation matrix, obtained from the aircraft attitude information; and R_rel is the relative rotation matrix of the guiding camera, obtained from the relative angle between the guiding camera and the aircraft. When time-synchronization alignment is performed, data frames in the guiding radar data whose jump distance exceeds a preset threshold still use the timestamp as the reference, but a low-confidence mark is attached to them and their weight is reduced during the feature fusion of step S2.
- 4. The aircraft three-dimensional obstacle avoidance sensing method of claim 1, wherein in step S2 the distance information in the guiding radar data and the pixel information in the path image data are associated with one another according to the synchronized data and used as input to the lightweight feature fusion network; the lightweight feature fusion network adopts a ShuffleNetV2 network that takes the distance information in the guiding radar data as depth prior data, strengthens the regions of the path image data that match the guiding radar data through a spatial attention mechanism, and suppresses the regions of the path image data that are irrelevant to the flight path.
- 5. The aircraft three-dimensional obstacle avoidance sensing method of claim 4, wherein the fused calculation of the ShuffleNetV2 network is as follows: F_fused = w · F_radar + (1 − w) · F_vis, where F_fused is the fused multi-modal feature map comprising visual pixel information and radar distance information; F_vis is the visual feature map extracted from the path image data by the ShuffleNetV2 network; F_radar is the radar feature map converted from the guiding radar data; and w is a feature weight assigned dynamically from the similarity between the visual and radar features, with the radar features dominant: the higher the confidence of the radar feature map, the higher its weight, and if the confidence of the radar feature map falls below a preset threshold, the visual features become dominant instead. The fused multi-modal feature map expresses the characteristics of obstacles on the aircraft's flight path to the target, i.e. the pixels corresponding to each obstacle in the path image data and the radar distance associated with those pixels.
- 6. The aircraft three-dimensional obstacle avoidance sensing method of claim 1, wherein in step S3 the current flight state data of the aircraft comprise the geodetic pose of the guiding camera and the current speed of the aircraft; taking the current position of the aircraft as the origin, the three-dimensional space is divided, according to the multi-modal feature map and the flight state data, into identical or differing preset distance ranges ahead of, behind, to the left and right of, and near the predicted flight path, so as to generate the dynamic space grid; the dynamic space grid adopts an adaptive resolution adjustment strategy, applying a dense coding mode to regions within a first preset distance range of the predicted flight path and a sparse sampling mode to regions within a second preset distance range of it, wherein the maximum value of the first preset distance range is smaller than the minimum value of the second preset distance range.
- 7. The method of claim 6, wherein during flight toward the target the dynamic space grid is updated incrementally with each output frame of the multi-modal feature map: each frame adds new grid cells in the three-dimensional space along the predicted flight path, and regions that fall outside that space after the aircraft's displacement are deleted; a full-resolution modeling mode is applied to regions within a third preset distance range of the predicted flight path, the set corresponding to the third preset distance range being a subset of the set corresponding to the first preset distance range, with the minimum value of the third preset distance range equal to the minimum value of the first preset distance range; and when any data frame is archived, regions outside the third preset distance range are stored using octree-structure compression.
- 8. The method of claim 1, wherein in step S4 the preset loss function is a weighted loss function of the form L = λ1·L_reproj + λ2·L_grid + λ3·L_bg + λ4·L_radar + λ5·L_path, where λ1 to λ5 are the preset weights of the corresponding loss terms; L_reproj is the re-projection error penalty for critical obstacles; L_grid is the general penalty on the dynamic space grid; L_bg is the background loss; L_radar is the radar constraint loss; and L_path is the path consistency loss. As the aircraft flies toward the target, once the aircraft-to-target distance falls below a preset distance threshold, the weight λ4 of the radar constraint loss is reduced, another weight is increased by a preset magnitude over its initial value, and the sum of the remaining weights is adaptively reduced by the same preset magnitude, with the adjustment evaluated over a neighborhood convolution of a preset size.
- 9. The aircraft three-dimensional obstacle avoidance sensing method of claim 8, wherein, based on the optimized dynamic space grid, a dynamic-threshold decision mode is adopted: when an obstacle feature appears in a predicted region of the dynamic space grid and the IoU value corresponding to that obstacle feature reaches a preset early-warning threshold, a multi-stage early-warning decision is triggered; for a high risk level, a hard avoidance maneuver instruction including lateral acceleration constraints is output; for a medium risk level, an attitude change instruction including pitch angle adjustment is output; and for a low risk level, only the proportional guidance method is used to update the flight path plan and refresh the aircraft's predicted flight path.
- 10. An aircraft three-dimensional obstacle avoidance sensing system for implementing the aircraft three-dimensional obstacle avoidance sensing method of any one of claims 1 to 9, the system comprising the following functional modules: a data synchronization module, for acquiring guiding radar data from an external radar system and path image data from a guiding camera on the aircraft and performing time-synchronization alignment; a feature fusion module, for performing feature extraction and cross-correlation on the guiding radar data and path image data with a pre-constructed lightweight feature fusion network and obtaining a fused multi-modal feature map; a grid generation module, for generating a dynamic space grid comprising a predicted flight path and detected obstacles according to the multi-modal feature map and current flight state data of the aircraft; and a loss optimization module, for optimizing the precision of the dynamic space grid with a preset loss function and outputting a decision result containing aircraft maneuvering instructions based on the optimized dynamic space grid.
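The weighted loss of claim 8 can be sketched as a plain linear combination of the five named terms, with the radar-constraint weight rescheduled once the aircraft-to-target distance drops below the threshold. The term names, initial weights, rescheduling magnitude, and the even redistribution of the shifted weight are all illustrative assumptions; the patent gives only the qualitative behaviour.

```python
def weighted_loss(terms, weights):
    """Combine the five loss terms of claim 8 as a weighted sum.

    terms   -- dict of loss values: reproj, grid, background, radar, path
    weights -- dict of the corresponding preset weights (lambda_1..lambda_5)
    """
    return sum(weights[k] * terms[k] for k in terms)

def reschedule(weights, distance, threshold, magnitude=0.1):
    """Shift weight away from the radar constraint near the target.

    Assumed behaviour: below `threshold`, the radar weight drops by
    `magnitude` (floored at zero) and the deficit is spread evenly over
    the other terms, so the weights keep the same total.
    """
    w = dict(weights)
    if distance < threshold:
        w["radar"] = max(0.0, w["radar"] - magnitude)
        share = (weights["radar"] - w["radar"]) / (len(w) - 1)
        for k in w:
            if k != "radar":
                w[k] += share
    return w

# Illustrative preset weights (not disclosed values), summing to 1.0.
weights = {"reproj": 0.3, "grid": 0.2, "background": 0.1,
           "radar": 0.3, "path": 0.1}
near = reschedule(weights, distance=50.0, threshold=100.0)
```

A smoother alternative would blend the weights continuously with distance instead of switching at the threshold; the claim's wording ("after the distance ... is smaller than a preset distance threshold value") suggests the hard switch sketched here.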
Description
Aircraft three-dimensional obstacle avoidance sensing method and system based on radar vision fusion

Technical Field

The invention relates to the technical field of autonomous obstacle avoidance control of aircraft, in particular to an aircraft three-dimensional obstacle avoidance sensing method and system based on radar-vision fusion.

Background

In the autonomous navigation and task execution of low-altitude aircraft, three-dimensional environment perception is a key basis for safe and efficient operation. Particularly in complex urban or near-ground environments, aircraft must maneuver at high speed while avoiding various static and dynamic obstacles such as high-rise buildings, transmission towers, communication facilities, and other airborne targets. Such scenarios place extremely high demands on the accuracy, real-time performance, and robustness of the perception system. Current mainstream three-dimensional environment sensing methods rely on pure-vision or lidar schemes; they perform well in some static or low-speed scenes but still face significant challenges under high-speed, strongly maneuvering conditions. Existing vision-based three-dimensional reconstruction methods generally rely on visual odometry for pose estimation; however, when an aircraft performs highly dynamic operations such as sharp turns, dives, or other rapid maneuvers, pose estimation errors accumulate rapidly, so the constructed environment model deviates severely, obstacle localization fails, and reliable obstacle avoidance decisions become difficult to support.
In addition, although external ranging information sources, such as distance data from a remote guidance system, are already available in some missions, existing pure-vision perception frameworks generally do not fuse such prior information effectively, so depth estimation accuracy in the mid-range region is insufficient and the anti-collision requirements of high-risk areas are difficult to meet. Traditional three-dimensional perception systems usually model the environment with a voxel grid or a fixed-range feature map, distributing computing resources uniformly without dynamically focusing on the key region ahead of the flight path. This static modeling strategy not only produces a large amount of redundant computation but also delays environment updates in high-risk regions, so obstacle detection and response cannot be completed within the limited time window. Meanwhile, most existing perception models use a globally uniform loss function during training or optimization and cannot give higher priority to the high-threat obstacles of a given mission scene (such as slender towers or dense building clusters), so the miss rate for critical obstacles is high, seriously threatening the navigation safety of the aircraft. The existing three-dimensional environment sensing technology therefore still has obvious deficiencies in dynamic adaptability, multi-source information fusion, scene focusing capability, and obstacle priority modeling, and struggles to meet the comprehensive requirements of accurate, real-time, and reliable obstacle avoidance sensing for aircraft in high-speed, complex environments.
Therefore, a novel three-dimensional obstacle avoidance sensing method is needed, one that can incorporate external ranging guidance information, offers dynamic modeling capability, and enhances the perception of critical obstacles, so as to improve the autonomous safe-flight capability of aircraft in complex low-altitude environments.

Disclosure of Invention

The invention aims to solve the limitations of existing three-dimensional obstacle avoidance sensing methods in high-speed maneuvering scenarios, namely low precision, slow response, and a high miss rate caused by large pose errors, lack of external ranging information fusion, static redundancy in environment modeling, and insufficient perception of critical obstacles, and provides an aircraft three-dimensional obstacle avoidance sensing method and system based on radar-vision fusion. By jointly exploiting the distance information of radar guidance and the stabilized viewing angle of a stability-enhanced camera, the invention forms a multi-source-fusion, dynamic-focusing, risk-priority three-dimensional environment sensing scheme that realizes three-dimensional obstacle avoidance sensing for the aircraft. The method improves pose sensing precision, obstacle detection rate, real-time performance, and stability, ensuring the overall advantage of the aircraft's three-dimensional obstacle avoidance sensing. The invention adopts the following technical scheme to achieve this purpose: an aircraft three-dimension