CN-121979250-A - Unmanned aerial vehicle recovery multi-source data fusion positioning method

CN 121979250 A

Abstract

The invention discloses a multi-source data fusion positioning method for unmanned aerial vehicle (UAV) recovery, in the technical field of UAV positioning and control. Multi-source fusion positioning improves the precision and stability with which the recovery system acquires UAV pose information; at the same time, carrying and cooperatively controlling multiple sensors on a long-baseline multi-source sensor servo system effectively widens the system's sensing coverage and working distance and strengthens continuous tracking and accurate positioning of the UAV target under complex environmental conditions. In the method, the UAV sends a recovery request and its flight parameters to the ground end; on receiving the request, the ground end drives the long-baseline multi-source sensor servo system to track the UAV in real time, and high-precision positioning and attitude resolution of the UAV are achieved through visual target recognition, or through spatial registration that fuses lidar point clouds with visual information. High-precision real-time positioning throughout the UAV recovery process is thereby realized.

Inventors

  • Wei Xiaohui
  • Lei Xinyi
  • Liang Weihua
  • Yin Qiaozhi
  • Yao Wanqi
  • Zhong Peilin

Assignees

  • Nanjing University of Aeronautics and Astronautics (南京航空航天大学)

Dates

Publication Date
2026-05-05
Application Date
2026-04-07

Claims (5)

  1. A multi-source data fusion positioning method for unmanned aerial vehicle (UAV) recovery, characterized in that it is based on a long-baseline multi-source sensor servo system comprising a sensing servo unit with two multi-source sensing mechanisms and a communication fusion unit serving as the ground end, and comprises the following steps: S1, the UAV continuously updates and sends, in real time, a recovery request and its current flight parameters to the ground end, and the ground end confirms that the UAV has entered a preset recovery airspace and satisfies preset altitude, heading and speed constraints; S2, when the ground end confirms that the recovery stage has begun and the UAV is still far from the recovery position, performing coarse pointing control of the two multi-source sensing mechanisms based on the flight parameters, correcting the pitch and azimuth angles to track the UAV; S3, after the UAV enters the coverage of the multi-source sensing mechanisms, selecting a visible-light vision sensor or an infrared vision sensor to acquire images according to ambient illumination conditions (the visible-light sensor in daytime or under sufficient illumination, the infrared sensor under low illumination), and completing target recognition and initial positioning based on a deep-learning model; S4, collecting UAV point cloud information: after the UAV approaches the recovery position and enters the effective coverage of the lidar sensors in the multi-source sensing mechanisms, collecting point cloud data from the two lidar sensors, uniformly transforming the point clouds into the vision coordinate system, extracting the target point cloud with the aid of the recognition result of the visible-light or infrared vision sensor, and splicing and fusing the two partial point clouds into an observation point cloud; and S5, based on the observation point cloud and a pre-constructed UAV feature point cloud, performing initial alignment using spatial prior information, building a point-cloud neighbor matching relationship with a KD-tree, introducing a Huber robust loss function for the optimization solution on that basis, and finally achieving high-precision estimation of the UAV position in the ground coordinate system.
  2. The UAV recovery multi-source data fusion positioning method according to claim 1, wherein the two multi-source sensing mechanisms are arranged symmetrically about a baseline origin and face the UAV recovery area, the distance between them being defined as the baseline length; each multi-source sensing mechanism comprises a servo support, an inertial navigation base, a sensor carrier, an infrared vision sensor, a lidar sensor and a visible-light vision sensor; the inertial navigation base is rotatably mounted on the servo support to realize its own yaw calibration, and the sensor carrier is rotatably mounted on the inertial navigation base to realize its own pitch calibration; a gyroscope inside the inertial navigation base acquires the attitude information of the multi-source sensing mechanism for calibration; the sensor carrier has a 几-shaped (inverted-U) profile, with the visible-light vision sensor mounted on its inner side, the infrared vision sensor on its outer side, and the lidar sensor on its upper edge.
  3. The UAV recovery multi-source data fusion positioning method according to claim 1, wherein the rotation angle of each multi-source sensing mechanism is controlled as follows: the commanded angle superimposes the desired rotation angle of the mechanism with an imaging compensation angle, and the control accounts for the variation of that compensation angle; the imaging compensation angle is expressed as a function of the target's lateral and longitudinal pixel coordinates in the image plane, the coordinates of the image principal point, the lateral and longitudinal pixel dimensions, and the full lateral and longitudinal field-of-view angles of the servo system's visual axis (the formulas themselves are not reproduced in this text); the control quantities are then expressed in terms of the abscissa and ordinate of the UAV in the imaging plane of the left multi-source sensing mechanism and the abscissa and ordinate of the UAV in the imaging plane of the right multi-source sensing mechanism; meanwhile, a proportional controller and a differential controller are designed to control each multi-source sensing mechanism; taking the left mechanism as an example, the control commands are defined with a proportional-controller gain and a differential-controller gain, the outputs being the control inputs of the system's roll angle and yaw angle.
  4. The UAV recovery multi-source data fusion positioning method according to claim 1, wherein in step S4 the visible-light or infrared vision sensor data are fused with the lidar sensor data for real-time positioning, specifically comprising: first, time-synchronizing the point cloud data acquired by the lidar sensor with the image data acquired by the visible-light or infrared vision sensor; when the UAV flies into the effective detection range of the lidar sensor, the lidar collects point cloud data of the target UAV; for a point detected by the lidar in the environment, its coordinates in the lidar coordinate system are related to its coordinates in the visible-light or infrared vision sensor coordinate system by the rotation matrix and translation matrix from the lidar coordinate system to the vision sensor coordinate system, and the coordinates of the projected point cloud in the image coordinate system are obtained through the transformation matrix K, the two image axes representing the horizontal and vertical directions; target recognition is performed on the UAV in the image data based on the deep-learning model, the pixel range of the image occupied by the target UAV is screened out and recorded as the abscissa region and ordinate region occupied by the target, and the projected point cloud coordinates are screened against those regions; the processed data from the different sensing sources are then fused and computed: the single-point data of the target point cloud obtained by each lidar sensor is written in four-dimensional homogeneous coordinates, the conversion relation between the two lidar coordinate systems is set as a rotation relation and a translation relation, each lidar point is expressed as a coordinate vector in the reference lidar coordinate system (the number of points generated by each radar being recorded), and the original point cloud set acquired by lidar A is transformed accordingly, yielding the spliced point cloud data in the reference lidar coordinate system.
  5. The UAV recovery multi-source data fusion positioning method according to claim 4, wherein in step S4 the observation point cloud obtained by lidar scanning and splicing and the pre-constructed feature point cloud are used to accurately estimate the UAV position in the ground coordinate system by combining a spatial prior with a robust-optimization point cloud registration method, comprising the following steps: first, coarse point cloud registration based on centroid difference: since the observation point cloud and the feature point cloud have similar spatial distributions, the difference between the geometric centroids of the two clouds is computed as the initial translation estimate, achieving initial alignment; second, neighbor screening based on KD-tree matching: the feature point cloud is spatially indexed with a KD-tree, and for each observation point the k nearest points in the feature point cloud are selected as candidate matching pairs; finally, the optimization model: to achieve spatial alignment between the UAV feature point cloud and the ground-sensor observation point cloud, an unconstrained optimization model is first established whose objective is to minimize the total matching distance between feature points and observation points, over the total number of points in the ground-sensor observation point cloud, where the translation vector assigns each observation point to its nearest neighbor among the translated UAV feature points; a Huber robust loss function is then introduced to construct an unconstrained continuous optimization model, parameterized by a threshold parameter, the spatial translation vector of the UAV feature point cloud, and the matching weights between the observation point cloud and the UAV feature point cloud.
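The pointing-control scheme of claim 3 (an image-plane error mapped to an angular compensation, then driven by proportional and differential gains) can be sketched as follows. The patent's formulas are not reproduced in this text, so the linear pixel-to-angle mapping and the gain values below are illustrative assumptions, not the claimed control law.

```python
def imaging_compensation_angles(u, v, u0, v0, fov_x, fov_y, width, height):
    """Map the target's pixel offset from the principal point (u0, v0) to
    lateral/longitudinal angular offsets. A linear spread of the full
    field-of-view angles over the image is ASSUMED; the patent's actual
    function is not given in the source text."""
    return (u - u0) / width * fov_x, (v - v0) / height * fov_y

class PDPointingController:
    """Proportional-differential controller on the image-plane error,
    as claim 3 describes; gains kp, kd are illustrative."""
    def __init__(self, kp, kd):
        self.kp, self.kd = kp, kd
        self.prev_err = None

    def update(self, err, dt):
        # differential term from a finite difference of the error
        derr = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.kd * derr
```

One such controller per axis (azimuth/yaw and pitch) per sensing mechanism would drive the servo toward centering the UAV in the image.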
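Claim 4's projection of lidar points into the camera image and screening by the detected bounding box follows the standard extrinsic/intrinsic pipeline. A minimal numpy sketch, assuming a calibrated rotation R, translation t and intrinsic matrix K (the patent's actual matrices are not given in the source):

```python
import numpy as np

def project_and_filter(points_lidar, R, t, K, bbox):
    """Transform lidar points into the camera frame (p_cam = R p + t),
    project them with the intrinsic matrix K, and keep only the points
    whose pixel coordinates fall inside the detected UAV bounding box
    bbox = (u_min, u_max, v_min, v_max), per the screening step of claim 4."""
    p_cam = points_lidar @ R.T + t            # (N, 3) in the camera frame
    in_front = p_cam[:, 2] > 0                # discard points behind the camera
    pix = p_cam @ K.T
    pix = pix[:, :2] / pix[:, 2:3]            # perspective division -> (u, v)
    u_min, u_max, v_min, v_max = bbox
    inside = (pix[:, 0] >= u_min) & (pix[:, 0] <= u_max) \
           & (pix[:, 1] >= v_min) & (pix[:, 1] <= v_max)
    return points_lidar[in_front & inside]
```

The surviving points are the candidate target point cloud that claim 4 then splices with the second lidar's cloud.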
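Claim 5's registration pipeline — centroid-difference coarse alignment, nearest-neighbor matching, then a Huber-robust refinement of the translation — can be sketched as below. A brute-force nearest-neighbor search stands in for the KD-tree, and the IRLS-style update is an assumed solver; the patent's exact loss and weight formulas are not reproduced in the source.

```python
import numpy as np

def huber_weight(r, delta):
    """IRLS weight of the Huber loss: 1 inside the threshold, delta/|r| outside."""
    r = np.maximum(np.abs(r), 1e-12)  # guard against division by zero
    return np.where(r <= delta, 1.0, delta / r)

def register_translation(feature_pts, obs_pts, delta=0.1, iters=20):
    """Estimate the translation aligning the UAV feature cloud to the
    observation cloud. Coarse init: centroid difference. Matching: nearest
    feature point per observation point (a KD-tree would replace this
    brute-force search at scale, as in claim 5). Refinement: Huber-weighted
    mean of the residuals, iterated."""
    t = obs_pts.mean(axis=0) - feature_pts.mean(axis=0)
    for _ in range(iters):
        shifted = feature_pts + t                      # translated feature cloud
        d2 = ((obs_pts[:, None] - shifted[None]) ** 2).sum(-1)
        idx = d2.argmin(axis=1)                        # nearest-neighbor match
        resid = obs_pts - shifted[idx]                 # per-pair residual
        w = huber_weight(np.linalg.norm(resid, axis=1), delta)[:, None]
        t = t + (w * resid).sum(axis=0) / w.sum()      # robust translation update
    return t
```

With a partial observation cloud (the ground sensors see only part of the UAV), the centroid init is biased, which is exactly why the claim follows it with matched, robustly weighted refinement.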

Description

Unmanned aerial vehicle recovery multi-source data fusion positioning method

Technical Field

The invention relates to the technical field of unmanned aerial vehicle (UAV) positioning and control, in particular to a UAV recovery multi-source data fusion positioning method based on a long-baseline multi-source sensor servo system.

Background

In recent years, small and medium-sized UAVs, with their advantages of low cost, strong maneuverability, flexible deployment and strong task adaptability, have been widely applied in fields such as logistics transportation, emergency rescue and environmental monitoring. As application scenarios keep expanding, the demand for quick recovery and reuse of UAVs after task execution has become increasingly prominent. The common overhead-hook and net-collision recovery modes place high demands on the UAV's flight trajectory and attitude control precision; once positioning or guidance errors become too large, recovery is likely to fail and safety risks may even arise. Therefore, to improve the safety of the recovery process and the overall operating efficiency of the system, it is highly desirable to improve the accuracy and real-time performance of UAV pose information acquisition during the recovery stage, so as to achieve more reliable and stable recovery guidance and positioning. In existing UAV recovery, target positioning relies on satellite navigation equipment such as an onboard GPS or RTK receiver to acquire position information.
However, this positioning mode is easily affected by measurement errors, drift accumulation, electromagnetic interference, multipath effects and other factors; it is difficult to continuously output stable and accurate pose data during the terminal recovery phase, and hard to meet the requirements of high-precision trajectory control. A ground positioning system is therefore often introduced to assist in guiding the UAV and accurately correcting its trajectory. Most current ground positioning systems adopt optical positioning, for example monocular or binocular vision systems that perform target detection and position estimation on the UAV. Such systems can achieve fairly high-precision positioning under sufficient illumination and a clear line of sight, but their imaging quality and recognition stability degrade markedly in complex environments such as night, backlight, rain and fog, or a limited line of sight, making continuous and precise positioning throughout recovery hard to guarantee.

Disclosure of Invention

Addressing these problems, the invention provides a UAV recovery multi-source data fusion positioning method based on a long-baseline multi-source sensor servo system. Multi-source fusion positioning improves the precision and stability with which the recovery system acquires UAV pose information; meanwhile, carrying and cooperatively controlling multiple sensors on the long-baseline multi-source sensor servo system effectively widens the system's sensing coverage and working distance and strengthens continuous tracking and accurate positioning of the UAV target under complex environmental conditions, thereby improving the applicability and reliability of the UAV recovery system.
The technical scheme of the invention is as follows: based on a long-baseline multi-source sensor servo system comprising a sensing servo unit with two multi-source sensing mechanisms and a communication fusion unit serving as the ground end, the method comprises the following steps: S1, the UAV continuously updates and sends, in real time, a recovery request and its current flight parameters to the ground end, and the ground end confirms that the UAV has entered a preset recovery airspace and satisfies preset altitude, heading and speed constraints; S2, when the ground end confirms that the recovery stage has begun and the UAV is still far from the recovery position, performing coarse pointing control of the two multi-source sensing mechanisms based on the flight parameters, correcting the pitch and azimuth angles to track the UAV; S3, after the UAV enters the coverage of the multi-source sensing mechanisms, selecting a visible light visual sensor or an infrare