CN-121979274-A - Unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion positioning
Abstract
The invention relates to the technical field of unmanned aerial vehicles and discloses an unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion. The method comprises: obtaining a first relative position deviation and a second relative position deviation; obtaining a first positioning weight according to the first relative position deviation, and a second positioning weight according to the first positioning weight; carrying out weighted fusion of the first positioning weight, the first relative position deviation, the second positioning weight, and the second relative position deviation to obtain a fusion deviation; generating the next target position point for the unmanned aerial vehicle according to the fusion deviation; and executing these steps cyclically. The technical scheme solves the problems of abrupt control-instruction changes, flight jitter, and tracking lag caused by hard switching or fixed-weight fusion of UWB and visual positioning in the prior art.
Inventors
- Lv Yuezu
- Ma Chenchao
- Zhou Jialing
- Duan Peihu
- Wen Guanghui
Assignees
- Beijing Institute of Technology (北京理工大学)
Dates
- Publication Date: 2026-05-05
- Application Date: 2025-12-19
Claims (10)
- 1. An unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion, characterized by comprising the following steps: obtaining a first relative position deviation between the unmanned aerial vehicle and the ground platform according to a visual positioning method, and obtaining a second relative position deviation between the unmanned aerial vehicle and the ground platform according to a UWB positioning method; obtaining a first positioning weight according to the first relative position deviation, and obtaining a second positioning weight according to the first positioning weight; weighting and fusing the first positioning weight, the first relative position deviation, the second positioning weight, and the second relative position deviation to obtain a fusion deviation; generating the next target position point for the unmanned aerial vehicle according to the fusion deviation; and performing the above steps cyclically.
- 2. The unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion according to claim 1, further comprising: judging whether the distance between the unmanned aerial vehicle's onboard camera and the ground platform exceeds a field-of-view threshold: if not, controlling the unmanned aerial vehicle to follow the movement of the ground platform and land on it according to the fusion deviation obtained by the weighted fusion of the first positioning weight, the first relative position deviation, the second positioning weight, and the second relative position deviation; if so, taking the second relative position deviation directly as the fusion deviation, and controlling the unmanned aerial vehicle to follow the movement of the ground platform and land on it according to that fusion deviation.
- 3. The unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion according to claim 1, wherein obtaining the first relative position deviation between the unmanned aerial vehicle and the ground platform according to the visual positioning method specifically comprises: setting an identification code on the ground platform, and detecting the position and attitude information of the identification code in real time with the unmanned aerial vehicle's onboard camera; and obtaining the first relative position deviation from the position and attitude information of the identification code.
- 4. The unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion according to claim 1, wherein obtaining the second relative position deviation between the unmanned aerial vehicle and the ground platform according to the UWB positioning method specifically comprises: obtaining the global positions of the unmanned aerial vehicle and the ground platform through base stations, and calculating the absolute coordinates of the unmanned aerial vehicle and of the ground platform respectively; and obtaining the second relative position deviation from the absolute coordinates of the unmanned aerial vehicle and the absolute coordinates of the ground platform.
- 5. The unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion according to claim 2, wherein the first positioning weight is obtained from the first relative position deviation by the formula: W_V = M − K_F × distance, where M is the upper-limit adjustment parameter of the visual positioning weight, W_V is the first positioning weight, K_F is the rate at which the weight changes with distance, and distance is the distance between the unmanned aerial vehicle and the ground platform in the two-dimensional plane.
- 6. The unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion according to claim 5, wherein the second positioning weight is obtained from the first positioning weight by the formula: W_U = 1 − W_V, where W_V is the first positioning weight and W_U is the second positioning weight.
- 7. The unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion according to claim 6, wherein the fusion deviation is obtained by the formulas: e_F(t) = W_V · e_V(t) + W_U · e_U(t) if distance ≤ distance_V; e_F(t) = e_U(t) if distance > distance_V, where e_F(t) is the fusion deviation, e_V(t) is the first relative position deviation, e_U(t) is the second relative position deviation, and distance_V is the field-of-view threshold (a code sketch of these fusion formulas appears after the claims).
- 8. The unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion according to claim 7, wherein generating the next target position point for the unmanned aerial vehicle according to the fusion deviation specifically comprises: converting the fusion deviation into a control quantity through a controller; and generating specific target points by incremental position control.
- 9. The unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion according to claim 8, wherein generating specific target points by incremental position control specifically comprises: acquiring the current actual position of the unmanned aerial vehicle in the global coordinate system in real time, and inputting the fusion deviation into the controller to obtain a velocity control quantity; multiplying the velocity control quantity by the control period to obtain a small position displacement vector for that period; and adding the calculated position displacement vector to the current position vector of the unmanned aerial vehicle to generate the specific target point for the next control period (a code sketch of this step also appears after the claims).
- 10. The unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion according to claim 3, wherein the identification codes comprise a first identification code and a second identification code, the second identification code being nested at the center of the first identification code.
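The weight law of claims 5-7 can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the patented implementation: the values of M, K_F, and the field-of-view threshold distance_V are invented for the example, and W_V is clamped to [0, 1], a step the claims do not spell out but which the complementary weight W_U = 1 − W_V suggests.

```python
import numpy as np

# Illustrative parameters (not from the patent): weight upper limit M,
# distance decay rate K_F, and field-of-view threshold in metres.
M = 1.0
K_F = 0.15
DISTANCE_V = 5.0

def positioning_weights(distance: float) -> tuple[float, float]:
    """First (visual) and second (UWB) weights per claims 5-6.

    W_V = M - K_F * distance, clamped to [0, 1] so that the
    complementary UWB weight W_U = 1 - W_V stays valid.
    """
    w_v = float(np.clip(M - K_F * distance, 0.0, 1.0))
    return w_v, 1.0 - w_v

def fused_deviation(e_v: np.ndarray, e_u: np.ndarray, distance: float) -> np.ndarray:
    """Fusion deviation e_F(t) per claim 7: weighted fusion inside the
    field of view, pure UWB deviation beyond the threshold."""
    if distance <= DISTANCE_V:
        w_v, w_u = positioning_weights(distance)
        return w_v * e_v + w_u * e_u
    return e_u  # vision out of range: fall back to UWB alone
```

Near the platform the visual deviation dominates; as the distance grows the weight shifts smoothly toward UWB, and beyond the threshold the method falls back to UWB alone, avoiding the hard-switching jumps criticized in the background section.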
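Claims 8-9 turn the fused deviation into a setpoint by incremental position control. The sketch below assumes a simple proportional controller (the claims say only "a controller"), with an invented gain and control period.

```python
import numpy as np

K_P = 0.8   # illustrative proportional gain; the controller type is not specified in the patent
DT = 0.05   # illustrative control period in seconds (20 Hz)

def next_target_point(current_pos: np.ndarray, e_f: np.ndarray) -> np.ndarray:
    """Incremental position control per claim 9.

    The controller turns the fusion deviation into a velocity command;
    multiplying by the control period gives a small displacement, which
    is added to the current position to form the next target point.
    """
    v_cmd = K_P * e_f                  # deviation -> velocity control quantity
    displacement = v_cmd * DT          # small displacement over one period
    return current_pos + displacement  # target point for the next control period
```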
Description
Unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion positioning

Technical Field

The invention belongs to the technical field of unmanned aerial vehicles, and particularly relates to an unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion.

Background

Visual positioning technology relies on a camera to acquire external information and estimates the pose error of the unmanned aerial vehicle relative to the target platform in real time through image processing. Its defects, however, are obvious. First, the camera's field of view is limited: when the target leaves the field of view or is occluded, visual tracking is interrupted. Second, visual algorithms are strongly affected by illumination changes and by the flight attitude and viewing angle of the camera, so accuracy degrades in low-light or highly dynamic scenes; the field of view of an onboard camera rigidly attached to the unmanned aerial vehicle is especially sensitive to the flight attitude angle during rapid movement. In addition, at long range, fewer feature points are visible and positioning accuracy decreases.

UWB positioning technology is complementary. UWB positioning ranges via the propagation time of ultra-wideband wireless signals between base stations and a tag (a TOF algorithm). It provides global position information with global coverage, is unaffected by ambient light and flight attitude, and offers strong penetration, high interference resistance, low power consumption, and easy deployment, making it suitable for guiding an unmanned aerial vehicle toward a distant target. However, the dynamic stability of UWB positioning is poor: when an unmanned aerial vehicle or ground vehicle moves at high speed, the positioning tends to drift (a dynamic error), so it cannot meet the accuracy requirements of the final landing. Its update frequency also tends to be lower than that of a vision system.

Fusing UWB and visual positioning can therefore combine their complementary advantages. Existing UWB-vision fusion methods often adopt a simple static switching or fixed-weight strategy: they either hard-switch between positioning methods at a confidence threshold or assign fixed weight coefficients to each method. Such methods cannot dynamically adjust the fusion weight according to key factors such as distance, cannot fully exploit UWB data for smooth complementation within the visual range, and lack effective redundancy when vision fails, so the unmanned aerial vehicle ultimately falls short in precision, smoothness, and environmental adaptability during following and landing. On this basis, this patent proposes an unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion.
Disclosure of Invention

The invention aims to provide an unmanned aerial vehicle landing method and equipment based on the fusion of UWB positioning and visual positioning, solving the problems of abrupt control-instruction changes, flight jitter, and tracking lag caused by hard switching or fixed-weight fusion of UWB and visual positioning in the prior art. The technical scheme of the invention is described in further detail through the drawings and the embodiments.

The application discloses an unmanned aerial vehicle following landing method based on UWB and vision dynamic fusion, comprising the following steps: obtaining a first relative position deviation between the unmanned aerial vehicle and the ground platform according to a visual positioning method, and obtaining a second relative position deviation between the unmanned aerial vehicle and the ground platform according to a UWB positioning method; obtaining a first positioning weight according to the first relative position deviation, and obtaining a second positioning weight according to the first positioning weight; weighting and fusing the first positioning weight, the first relative position deviation, the second positioning weight, and the second relative position deviation to obtain a fusion deviation; generating the next target position point for the unmanned aerial vehicle according to the fusion deviation; and performing the above steps cyclically.

In some aspects, the method further comprises: judging whether the distance between the unmanned aerial vehicle's onboard camera and the ground platform exceeds a field-of-view threshold: if not, controlling the unmanned aerial vehicle to follow the movement of the ground platform and land on it according to the fusion deviation obtained by the weighted fusion of the first positioning weight, the first relative position deviation, the second positioning weight, and the second relative position deviation.
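As a way to see the claimed steps operating together, the toy loop below reuses fused_deviation, next_target_point, and DISTANCE_V from the sketches after the claims and drives a simulated unmanned aerial vehicle toward a platform at the origin. The measurement functions are hypothetical stand-ins for the marker-based and base-station-based methods of claims 3 and 4, not interfaces defined in the patent.

```python
import numpy as np

# --- Hypothetical stand-ins for the real interfaces (not from the patent). ---
# Toy simulation: the platform sits at the origin and the UAV starts offset.
uav_pos = np.array([3.0, 2.0, 4.0])

def uwb_deviation() -> np.ndarray:
    """Claim 4: platform coordinates minus UAV coordinates (platform at origin here)."""
    return -uav_pos

def vision_deviation() -> np.ndarray | None:
    """Claim 3: marker-based deviation; None when the platform is outside the field of view."""
    return None if np.linalg.norm(uav_pos[:2]) > DISTANCE_V else -uav_pos

def follow_and_land(steps: int = 200) -> None:
    """Cyclic execution of the claimed steps: measure, fuse, step toward the platform."""
    global uav_pos
    for _ in range(steps):
        e_u = uwb_deviation()
        e_v = vision_deviation()
        distance = float(np.linalg.norm(e_u[:2]))  # 2-D plane distance used for the weights
        # Claim 2 fallback: UWB alone when the marker is out of view.
        e_f = e_u if e_v is None else fused_deviation(e_v, e_u, distance)
        uav_pos = next_target_point(uav_pos, e_f)  # assume the UAV reaches each setpoint
        if np.linalg.norm(uav_pos) < 0.05:         # close enough: consider it landed
            break

follow_and_land()
print(uav_pos)  # converges toward the platform at the origin
```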