
CN-121613940-B - Visual guidance autonomous docking method and system for intelligent cabinet by unmanned aerial vehicle

CN121613940B

Abstract

The invention belongs to the technical field of unmanned aerial vehicle visual guidance and, in particular, relates to a vision-guided autonomous docking method and system for an unmanned aerial vehicle docking with an intelligent cabinet. The method obtains a motion blur index from the distribution characteristics of the image gradient magnitudes; determines an adaptive sharpening coefficient based on the motion blur index, extracts a high-frequency detail layer, and applies a weighted enhancement to obtain an enhanced docking image; extracts contour features and texture features from the enhanced docking image and derives a confidence weight for the texture features from the scale occupation factor of the contour features; computes one pose from the contour features and another from the texture features; fuses the translation vectors and rotation vectors of the two poses by weighting with the confidence weight; and docks the unmanned aerial vehicle according to the fused pose. Through adaptive sharpening and feature fusion, the invention effectively solves the problem of inaccurate positioning caused by motion blur and scale change, and achieves smooth docking of the unmanned aerial vehicle.

Inventors

  • FAN WEI
  • QIU CHANGWEI
  • FEI RUIYANG
  • CHU XIN
  • LI QUANLIN

Assignees

  • 武汉市哈哈便利科技有限公司

Dates

Publication Date
2026-05-08
Application Date
2026-02-03

Claims (8)

  1. A vision-guided autonomous docking method of an unmanned aerial vehicle to an intelligent cabinet, characterized by comprising the following steps: obtaining a gradient magnitude map of the pixel points in the grayscale docking image, and obtaining a motion blur index according to the mean and standard deviation of all pixel points in the gradient magnitude map and the number of pixel points whose gradient magnitude is greater than a strong edge threshold; obtaining an enhanced docking image according to a high-frequency detail layer obtained by Laplacian convolution of the grayscale docking image, the motion blur index, and the grayscale docking image; extracting contour features and texture features from the enhanced docking image, taking the ratio of the number of pixels of the region enclosed by the contour features to the total number of pixels of the enhanced docking image as a scale occupation factor, and obtaining a confidence weight of the texture features according to the scale occupation factor and a preset scale switching threshold; solving the contour features by a PnP algorithm to obtain a first pose and decomposing the first pose into a first translation vector and a first rotation vector; solving the texture features by the PnP algorithm to obtain a second pose and decomposing the second pose into a second translation vector and a second rotation vector; and using the confidence weight of the texture features, performing a weighted summation of the first translation vector and the second translation vector to obtain a final translation vector, and performing a weighted summation of the first rotation vector and the second rotation vector to obtain a final rotation vector; the motion blur index satisfies the relation $B = \exp\!\left(-k \cdot \frac{N_s}{N} \cdot \frac{\sigma}{\mu + \varepsilon}\right)$, where $B$ is the motion blur index of the docking image, $N$ is the total number of pixels in the docking image, $N_s$ is the number of pixels whose gradient magnitude is greater than the strong edge threshold, $\sigma$ is the standard deviation of all pixel points in the gradient magnitude map, $\mu$ is the mean of all pixel points in the gradient magnitude map, $\varepsilon$ is a small constant preventing a zero denominator, $k$ is a preset sensitivity adjustment coefficient, $k > 0$, and $\exp$ is the exponential function with the natural constant as its base; the confidence weight satisfies the relation $w = \frac{1}{1 + \exp(-\beta(s - s_0))}$, where $w$ is the confidence weight of the texture features, $s$ is the scale occupation factor, $s_0$ is the preset scale switching threshold, $\beta$ is the sharpness coefficient of the weight switch, and $\beta > 0$.
  2. The vision-guided autonomous docking method of an unmanned aerial vehicle to an intelligent cabinet according to claim 1, wherein the pixel values of the pixel points in the enhanced docking image satisfy the relation $I_e(x, y) = I(x, y) + (g_0 + g_{\max} \cdot B) \cdot H(x, y)$, where $I_e(x, y)$ is the pixel value of the enhanced docking image at coordinates $(x, y)$, $I(x, y)$ is the pixel value of the grayscale docking image, $H(x, y)$ is the pixel value of the high-frequency detail layer at coordinates $(x, y)$, $B$ is the motion blur index of the docking image, $g_0$ is the preset base enhancement gain, and $g_{\max}$ is the preset maximum dynamic gain.
  3. The vision-guided autonomous docking method of an unmanned aerial vehicle to an intelligent cabinet according to claim 1, wherein extracting the contour features and texture features in the enhanced docking image comprises: performing binarization and connected-component analysis on the enhanced docking image, taking the outermost closed edge of the docking mark as the contour features, and extracting a set of corner points within the region enclosed by the contour features as the texture features.
  4. The vision-guided autonomous docking method of an unmanned aerial vehicle to an intelligent cabinet according to claim 3, wherein extracting the set of corner points as the texture features comprises extracting the set of corner points with a FAST operator.
  5. The vision-guided autonomous docking method of an unmanned aerial vehicle to an intelligent cabinet according to claim 1, wherein the final translation vector satisfies the relation $t = (1 - w) \cdot t_1 + w \cdot t_2$, where $t$ is the final translation vector, $t_1$ is the first translation vector, $t_2$ is the second translation vector, and $w$ is the confidence weight of the texture features.
  6. The vision-guided autonomous docking method of an unmanned aerial vehicle to an intelligent cabinet according to claim 1, wherein the final rotation vector satisfies the relation $r = (1 - w) \cdot r_1 + w \cdot r_2$, where $r$ is the final rotation vector, $r_1$ is the first rotation vector, $r_2$ is the second rotation vector, and $w$ is the confidence weight of the texture features.
  7. The vision-guided autonomous docking method of an unmanned aerial vehicle to an intelligent cabinet according to claim 1, wherein performing unmanned aerial vehicle docking according to the final translation vector and the final rotation vector comprises: performing spatio-temporal alignment and data packaging on the final translation vector and the final rotation vector; constructing a six-degree-of-freedom pose state matrix containing three-dimensional position information and three-axis attitude orientation information; inputting the pose state matrix into the flight control system of the unmanned aerial vehicle as a real-time navigation target instruction; and calculating the position deviation and attitude deviation through a PID (proportional-integral-derivative) controller to drive the unmanned aerial vehicle to adjust its rotor speeds, so that the body coordinate system of the unmanned aerial vehicle gradually approaches and finally coincides with the preset docking coordinate system of the intelligent cabinet.
  8. A vision-guided autonomous docking system of an unmanned aerial vehicle to an intelligent cabinet, comprising a processor and a memory, the memory storing computer program instructions that, when executed by the processor, implement the vision-guided autonomous docking method of an unmanned aerial vehicle to an intelligent cabinet according to any one of claims 1-7.
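
The image-enhancement front end of claims 1 and 2 can be sketched in a few lines of Python with OpenCV. This is a minimal illustration, not the patented implementation: the exact expressions for the blur index and the sigmoid confidence weight are reconstructions inferred from the variable lists in claim 1, and every parameter value (the strong edge threshold, the sensitivity coefficient k, the gains, the switch sharpness beta) is a hypothetical placeholder.

```python
import cv2
import numpy as np

def motion_blur_index(gray, edge_thresh=100.0, k=2.0, eps=1e-6):
    """Blur index from the gradient-magnitude distribution (claim 1).

    Tends toward 1 when strong edges are scarce and gradient contrast
    is low, i.e. when the frame is likely motion-blurred.
    """
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)          # gradient magnitude map
    n_strong = np.count_nonzero(mag > edge_thresh)
    return float(np.exp(-k * (n_strong / mag.size)
                        * mag.std() / (mag.mean() + eps)))

def adaptive_sharpen(gray, blur_idx, base_gain=0.5, max_gain=1.5):
    """Adaptive high-frequency boost (claim 2): the blurrier the frame,
    the larger the gain applied to the Laplacian detail layer."""
    detail = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    enhanced = gray + (base_gain + max_gain * blur_idx) * detail
    return np.clip(enhanced, 0, 255).astype(np.uint8)

def texture_confidence(scale_factor, switch_thresh=0.25, beta=20.0):
    """Sigmoid weight switch (claim 1): texture corners gain confidence
    as the docking mark fills a larger fraction of the frame."""
    return 1.0 / (1.0 + np.exp(-beta * (scale_factor - switch_thresh)))
```

The sigmoid makes the far-to-near hand-off between contour-based and corner-based pose estimation continuous rather than a hard switch, which is what suppresses the pose jumps described in the background section.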

Description

Visual guidance autonomous docking method and system for intelligent cabinet by unmanned aerial vehicle

Technical Field

The invention relates to the technical field of unmanned aerial vehicle visual guidance. More particularly, the invention relates to a vision-guided autonomous docking method and system for an unmanned aerial vehicle docking with an intelligent cabinet.

Background

The unmanned aerial vehicle intelligent cabinet, as an important piece of infrastructure for automated unmanned aerial vehicle operation, can provide services such as automatic take-off and landing, charging and battery swapping, and cargo access, and plays a key role in fields such as logistics distribution, power line inspection, and smart city management. Accurate docking between the unmanned aerial vehicle and the intelligent cabinet is the key link that ensures these services proceed smoothly. During docking, the unmanned aerial vehicle typically uses an onboard visual sensor to identify a docking mark on the intelligent cabinet and is guided to land or enter the cabin by solving the relative pose between the mark and the camera.

In the related art, for example, Chinese patent application publication CN119270912A discloses a vision-guided precise landing method for an unmanned aerial vehicle, which captures environment information through the camera of the unmanned aerial vehicle, constructs a flight control model, sets a desired landing point, establishes a visual mapping, obtains a predicted trajectory from the relative position data and the distance to be flown, solves the relative pose of the unmanned aerial vehicle and the desired landing point in real time during landing, and adjusts the flight parameters in combination with the predicted trajectory, thereby realizing vision-guided landing. However, the related art mainly focuses on the design of the back-end flight control law and trajectory prediction, while ignoring the robustness of front-end visual perception in complex outdoor environments. In addition, the related art relies mainly on the corner points of a single coded marker for pose solving and does not account for how the scale of visual features changes across docking distances; when the unmanned aerial vehicle switches between far and near ranges, or some features become invisible, the pose solution is prone to jumps or loss, so the requirement for high-precision smooth control over the whole docking process cannot be met.

Disclosure of Invention

To solve the technical problems that motion blur in complex outdoor environments degrades feature extraction accuracy and that ignoring feature scale change makes pose calculation unstable over the whole docking process, the invention provides the following aspects.
In a first aspect, the invention provides a vision-guided autonomous docking method of an unmanned aerial vehicle to an intelligent cabinet, which comprises: obtaining a docking image of the docking mark and converting it to grayscale to obtain a grayscale docking image; obtaining the gradient magnitudes of the pixel points in the grayscale docking image to form a gradient magnitude map; obtaining a motion blur index according to the mean and standard deviation of all pixel points in the gradient magnitude map and the number of pixel points whose gradient magnitude is greater than a strong edge threshold; obtaining an enhanced docking image according to a high-frequency detail layer obtained by Laplacian convolution of the grayscale docking image, the motion blur index, and the grayscale docking image; extracting contour features and texture features from the enhanced docking image; taking the ratio of the number of pixels of the region enclosed by the contour features to the total number of pixels of the enhanced docking image as the scale occupation factor, and obtaining the confidence weight of the texture features according to the scale occupation factor and a preset scale switching threshold; solving the contour features with a PnP algorithm to obtain a first pose and decomposing it into a first translation vector and a first rotation vector; solving the texture features with the PnP algorithm to obtain a second pose and decomposing it into a second translation vector and a second rotation vector; and using the confidence weight of the texture features to perform a weighted summation of the first and second translation vectors to obtain the final translation vector, and of the first and second rotation vectors to obtain the final rotation vector (a sketch of this fusion stage follows this paragraph). According to the method, the docking image acquired by the unmanned aerial vehicle's visual sensor is obtained, and the motion blur index reflecting the image sharpness state is calculated by counting the distribution characteristics of the image gradient magnitudes.
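
A hedged sketch of the pose-fusion stage of the first aspect, assuming the contour and texture correspondences have already been matched to known 3D points on the docking mark. The function name and signature are illustrative, and the complementary weighting (1 - w, w) follows the reconstruction given for claims 5 and 6; cv2.solvePnP is a real OpenCV call that returns a rotation vector and a translation vector directly, which matches the decomposition step described in the text.

```python
import cv2
import numpy as np

def fuse_poses(obj_contour, img_contour, obj_texture, img_texture,
               K, dist_coeffs, w_texture):
    """Solve one pose per feature set with PnP, then blend the two with
    the texture confidence weight w_texture in [0, 1] (claims 5 and 6)."""
    # First pose: from the contour feature correspondences.
    ok1, rvec1, tvec1 = cv2.solvePnP(obj_contour, img_contour, K, dist_coeffs)
    # Second pose: from the texture (corner point) correspondences.
    ok2, rvec2, tvec2 = cv2.solvePnP(obj_texture, img_texture, K, dist_coeffs)
    if not (ok1 and ok2):
        raise RuntimeError("PnP failed for at least one feature set")
    # Complementary weighted fusion of translation and rotation vectors.
    t_final = (1.0 - w_texture) * tvec1 + w_texture * tvec2
    r_final = (1.0 - w_texture) * rvec1 + w_texture * rvec2
    return r_final, t_final
```

Directly averaging rotation vectors is only a good approximation when the two solutions are close; since both poses observe the same docking mark in the same frame, that assumption generally holds over a smooth docking approach.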