
CN-122017866-A - Real-time positioning method for a ground-based aircraft walk-around inspection robot

CN 122017866 A

Abstract

The invention discloses a real-time positioning method for a ground-based aircraft walk-around inspection robot. The method comprises: acquiring wide-area point cloud data of an aircraft and establishing a stable aircraft-referenced coordinate system by identifying several semantic key points (such as the nose cone and wing tips) and registering them against a pre-stored template; acquiring lidar and camera data in real time and feeding them into a pre-trained multi-modal deep learning model, which outputs a probability distribution representing the robot's current pose in the aircraft coordinate system; and taking the mean of that distribution as the observation and its variance as dynamic observation noise, which are optimally fused with the robot's motion prediction to produce a final smooth pose. By monitoring pose uncertainty in real time, the method actively commands the robot to adjust its position when the observation viewpoint is poor, improving perception quality. The invention thereby achieves high-precision, high-robustness positioning relative to the aircraft and provides active perception capability.
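The fusion step described in the abstract (mean as observation, predicted variance as dynamic observation noise, combined with a motion prediction) is essentially a Kalman-style update. The patent gives no code; the following is a minimal one-dimensional sketch with illustrative names and values, not the patent's implementation:

```python
def fuse(pred_pose, pred_var, obs_mean, obs_var):
    """Fuse a motion prediction with a probabilistic observation.
    obs_var is the network's predicted variance, used as dynamic
    observation noise: uncertain observations are trusted less."""
    k = pred_var / (pred_var + obs_var)            # Kalman gain
    pose = pred_pose + k * (obs_mean - pred_pose)  # corrected pose
    var = (1.0 - k) * pred_var                     # reduced uncertainty
    return pose, var

# A confident observation (small variance) pulls the estimate strongly,
# while an uncertain one barely moves it.
pose, var = fuse(pred_pose=1.0, pred_var=0.04, obs_mean=1.2, obs_var=0.01)
pose2, var2 = fuse(pred_pose=1.0, pred_var=0.04, obs_mean=1.2, obs_var=1.0)
```

Because the observation noise comes from the network's own variance output, a poor viewing angle automatically down-weights the observation instead of corrupting the pose estimate.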

Inventors

  • PEI PEI
  • LU HAO
  • TONG YULIN
  • DING LIFENG
  • LI XUE
  • WANG BIN
  • ZONG GUOQING

Assignees

  • 南京禄口国际机场空港科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2025-12-08

Claims (10)

  1. A real-time positioning method for a ground-based aircraft walk-around inspection robot, characterized by comprising the following steps: acquiring wide-area point cloud data of the aircraft; identifying and locating a plurality of preset semantic key points in the wide-area point cloud data, and computing an aircraft coordinate system referenced to the aircraft by registering the observed positions of the semantic key points against a pre-stored keypoint map template of the corresponding aircraft model; after the aircraft coordinate system is determined, acquiring in real time the point cloud data and image data captured at the current moment by a lidar and a camera carried by the robot, and inputting the point cloud data and image data at the current moment into a pre-trained multi-modal deep learning model so as to output a probability distribution characterizing the current pose of the robot in the aircraft coordinate system; and determining the real-time pose of the robot in the aircraft coordinate system according to the probability distribution.
  2. The method according to claim 1, wherein the semantic key points are parts of the aircraft with stable three-dimensional structural features, selected from at least two of: the nose cone apex, the wing tips, the engine nacelle centers, the attachment points of the main landing gear to the wings, and the vertical tail apex.
  3. The method according to claim 1, wherein the registering comprises: iteratively solving, with an algorithm based on random sample consensus (RANSAC), for the rigid-body transformation that minimizes the alignment error between the observed positions of the semantic key points and the keypoint map template, and using that transformation to determine the aircraft coordinate system.
  4. The method according to claim 1, further comprising, before inputting the point cloud data and the image data at the current moment into the pre-trained multi-modal deep learning model: identifying the aircraft region in the image data with an object detection model and generating a two-dimensional bounding box; computing a target viewing frustum in three-dimensional space from the two-dimensional bounding box and the camera parameters; and filtering the point cloud data with the target viewing frustum to remove irrelevant background points.
  5. The method according to claim 1, wherein the probability distribution comprises at least: a six-dimensional mean vector characterizing the most probable pose of the robot; and a six-dimensional variance vector quantifying the degree of uncertainty of that pose in each dimension.
  6. The method according to claim 5, wherein the multi-modal deep learning model is trained with a negative log-likelihood loss function that simultaneously optimizes the accuracy of the predicted mean vector and the numerical soundness of the predicted variance vector.
  7. The method according to claim 5, wherein determining the real-time pose of the robot in the aircraft coordinate system according to the probability distribution comprises: taking the six-dimensional mean vector output by the multi-modal deep learning model as the observation, and converting the six-dimensional variance vector into an observation-noise covariance matrix; and, in a filter, updating a motion prediction from an inertial measurement unit or wheel odometer with the observation and the observation-noise covariance matrix to generate a final smooth pose.
  8. The method according to claim 5, further comprising an active-sensing judgment step: monitoring the values of the six-dimensional variance vector in real time, and judging the current observation viewpoint to be poor when at least one component, or a combined value, of the variance vector remains above a preset uncertainty threshold for a preset period of time.
  9. The method according to claim 8, further comprising, after judging the current observation viewpoint to be poor: sending an instruction to the robot's motion control system to execute a preset repositioning action, the repositioning action moving the robot to a new observation position where new point cloud data and image data are acquired so as to reduce the positioning uncertainty.
  10. The method according to claim 1, wherein the multi-modal deep learning model adopts a structure including a recurrent network unit, which, when processing sensor data at the current moment, also receives and uses hidden-state information generated from data at previous moments, so as to achieve temporally smooth pose prediction.
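Claims 1 and 3 describe registering observed semantic key points against a template via a RANSAC-style search for the best rigid-body transform. The patent does not give an implementation; a minimal Python sketch (names, thresholds, and the Kabsch least-squares fit are illustrative assumptions) over labeled keypoint pairs might look like:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch): find R, t with dst ~ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_register(obs, template, n_iter=100, thresh=0.05, rng=None):
    """Fit rigid transforms on random keypoint triplets, keep the one with
    the most inliers, then refit on all inliers (maps obs -> template)."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = None
    for _ in range(n_iter):
        idx = rng.choice(len(obs), size=3, replace=False)
        R, t = rigid_fit(obs[idx], template[idx])
        err = np.linalg.norm(obs @ R.T + t - template, axis=1)
        inliers = err < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return rigid_fit(obs[best], template[best])
```

Since the key points are semantic (nose cone, wing tips, ...), correspondences are known in advance; RANSAC here serves only to reject mis-detected key points before the final least-squares fit.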
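Claim 4's frustum filtering can be sketched as follows, assuming the points are already expressed in the camera frame and a standard pinhole intrinsic matrix K (both assumptions; the patent does not specify frames or the camera model):

```python
import numpy as np

def frustum_filter(points, bbox, K):
    """Keep points (Nx3, camera frame) whose pinhole projection lands
    inside the 2-D detection box bbox = (u_min, v_min, u_max, v_max)."""
    u0, v0, u1, v1 = bbox
    z = points[:, 2]
    in_front = z > 0                      # discard points behind the camera
    uvw = points @ K.T                    # homogeneous pixel coordinates
    safe_z = np.where(in_front, z, 1.0)   # avoid dividing by non-positive z
    u, v = uvw[:, 0] / safe_z, uvw[:, 1] / safe_z
    keep = in_front & (u >= u0) & (u <= u1) & (v >= v0) & (v <= v1)
    return points[keep]
```

Testing membership in pixel space is equivalent to intersecting with the 3-D viewing frustum the claim describes, and removes background returns (hangar walls, ground equipment) before the point cloud reaches the learning model.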
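Claim 6's loss, for an independent Gaussian per pose dimension, is commonly written with a predicted log-variance; a per-dimension sketch (illustrative, not the patent's exact formulation):

```python
import numpy as np

def gaussian_nll(mean, log_var, target):
    """Per-dimension Gaussian negative log-likelihood (constant dropped).
    Predicting log-variance keeps the variance positive; the squared-error
    term rewards an accurate mean, while the log-variance penalty stops
    the network from inflating uncertainty to hide its errors."""
    return 0.5 * (np.exp(-log_var) * (mean - target) ** 2 + log_var)
```

For a fixed prediction error, this loss is minimized when the predicted variance equals the squared error, which is exactly the "numerical soundness" the claim asks for: the network is rewarded for reporting honest uncertainty, neither over- nor under-confident.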
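Claims 8 and 9 trigger a repositioning action only when uncertainty stays high over a sustained window, filtering out transient sensor noise. A hypothetical sketch of that trigger (the threshold, window size, and max-over-components rule are all illustrative assumptions):

```python
from collections import deque

class ActiveSensingMonitor:
    """Flags a poor observation viewpoint when pose uncertainty stays
    above a threshold for `window` consecutive samples."""

    def __init__(self, threshold, window):
        self.threshold = threshold
        self.history = deque(maxlen=window)

    def update(self, variance_vector):
        """Record the worst component of the 6-D variance vector; return
        True only when every sample in the window exceeds the threshold."""
        self.history.append(max(variance_vector))
        return (len(self.history) == self.history.maxlen
                and all(v > self.threshold for v in self.history))
```

When `update` returns True, the robot would issue the claim-9 repositioning command and gather fresh observations from a better viewpoint.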

Description

Real-time positioning method for a ground-based aircraft walk-around inspection robot

Technical Field

The invention relates to the technical field of robot positioning and automatic detection, and in particular to a real-time positioning method for a ground-based aircraft walk-around inspection robot.

Background

With the rising safety and efficiency requirements of the aviation industry, automated equipment for external aircraft inspection has become the mainstream direction of development. Among the various automation schemes, ground mobile robots show great application potential owing to their inherent advantages in endurance, payload capacity, and ground-operation safety. Mainstream robot positioning technology currently falls into two classes of schemes: high-precision absolute positioning based on a Global Navigation Satellite System (GNSS), in particular real-time kinematic (RTK) differential positioning; and simultaneous localization and mapping (SLAM), in which an environment map is built in real time with a lidar or vision sensor and the robot's pose is tracked within that map. These technologies achieve a degree of autonomous robot navigation and lay a foundation for automated inspection. However, when applied to the aircraft walk-around inspection scenario, they struggle to deliver high-reliability, high-precision operation. First, in scenes with weak satellite signals or multipath interference, such as indoor hangars or partially occluded aprons, GNSS-based positioning accuracy degrades markedly or fails outright, so stable all-weather, all-scene operation cannot be guaranteed. Second, SLAM-based positioning is highly dependent on the richness and stability of environmental features.
Moreover, the parked pose of the aircraft deviates from one stop to the next by anywhere from millimeters to meters, and for a large airframe such deviations cause severe misalignment between the robot's preset inspection trajectory and the aircraft's actual position. At the same time, open scenes such as airport aprons lack sufficient static references, so SLAM tracking easily fails or accumulates serious drift. More importantly, existing positioning methods generally output a single deterministic pose estimate with no quantitative assessment of its uncertainty. The robot therefore cannot judge the reliability of its current positioning result; when perception degrades because of transient sensor noise or a poor viewing angle, positioning robustness suffers, and no strategy can be actively adopted to improve perception quality, which directly limits the accuracy and reliability of close-range defect recognition.

Disclosure of Invention

This section is intended to outline some aspects of embodiments of the application and to briefly introduce some preferred embodiments. Some simplifications or omissions may be made in this section, as well as in the description and title of the application, and may not be used to limit the scope of the application. The present invention has been made in view of the above problems in the prior art. The invention therefore provides a real-time positioning method for a ground-based aircraft walk-around inspection robot, intended to solve the problems identified in the background.
To solve the above technical problems, the invention provides the following technical scheme. A real-time positioning method for a ground-based aircraft walk-around inspection robot comprises the following steps: acquiring wide-area point cloud data of the aircraft; identifying and locating a plurality of preset semantic key points in the wide-area point cloud data, and computing an aircraft coordinate system referenced to the aircraft by registering the observed positions of the semantic key points against a pre-stored keypoint map template of the corresponding aircraft model; after the aircraft coordinate system is determined, acquiring in real time the point cloud data and image data captured at the current moment by a lidar and a camera carried by the robot, and inputting the point cloud data and image data at the current moment into a pre-trained multi-modal deep learning model so as to output a probability distribution characterizing the current pose of the robot in the aircraft coordinate system; and determining the real-time pose of the robot in the aircraft coordinate system according to the probability distribution. As a preferred scheme of the real-time positioning method for the aircraft walk-around inspection robot, the semantic key points are parts with stable three-dimensional s