
CN-122015833-A - Mobile robot vision track recognition and deviation correction method

CN122015833A

Abstract

The invention discloses a mobile robot visual trajectory recognition and deviation correction method in the technical field of robot navigation. The method first retrieves the sensor parameters and the stored target path, converts the path into a desired trajectory, and synchronously collects visual and inertial data. Trajectory feature points are extracted from the visual image along two parallel paths and assigned a confidence value, while the inertial data are calibrated, denoised, and pre-integrated to solve the robot pose increment. Inverse perspective transformation then yields the lateral and heading deviations between the robot and the desired trajectory. Based on the feature-point confidence, the method judges whether the visual information is degraded: when it is normal, the deviations are fed directly into an adaptive coupling controller; when it is degraded, the deviations are first estimated by extended Kalman filtering and then fed into the controller. The controller outputs driving-wheel correction amounts, which are applied to the control parameters, and the sensor parameters are updated periodically.

Inventors

  • Yang Long
  • Xiong Jiao
  • Zang Sudong
  • Zhu Wenfeng
  • Li Jian
  • Zhang Yongpeng
  • Wang Kangbin

Assignees

  • 上海恒泽辅汇智能科技有限公司
  • 上海恒启智向智能科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-04-16

Claims (11)

  1. A method for visual trajectory recognition and deviation correction of a mobile robot, comprising: S1, acquiring pre-stored target path information and a visual sensor identifier, and calling a corresponding sensor parameter set according to the visual sensor identifier; S2, converting the target path information into a desired trajectory based on the sensor parameter set, and synchronously acquiring a visual image collected by the visual sensor and raw inertial data collected by an inertial measurement unit; S3, extracting features from the visual image to obtain trajectory feature points and their confidence, performing zero-offset calibration and noise suppression on the raw inertial data, and then pre-integrating to calculate the robot pose increment; S4, performing inverse perspective transformation on the trajectory feature points using the sensor parameter set to obtain the lateral deviation and heading deviation between the robot and the desired trajectory; S5, if the confidence of the trajectory feature points of the current visual image is higher than a threshold, no trajectory feature points are lost over consecutive frames, and the lateral deviation is within a preset allowable range, inputting the lateral deviation and heading deviation into an adaptive coupling controller and outputting a steering angle correction and a wheel speed correction for the driving wheel; otherwise, judging that the visual information is degraded, obtaining predicted lateral and heading deviations between the robot and the desired trajectory based on the robot pose increment and historical pose information, inputting the predicted deviations into the adaptive coupling controller, and outputting the steering angle correction and wheel speed correction for the driving wheel; and S6, applying the steering angle correction and wheel speed correction to the driving wheel control parameters of the robot, and simultaneously updating the sensor parameter set.
  2. The method for visual trajectory recognition and deviation correction of a mobile robot according to claim 1, wherein converting the target path information into the desired trajectory based on the sensor parameter set comprises: performing projection transformation on the three-dimensional coordinates of pre-stored path points in the world coordinate system based on the camera intrinsic parameters and distortion coefficients in the sensor parameter set, and mapping them to the pixel coordinate system to generate a pixel coordinate sequence; establishing an inverse perspective transformation model from the pixel coordinate system to the robot body coordinate system by combining the installation height and the pitch angle; converting the pixel coordinate sequence to the robot body coordinate system through the inverse perspective transformation model to obtain a discrete sequence of desired trajectory points; and generating a desired trajectory function by a cubic spline interpolation algorithm based on the desired trajectory point sequence in the robot body coordinate system.
  3. The method for visual trajectory recognition and deviation correction of a mobile robot according to claim 1, wherein extracting features from the visual image to obtain the trajectory feature points and their confidence comprises: S3.1, executing a first feature extraction path and a second feature extraction path on the visual image in parallel, wherein the first feature extraction path adopts a lightweight neural network model based on depthwise separable convolution, performs pixel-level semantic segmentation on the visual image, and outputs a pixel-region probability map identifying the track region, and the second feature extraction path performs geometric edge detection on the visual image to output a set of geometric edge line segments; and S3.2, performing connected-domain analysis and centerline extraction on the pixel-region probability map to obtain a first candidate track point sequence based on semantic segmentation, and performing line-segment clustering and fitting under direction and position constraints on the geometric edge line segment set to obtain a second candidate track edge line pair based on geometric edges.
  4. The method for visual trajectory recognition and deviation correction of a mobile robot according to claim 3, wherein extracting features from the visual image to obtain the trajectory feature points and their confidence further comprises: S3.3, if the position coincidence of the first candidate track point sequence and the second candidate track edge line pair in image space is higher than a preset first threshold, fusing the two and taking the midline of the second candidate track edge line pair as the final trajectory feature points, with their confidence set to a first confidence value; if only the first candidate track point sequence is validly detected, taking it as the trajectory feature points, with confidence set to a second confidence value; if only the second candidate track edge line pair is validly detected, taking its midline as the trajectory feature points, with confidence set to the second confidence value; and if neither is validly detected, determining that the trajectory feature points of the frame are missing and the confidence is zero.
  5. The method for visual trajectory recognition and deviation correction of a mobile robot according to claim 1, wherein performing zero-offset calibration and noise suppression on the raw inertial data and then pre-integrating to calculate the robot pose increment comprises: obtaining the raw angular velocity and raw acceleration data output by the inertial measurement unit; performing zero-offset calibration on them by a sliding-window mean method to obtain calibrated angular velocity and acceleration data; suppressing noise in the calibrated data with a first-order low-pass filter; and performing pre-integration on the noise-suppressed angular velocity and acceleration data over the time interval between the acquisition times of two adjacent visual image frames, solving for the relative position increment, relative velocity increment, and relative attitude increment of the robot with respect to the start of the interval, which together constitute the robot pose increment.
  6. The method for visual trajectory recognition and deviation correction of a mobile robot according to claim 2, wherein step S4 comprises: S4.1, back-projecting the pixel coordinates of the trajectory feature points to a normalized camera coordinate system with the camera optical center as origin according to the camera intrinsic parameters, and calculating a homography transformation matrix from the normalized camera coordinate system to the robot body coordinate system based on the installation height and pitch angle; S4.2, applying the homography transformation matrix to transform the trajectory feature point coordinates from the normalized camera coordinate system to the robot body coordinate system, obtaining the lateral coordinates of the trajectory feature points in the robot body coordinate system; S4.3, taking the confidence of each trajectory feature point as its weight and computing the weighted average of the lateral coordinates of all trajectory feature points in the robot body coordinate system to obtain the lateral deviation between the robot and the desired trajectory; S4.4, in the pixel coordinate system, fitting a straight line by the least squares method with the longitudinal pixel coordinates of all trajectory feature points as the independent variable and the lateral pixel coordinates as the dependent variable, and calculating the angle between the fitted line and the longitudinal axis of the image as the image heading angle; and S4.5, converting the image heading angle into the robot body coordinate system based on the camera intrinsic parameters and installation pitch angle to obtain the actual heading angle of the trajectory in the current visual image, and calculating the difference between the actual heading angle and the tangent direction angle of the desired trajectory function at the current robot position as the heading deviation.
  7. The method for visual trajectory recognition and deviation correction of a mobile robot according to claim 1, wherein the condition for determining in S5 that the visual information is degraded is based on a comprehensive evaluation of the confidence sequence and the geometric continuity of the trajectory feature points over a dynamic sliding window of multi-frame visual images, and specifically comprises: S5.1, calculating the sliding-average confidence c̄ over the N frames before the current moment t; S5.2, calculating the position offset Δp and direction offset Δθ between the trajectory feature points of the current frame and the previous frame after coordinate transformation into the robot body coordinate system; S5.3, if c̄ is lower than a first confidence threshold T1 and the number of consecutive such frames M exceeds a set first frame-number threshold, determining a first type of degradation, persistent low confidence; S5.4, if the confidence c_t of the current frame is not lower than a second confidence threshold T2, but Δp or Δθ exceeds a plausibility deviation threshold dynamically calculated from the current speed of the robot, determining a second type of degradation, geometric mutation; S5.5, if within K consecutive frames the number of oscillations of the confidence between T1 and T2 exceeds a preset oscillation-count threshold, determining a third type of degradation, confidence oscillation; and S5.6, if any one of the conditions of S5.3, S5.4, and S5.5 is met, triggering a visual information degradation flag.
  8. The method for visual trajectory recognition and deviation correction of a mobile robot according to claim 1, wherein obtaining the predicted lateral and heading deviations between the robot and the desired trajectory based on the robot pose increment and historical pose information comprises: when visual information degradation is determined, starting a state estimation algorithm based on an extended Kalman filter; taking the position, heading angle, lateral deviation, and heading deviation of the robot at the previous moment as the state vector, and the robot pose increment as the control input vector; establishing, based on the robot kinematics model, a nonlinear state transition equation describing the evolution of the state vector with the control input vector, and linearizing it by first-order Taylor expansion at the current state estimate to obtain the state transition matrix; and performing prior estimation of the state vector using the state transition matrix and the current robot pose increment, outputting the lateral deviation and heading deviation contained in the prior estimate as the predicted lateral and heading deviations.
  9. The method for visual trajectory recognition and deviation correction of a mobile robot according to claim 1, wherein the adaptive coupling controller is a parallel coupling of a proportional-integral-derivative (PID) controller and a sliding-mode variable structure controller; the PID controller calculates a basic steering control quantity and a basic wheel speed control quantity from the weighted sum of the lateral deviation and the heading deviation; the sliding-mode variable structure controller defines a sliding surface function from the rate of change of the lateral deviation and calculates a robust compensation control quantity; the basic steering control quantity is added to the steering component of the robust compensation control quantity to obtain the final steering angle correction, and the basic wheel speed control quantity is added to the wheel speed component of the robust compensation control quantity to obtain the final wheel speed correction.
  10. The method for visual trajectory recognition and deviation correction of a mobile robot according to claim 1, wherein applying the steering angle correction and the wheel speed correction to the driving wheel control parameters of the robot comprises: S6.1, acquiring the current steering angle setpoint of the robot's steering servo motor and the current wheel speed setpoint of the hub drive motor; S6.2, adding the steering angle correction to the current steering angle setpoint to obtain an updated steering angle control command, and adding the wheel speed correction to the current wheel speed setpoint to obtain an updated wheel speed control command; and S6.3, sending the updated steering angle control command and wheel speed control command through the robot's low-level drive controller to the steering servo motor and the hub drive motor, respectively, for execution.
  11. The method for visual trajectory recognition and deviation correction of a mobile robot according to claim 1, wherein updating the sensor parameter set comprises: S6.4, periodically recalculating the camera intrinsic matrix and distortion coefficient vector by an online calibration algorithm while the robot is running; S6.5, measuring the actual distance between the robot and the ground in real time with a laser ranging sensor mounted on the robot, and taking it as the updated installation height; S6.6, calculating the updated pitch angle of the camera relative to the horizontal plane from the robot pitch attitude computed by the inertial measurement unit, combined with the fixed installation angle between the camera and the robot body; and S6.7, storing the recalculated camera intrinsic matrix, distortion coefficient vector, updated installation height, and updated pitch angle into the sensor parameter set.
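The dual-path fusion rule of claims 3 and 4 can be illustrated with a short sketch. This is not the patented implementation: the function name, the concrete confidence values (0.9 and 0.6), and the overlap threshold are assumptions standing in for the patent's unspecified "first" and "second" confidence values.

```python
# Sketch of the claim 4 fusion rule. Confidence values and the overlap
# threshold are illustrative placeholders, not taken from the patent.
def fuse_tracks(seg_points, edge_pair, overlap, overlap_threshold=0.5,
                first_conf=0.9, second_conf=0.6):
    """seg_points: centerline points from the segmentation path (or None).
    edge_pair: (left, right) edge lines from the geometric path (or None).
    overlap: position coincidence of the two results in image space."""
    def midline(pair):
        left, right = pair
        return [((l[0] + r[0]) / 2, (l[1] + r[1]) / 2)
                for l, r in zip(left, right)]
    if seg_points and edge_pair and overlap > overlap_threshold:
        return midline(edge_pair), first_conf      # both paths agree
    if seg_points:
        return seg_points, second_conf             # segmentation only
    if edge_pair:
        return midline(edge_pair), second_conf     # geometric edges only
    return None, 0.0                               # feature points missing
```

The degenerate cases return the lower confidence value or zero, which is exactly the signal claim 7 later evaluates in its sliding window.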
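The zero-offset calibration, low-pass filtering, and pre-integration of claim 5 can be sketched for the planar case as follows. The window size, filter coefficient, quasi-static bias window, and 2-D simplification are all assumptions; a real implementation would integrate the full 3-D attitude.

```python
# Sketch of the claim 5 IMU pipeline: sliding-window bias estimate,
# first-order low-pass filter, then Euler pre-integration between two
# image timestamps. Planar (2-D) motion is assumed for brevity.
import math

def preintegrate(gyro, accel, dt, bias_window=50, alpha=0.2):
    """gyro: yaw-rate samples (rad/s); accel: (ax, ay) body samples (m/s^2);
    dt: IMU sample period (s). Returns (dx, dy, dv, dtheta) increments."""
    # Zero-offset calibration: sliding-window mean, assuming the window
    # covers a quasi-static stretch of data.
    gb = sum(gyro[:bias_window]) / min(bias_window, len(gyro))
    theta = vx = vy = x = y = 0.0
    g_f = 0.0
    for w, (ax, ay) in zip(gyro, accel):
        g_f = alpha * (w - gb) + (1 - alpha) * g_f   # low-pass filtered rate
        theta += g_f * dt                            # attitude increment
        c, s = math.cos(theta), math.sin(theta)      # body -> integration frame
        vx += (c * ax - s * ay) * dt                 # velocity increment
        vy += (s * ax + c * ay) * dt
        x += vx * dt                                 # position increment
        y += vy * dt
    return x, y, math.hypot(vx, vy), theta
```

The returned tuple corresponds to the relative position, velocity, and attitude increments the claim combines into the robot pose increment.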
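The inverse-perspective deviation computation of claim 6 (steps S4.1 to S4.5) might look like the flat-ground sketch below. The ground-intersection geometry, axis conventions, and parameter names are assumptions; a production version would apply the calibrated distortion model before back-projection.

```python
# Illustrative sketch of S4: pixel coordinates -> body-frame lateral and
# heading deviation via inverse perspective mapping (flat-ground model).
import numpy as np

def lateral_and_heading_deviation(pixels, confidences, K, h, pitch,
                                  desired_heading):
    """pixels: (N,2) array of (u, v) trajectory feature points.
    confidences: (N,) per-point confidence weights.
    K: 3x3 camera intrinsic matrix; h: camera mounting height (m);
    pitch: camera pitch angle (rad); desired_heading: tangent angle of
    the desired trajectory at the current position (rad)."""
    # S4.1: back-project pixels to the normalized camera coordinate system.
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])
    rays = (np.linalg.inv(K) @ uv1.T).T
    # S4.1/S4.2: rotate by the pitch angle and intersect each ray with the
    # ground plane (this intersection is what the homography encodes).
    c, s = np.cos(pitch), np.sin(pitch)
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    pts = []
    for r in rays @ R.T:
        t = h / r[1]                     # scale so the ray hits the ground
        pts.append([t * r[2], t * r[0]]) # (forward x, lateral y) in body frame
    pts = np.asarray(pts)
    # S4.3: confidence-weighted average of the lateral coordinates.
    w = confidences / confidences.sum()
    lateral_dev = float(w @ pts[:, 1])
    # S4.4/S4.5: least-squares line fit for the heading angle, fitted
    # directly in the body frame here for brevity.
    slope = np.polyfit(pts[:, 0], pts[:, 1], 1)[0]
    heading_dev = float(np.arctan(slope) - desired_heading)
    return lateral_dev, heading_dev
```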
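Claim 7's three degradation tests can be approximated by the sketch below. The patent's inline formulas did not survive translation, so the thresholds T1 and T2, the window sizes, and the offset limit here are purely illustrative placeholders rather than the claimed quantities.

```python
# Sketch of the claim 7 degradation decision over a sliding window.
# All threshold values are assumptions standing in for the patent's
# untranslated symbols.
def vision_degraded(conf_history, offsets, T1=0.3, T2=0.5,
                    M_th=5, K=10, osc_th=4, offset_limit=0.2):
    """conf_history: per-frame confidences, newest last.
    offsets: per-frame feature-point position jumps in the body frame."""
    window = conf_history[-K:]
    mean_conf = sum(window) / len(window)
    # Type 1: persistently low confidence for more than M_th frames.
    run = 0
    for c in reversed(conf_history):
        if c < T1:
            run += 1
        else:
            break
    if mean_conf < T1 and run > M_th:
        return True
    # Type 2: confidence acceptable but geometry jumps abruptly
    # (offset_limit would be computed from the current robot speed).
    if window[-1] >= T2 and offsets and abs(offsets[-1]) > offset_limit:
        return True
    # Type 3: confidence oscillating across the T1/T2 band.
    crossings = sum(1 for a, b in zip(window, window[1:])
                    if (a < T1) != (b < T1) or (a > T2) != (b > T2))
    return crossings > osc_th
```

Any True result would set the degradation flag of S5.6 and route the deviations through the Kalman prediction path instead of the direct visual path.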
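The prediction step of claim 8 (prior estimation only, with no measurement update, since vision is unavailable) can be sketched as follows. The five-element state layout and the deviation-propagation terms are assumptions consistent with, but not quoted from, the claim.

```python
# Sketch of the claim 8 EKF prior: propagate the last known deviations
# with the IMU pose increment when visual information is degraded.
import numpy as np

def ekf_predict(state, P, pose_inc, Q):
    """state = [x, y, theta, lat_dev, head_dev]; pose_inc = (dx, dy, dtheta)
    in the body frame; P, Q: 5x5 covariance matrices."""
    x, y, th, e_lat, e_head = state
    dx, dy, dth = pose_inc
    # Nonlinear state transition from the kinematic model.
    x_n = x + dx * np.cos(th) - dy * np.sin(th)
    y_n = y + dx * np.sin(th) + dy * np.cos(th)
    # Assumed evolution: lateral deviation grows with the heading error,
    # heading deviation grows with the yaw increment.
    e_lat_n = e_lat + dx * np.sin(e_head)
    e_head_n = e_head + dth
    state_n = np.array([x_n, y_n, th + dth, e_lat_n, e_head_n])
    # First-order Taylor expansion (Jacobian) at the current estimate.
    F = np.eye(5)
    F[0, 2] = -dx * np.sin(th) - dy * np.cos(th)
    F[1, 2] = dx * np.cos(th) - dy * np.sin(th)
    F[3, 4] = dx * np.cos(e_head)
    P_n = F @ P @ F.T + Q
    return state_n, P_n, (e_lat_n, e_head_n)   # predicted deviations
```

The returned deviation pair is what S5 feeds into the adaptive coupling controller in place of the visual measurement.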
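The parallel PID plus sliding-mode structure of claim 9 can be sketched as below. All gains are placeholder assumptions, and tanh replaces the discontinuous sign function of classic sliding-mode control to limit chattering, a common but here assumed design choice.

```python
# Sketch of the claim 9 adaptive coupling controller: a PID branch on the
# weighted deviation in parallel with a sliding-mode branch on the lateral
# error rate. Gains and weights are illustrative only.
import math

class CoupledController:
    def __init__(self, kp=1.0, ki=0.1, kd=0.05, w_lat=0.7, w_head=0.3,
                 lam=2.0, eta=0.5, dt=0.02):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.w_lat, self.w_head = w_lat, w_head
        self.lam, self.eta, self.dt = lam, eta, dt
        self.integral = 0.0
        self.prev_err = 0.0
        self.prev_lat = 0.0

    def step(self, lat_dev, head_dev):
        # PID branch on the weighted deviation sum.
        err = self.w_lat * lat_dev + self.w_head * head_dev
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        base = self.kp * err + self.ki * self.integral + self.kd * deriv
        self.prev_err = err
        # Sliding-mode branch: surface s = lam*e + de/dt, smooth reaching law.
        lat_rate = (lat_dev - self.prev_lat) / self.dt
        self.prev_lat = lat_dev
        s = self.lam * lat_dev + lat_rate
        robust = self.eta * math.tanh(s)       # smooth sign() limits chatter
        steer_corr = base + robust             # steering = PID + robust term
        speed_corr = -0.5 * abs(robust)        # slow down under large error
        return steer_corr, speed_corr
```

The two outputs map onto the steering angle correction and wheel speed correction that S6 adds to the current motor setpoints.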

Description

Mobile robot vision track recognition and deviation correction method

Technical Field

The invention belongs to the technical field of robot navigation, and particularly relates to a method for visual trajectory recognition and deviation correction of a mobile robot.

Background

Trajectory tracking and deviation correction is a core technology in the autonomous navigation of mobile robots. It is widely applied in scenarios such as industrial inspection, warehouse logistics, and service delivery, and its recognition precision and correction latency directly determine the running stability and operational reliability of the robot. Current robot trajectory recognition mostly depends on a single vision sensor, obtaining trajectory information through image feature extraction. Affected by factors such as illumination change, ground occlusion, and missing environmental texture, feature point detection easily fails and the confidence drops sharply, so trajectory recognition is interrupted and the deviation cannot be accurately solved. Meanwhile, existing trajectory correction methods usually adopt a single control algorithm, depend strongly on visual sensing, and lack an effective multi-source information fusion and degradation handling mechanism, so continuous deviation prediction and correction are difficult to achieve when visual information fails, and the robot is prone to trajectory deviation and loss of control. In addition, the sensor parameters are fixed values calibrated offline; small changes in camera installation height and pitch angle, and drift of the intrinsic parameters and distortion coefficients during operation, cause accumulated errors in coordinate transformation and trajectory calculation, further reducing trajectory tracking precision.
The prior art suffers from problems such as poor synchronization of visual and inertial sensing data, a single trajectory feature extraction mode, and deviation calculation that ignores the confidence weights of the feature points, so the robot adapts poorly to complex scenes and the robustness and accuracy of trajectory tracking fall short of practical application demands. Therefore, developing a trajectory recognition and deviation correction method that fuses multi-source sensing information, handles visual information degradation, and dynamically updates the sensor parameters has become a key requirement for the development of autonomous navigation technology for mobile robots.

Disclosure of Invention

Aiming at the defects of the prior art, the invention provides a visual trajectory recognition and deviation correction method for a mobile robot. The method first retrieves the sensor parameters and the stored target path, converts the path into a desired trajectory, and synchronously collects visual and inertial data; extracts trajectory feature points from the visual image along two parallel paths and assigns a confidence value; calibrates, denoises, and pre-integrates the inertial data to solve the pose increment; obtains the lateral and heading deviations between the robot and the desired trajectory through inverse perspective transformation; judges whether the visual information is degraded according to the feature-point confidence, inputting the deviations directly into an adaptive coupling controller when vision is normal, and estimating the deviations by extended Kalman filtering before inputting them into the controller when vision is degraded; outputs driving-wheel correction amounts and applies them to the control parameters; and periodically updates the sensor parameters. In order to achieve the above purpose, the present invention provides the following technical solution: a mobile robot visual trajectory recognition and deviation correction method, comprising the following steps: S1, acquiring pre-stored target path information and a visual sensor identifier, and calling a corresponding sensor parameter set according to the visual sensor identifier; S2, converting the target path information into a desired trajectory based on the sensor parameter set, and synchronously acquiring a visual image collected by the visual sensor and raw inertial data collected by an inertial measurement unit; S3, extracting features from the visual image to obtain trajectory feature points and their confidence, performing zero-offset calibration and noise suppression on the raw inertial data, and then pre-integrating to calculate the robot pose increment; S4, performing inverse perspective transformation on the trajectory feature points using the sensor parameter set to obtain the lateral deviation and