CN-121157044-B - Visual servo method for monocular hand-eye robot with three-dimensional space dynamic depth estimation
Abstract
The invention discloses a visual servo method for a monocular hand-eye robot with three-dimensional space dynamic depth estimation, comprising the steps of establishing a parameterized visual servo system model, constructing an estimation-error-driven adaptive law, and executing position-based visual servo control. By establishing a unified parameterized model of the robot, the camera and the target, the depth changes caused by target motion and by robot motion are estimated within the same framework, so the method can effectively cope with the more common and more complex working condition in which both the robot and the target move. The invention solves the problem that the nonlinear, time-varying depth is difficult to measure in monocular visual servoing when no prior geometric knowledge of the observed object is available and both the target and the robot are moving, and achieves rapid convergence of the control errors and estimation errors of the hand-eye robot system.
Inventors
- HUANG YINGBO
- HU SONGSONG
- WANG XIAN
- HE HAORAN
- WANG SHUBO
- LIAO ZHIJING
Assignees
- Kunming University of Science and Technology (昆明理工大学)
Dates
- Publication Date: 2026-05-08
- Application Date: 2025-11-11
Claims (3)
- 1. A monocular hand-eye robot visual servo method with three-dimensional space dynamic depth estimation, characterized by comprising the following steps:
  S1, establishing a parameterized visual servo system model: performing unified parameterized modeling of the closed-loop system formed by the robot, the camera and the target according to the forward kinematics model of the hand-eye robot, the intrinsic and extrinsic parameters of the camera, and the image Jacobian matrix;
  S2, constructing an estimation-error-driven adaptive law: for the system model obtained in step S1, introducing auxiliary filtering variables to reconstruct measurable signals, extracting estimation-error information related to the unknown depth parameter, designing an adaptive law satisfying a Lyapunov convergence condition based on that information, estimating the time-varying depth online in real time to obtain a depth estimate, and guaranteeing convergence of the estimation error in a provable manner;
  S3, performing position-based visual servo control: back-projecting the image-plane feature points into the robot base coordinate system using the depth estimate obtained in real time in step S2 to obtain an estimate of the target three-dimensional position;
  step S2 specifically comprises the following steps:
  S201, for the parameterized visual servo system model, defining filtering variables obtained by low-pass filtering of the measurable signals, where the filter coefficient is set to a small constant value;
  S202, defining intermediate matrices driven by the filtered signals, where a forgetting factor, set to a small constant, keeps the intermediate regression matrix bounded, and the smaller the forgetting factor, the more historical information the matrix contains; the solution of these matrices contains the estimation error and a residual term obtained by low-pass filtering the derivative of the time-varying parameter;
  S203, defining auxiliary variables from the intermediate matrices so that the estimation error of the time-varying parameter is contained in the auxiliary variable, and constructing the estimation-error-driven adaptive law from the auxiliary variables, where a constant learning gain and a further constant balancing the ability to track fast-varying parameters against robustness are used; integrating the adaptive law yields the parameter estimate, from which the depth estimate is obtained; the three-dimensional coordinates of the target feature points in the robot base coordinate system are then reconstructed from the pixel coordinates of the feature points together with the camera intrinsic and extrinsic parameters and used in the visual servo control (a sketch of this estimator is given after the claims).
- 2. The monocular hand-eye robot visual servo method with three-dimensional space dynamic depth estimation according to claim 1, wherein step S1 specifically comprises the following steps:
  S101, defining the three-dimensional coordinates of a target feature point in the camera coordinate system as (X, Y, Z) and the coordinates of its projection onto the camera imaging plane as (x, y), which satisfy the perspective projection relation x = X/Z, y = Y/Z;
  S102, defining the coordinates of the feature point in the pixel plane as (u, v), related to the imaging-plane coordinates by u = f_x·x + u_0 and v = f_y·y + v_0, where (u_0, v_0) is the principal point of the camera along the two pixel axes and f_x, f_y are the focal lengths along the two axes, together forming the camera intrinsic matrix;
  S103, combining the relation of step S101 with the relation of step S102, taking the time derivative of the resulting expression, and relating the image feature velocity to the spatial velocity of the camera through the image Jacobian; if the feature point coordinates do not change with time, the velocity in this relation is the camera velocity expressed in the camera coordinate system, whereas if the feature point coordinates change with time, the feature point velocity expressed in the camera coordinate system is subtracted from the camera velocity; the expression is then simplified and rewritten in compact form;
  S104, obtaining the rotation matrix and translation vector between the robot base coordinate system and the end-effector coordinate system from the forward kinematics of the robot, obtaining the rotation matrix and translation vector between the end-effector coordinate system and the camera coordinate system from extrinsic camera calibration, and converting the end-effector velocity expressed in the base coordinate system into the camera velocity expressed in the camera coordinate system using these rotation matrices and translation vectors, where the conversion involves the skew-symmetric matrix formed from the three components of the translation vector;
  S105, obtaining the end-effector velocity in the base coordinate system from the robot Jacobian matrix and the joint velocities of the n-degree-of-freedom robot, substituting it into the expression obtained in step S103, and parameterizing the result with the unknown depth-dependent quantity as the parameter to be estimated; the parameterized visual servo system model is thus constructed (the standard projection and Jacobian relations are sketched after the claims).
- 3. The monocular hand-eye robot visual servo method with three-dimensional space dynamic depth estimation according to claim 1, wherein step S3 specifically comprises the following steps:
  S301, acquiring the pixel coordinates of the target feature point with the camera and recovering the feature point coordinates in the camera coordinate system from the estimated depth information;
  S302, converting the feature point coordinates in the camera coordinate system into feature point coordinates in the robot base coordinate system using the transformation matrices between the coordinate systems;
  S303, in order to make the robot end-effector reach the position above the target feature point and move together with it, defining an error that combines the estimated feature point coordinates in the base coordinate system with the end-effector pose of the robot, where the orientation of the target feature point in the robot base coordinate system is taken as constant over time because the target feature point is assumed to undergo only translational motion in the base coordinate system, the end-effector pose is expressed in the robot base coordinate system, and the time derivative of the error is then taken;
  S304, designing the controller based on the dynamics model of the n-degree-of-freedom robot, in which the joint displacements, velocities and accelerations of the robot appear together with the inertia matrix, the centripetal (Coriolis) torque term, the gravity torque term and the robot control torque;
  S305, adopting a PD controller whose diagonal control-gain matrices act on the pose error and velocity error defined above (a sketch of the back-projection and PD law is given after the claims).
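The filtering variables, intermediate matrices, and auxiliary variable of steps S201-S203 in claim 1 follow a filtered-regressor pattern. The patent's own equations are not reproduced in this text, so the following is only a minimal scalar sketch under the assumption that the parameterized model reduces to y(t) = w(t)·θ(t) with θ = 1/Z the time-varying inverse depth; the symbols k, ell, gamma and the simple Euler integration are illustrative choices, not the patent's formulas.

```python
import numpy as np

def adaptive_depth_estimator_step(theta_hat, w_f, y_f, P, Q,
                                  w, y, dt,
                                  k=0.01, ell=0.1, gamma=50.0):
    """One Euler step of an estimation-error-driven adaptive law.

    Hypothetical sketch of the structure described in claim 1
    (steps S201-S203).  Assumed model: y(t) = w(t) * theta(t), with
    theta = 1/Z the time-varying inverse depth, and y, w measurable
    from image features and robot kinematics.
    """
    # S201: first-order low-pass filtering of the measurable signals
    # (k is the small filter coefficient).
    w_f = w_f + dt * (w - w_f) / k
    y_f = y_f + dt * (y - y_f) / k

    # S202: intermediate quantities with forgetting factor ell, which
    # keeps them bounded; a smaller ell retains more past information.
    P = P + dt * (-ell * P + w_f * w_f)
    Q = Q + dt * (-ell * Q + w_f * y_f)

    # S203: auxiliary variable containing the estimation error
    # (H = P*theta_hat - Q = -P*(theta - theta_hat) + filtered residual).
    H = P * theta_hat - Q

    # Estimation-error-driven adaptive law (gamma is the learning gain);
    # theta_hat is obtained by integrating the law.
    theta_hat = theta_hat + dt * (-gamma * H)

    # Depth estimate recovered from the inverse-depth parameter.
    Z_hat = 1.0 / max(theta_hat, 1e-6)
    return theta_hat, Z_hat, w_f, y_f, P, Q
```

In this pattern the forgetting factor trades memory against responsiveness, which is consistent with the claim's remark that one of the constants balances the ability to track fast-varying parameters against robustness.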
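Steps S101-S104 of claim 2 describe the pinhole projection, the point-feature image Jacobian, and the hand-eye velocity transform. Since the patent's equations are not reproduced here, the sketch below uses textbook forms of these relations; the function names, the sign convention of the interaction matrix (written for a static point, whereas the patent subtracts the feature-point velocity when the target moves), and the twist-transform convention are assumptions rather than the patent's exact formulas.

```python
import numpy as np

def project_point(P_c, fx, fy, u0, v0):
    """Pinhole projection (steps S101-S102): camera-frame point -> pixel."""
    X, Y, Z = P_c
    x, y = X / Z, Y / Z          # normalized imaging-plane coordinates
    u = fx * x + u0              # pixel coordinates via the intrinsics
    v = fy * y + v0
    return np.array([u, v])

def interaction_matrix(x, y, Z):
    """Textbook point-feature image Jacobian (step S103) relating the
    normalized feature velocity to the camera spatial velocity
    [vx, vy, vz, wx, wy, wz] expressed in the camera frame."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z,  x * y,     -(1 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z,  1 + y * y, -x * y,       -x],
    ])

def end_effector_to_camera_twist(V_e, R_ec, t_ec):
    """Step S104: map an end-effector twist to a camera-frame twist using
    the hand-eye rotation R_ec and translation t_ec (illustrative
    convention; the patent's exact convention is not reproduced).  The
    skew-symmetric action of t_ec appears here as a cross product."""
    v_e, w_e = V_e[:3], V_e[3:]
    v_c = R_ec.T @ (v_e + np.cross(w_e, t_ec))
    w_c = R_ec.T @ w_e
    return np.concatenate([v_c, w_c])
```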
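Steps S301-S305 of claim 3 back-project the pixel using the estimated depth and close the loop with a PD law. The sketch below is a minimal illustration under assumed homogeneous-transform conventions; the PD law shown omits the dynamics terms of S304 (inertia, Coriolis, gravity) and any task-to-joint mapping, which a full controller would include.

```python
import numpy as np

def backproject_to_base(u, v, Z_hat, K, T_ce, T_eb):
    """Steps S301-S302: back-project a pixel into the base frame using the
    estimated depth Z_hat.  K is the 3x3 intrinsic matrix; T_ce and T_eb
    are illustrative 4x4 homogeneous transforms (camera->end-effector and
    end-effector->base); the patent's exact transform chain is not shown."""
    fx, fy = K[0, 0], K[1, 1]
    u0, v0 = K[0, 2], K[1, 2]
    # Camera-frame coordinates recovered with the estimated depth (S301).
    P_c = np.array([(u - u0) / fx * Z_hat, (v - v0) / fy * Z_hat, Z_hat, 1.0])
    # Chain of homogeneous transforms into the robot base frame (S302).
    P_b = T_eb @ (T_ce @ P_c)
    return P_b[:3]

def pd_command(e, e_dot, Kp, Kd):
    """Step S305: PD law with diagonal gain matrices acting on the pose
    error e and its derivative e_dot, producing a corrective command."""
    return Kp @ e + Kd @ e_dot
```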
Description
Visual servo method for monocular hand-eye robot with three-dimensional space dynamic depth estimation
Technical Field
The invention belongs to the technical field of hand-eye robot control, and particularly relates to a visual servo method of a monocular hand-eye robot with three-dimensional space dynamic depth estimation.
Background
Visual servoing is an important means of robot control: using visual information as feedback, it can drive a robot to carry out a variety of tasks such as positioning and assembly, tracking and docking, and grasping and placing. In a robot visual servo system, the two-dimensional pixel coordinates of a target feature point in image space are generally acquired by a camera and then mapped, by projective transformation, to three-dimensional coordinates in the robot base coordinate system. In this coordinate transformation, the relations between coordinate systems can be obtained in advance from the calibrated intrinsic and extrinsic camera parameters and the forward kinematics model of the robot. However, the depth of the camera relative to the target object changes dynamically over time and cannot be obtained from prior knowledge or static calibration. For this purpose, some researchers have proposed constructing a target motion observer that estimates the motion parameters of the target in the base coordinate system in real time from visual feedback, and then substituting them into a depth model to estimate the time-varying depth. Other studies have attempted to fuse the vision system with other types of sensors, or to recover depth information from images by triangulation using binocular vision. However, introducing additional sensors has a number of drawbacks in practical applications, including increased system cost, greater structural complexity, reduced overall reliability, and a significant computational burden. In view of these engineering limitations, recent work has focused on visual servoing strategies that rely solely on a monocular camera and compensate for the missing depth information by analytical algorithms. For image feature points acquired by a monocular vision system, existing research introduces adaptive estimation mechanisms to handle the unknown time-varying depth of the feature points during visual servoing. However, conventional adaptive estimation methods are suited only to depth variation caused by the single dynamic scene of a stationary target and a moving robot, and can hardly cover the dual dynamic scene in which the target and the robot move simultaneously. For this reason, it is necessary to develop a visual servoing method for a monocular hand-eye robot capable of three-dimensional spatial dynamic depth estimation that solves the above problems.
Disclosure of Invention
In order to solve the problems that conventional methods are limited to specific scenes and cannot estimate the time-varying depth, the invention aims to provide a monocular hand-eye robot visual servo method with three-dimensional space dynamic depth estimation. The invention establishes a unified parameterized model of the robot, the camera and the target, and estimates the depth changes caused by target motion and by robot motion within the same framework, so that it can effectively cope with the more common and complex working condition in which both the robot and the target move. The object of the invention is achieved by a method comprising the following steps:
S1, establishing a parameterized visual servo system model: performing unified parameterized modeling of the closed-loop system formed by the robot, the camera and the target according to the forward kinematics model of the hand-eye robot, the intrinsic and extrinsic parameters of the camera, and the image Jacobian matrix;
S2, constructing an estimation-error-driven adaptive law: for the system model obtained in step S1, introducing auxiliary filtering variables to reconstruct measurable signals, extracting estimation-error information related to the unknown depth parameter, designing an adaptive law satisfying a Lyapunov convergence condition based on that information, estimating the time-varying depth online in real time to obtain a depth estimate, and guaranteeing convergence of the estimation error in a provable manner;
S3, performing position-based visual servo control: back-projecting the image-plane feature points into the robot base coordinate system using the depth estimate obtained in real time in step S2 to obtain an estimate of the target three-dimensional position, defining a tracking error between the target three-dimensional position