CN-116652962-B - Object grabbing method based on visual feedback control
Abstract
The invention discloses an object grabbing method based on visual feedback control, which belongs to the technical field of robot control and comprises the following steps: S1, acquiring position and posture information of the target object; S2, mechanical arm path planning; and S3, grabbing the target object. A Kinect vision sensor acquires the position and posture information of the target object in real time and feeds it back to the controller, which then controls the robot's mechanical arm, so that the target object is stably grasped and transferred to its destination, improving the grabbing capability and intelligence of the robot in complex environments.
Inventors
- WANG JING
- ZHAO YANGXIN
- HUANG HE
- SHEN HAO
- FANG TIAN
- SU LEI
Assignees
- 安徽工业大学 (Anhui University of Technology)
Dates
- Publication Date: 2026-05-05
- Application Date: 2023-07-11
Claims (7)
- 1. An object grabbing method based on visual feedback control, characterized by comprising the following steps:
  S1, acquiring position and posture information of the target object: install a vision camera on the robot, acquire point cloud data with the vision sensor and preprocess it; search for a plane with the PCL algorithm based on the point cloud data and fit the point cloud data to that plane, converting the point cloud data in three-dimensional space into point cloud data on the plane; identify the position of the target object using the fitted plane's normal vector and the points on the plane, obtaining the center point coordinate of the target object, i.e. its position information; then calculate the posture information of the target object from the center point coordinate;
  S2, mechanical arm path planning: carry out Cartesian path planning for the robot's mechanical arm, calculate the joint angles of the mechanical arm by inverse kinematics so that the end of the mechanical arm reaches the position of the target object, and then control the motion of the mechanical arm's end effector with a PID controller so that the end effector moves to the target object;
  S3, grabbing the target object: after path planning is completed, grasp the target object with the robot's mechanical arm; first determine the pose of the end effector, i.e. command the end effector to open; using the target object position information obtained in step S1, move the end effector to the grasping position; when the end effector reaches the grasping position, control its grasping action according to the pose of the end effector and the posture information of the target object; then move the end effector to the target position and finally put the target object down, i.e. command the end effector to open so that the target object is released, after which the mechanical arm returns to its initial pose, completing the target object grabbing task;
  in step S1, the specific process of converting the point cloud data in three-dimensional space into point cloud data on the plane is as follows:
  S111, select any point $P_i$ in the point cloud data set $S$; the selected point can be expressed as $P_i = (x_i, y_i, z_i)$; the normal vector $\vec{n}$ of the plane in which the target object lies and a point $P_0$ on the plane are expressed in homogeneous form as $\vec{n} = (n_x, n_y, n_z, 0)^T$ and $P_0 = (x_0, y_0, z_0, 1)^T$;
  S112, represent each point of the point cloud data in homogeneous coordinate form $P_i = (x_i, y_i, z_i, 1)^T$ and project the point $P_i$ along the normal vector $\vec{n}$ onto the plane to obtain the projected point $P_i'$, whose coordinates on the plane are
  $$P_i' = P_i - \frac{(P_i - P_0)\cdot\vec{n}}{\vec{n}\cdot\vec{n}}\,\vec{n}$$
  wherein $(P_i - P_0)\cdot\vec{n}$ is the inner product of $(P_i - P_0)$ with the normal, and $\vec{n}\cdot\vec{n} = \lVert\vec{n}\rVert^2$;
  S113, obtain the non-homogeneous coordinate form $P_i' = (x_i', y_i', z_i')$; the non-homogeneous coordinate forms of all projected points then form the point cloud data set on the plane, whereby the point cloud data in three-dimensional space is converted into point cloud data on the plane;
  in step S1, the specific process of calculating the center point coordinate of the target object is as follows:
  S121, extract the point coordinates of the target object through point cloud data processing and initialize the cluster centers by randomly selecting $k$ points as the initial cluster center points;
  S122, calculate the distance $d_{ij}$ from each point $P_i$ to each cluster center point $\mu_j = (x_j, y_j, z_j)$:
  $$d_{ij} = \lVert P_i - \mu_j\rVert = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}$$
  according to the distances $d_{ij}$, assign each point $P_i$ to the cluster whose center point is closest, then compute the mean of all coordinates in each cluster $C_j$ to obtain the cluster center coordinates:
  $$\bar{x}_j = \frac{1}{m_j}\sum_{P_i\in C_j} x_i,\qquad \bar{y}_j = \frac{1}{m_j}\sum_{P_i\in C_j} y_i,\qquad \bar{z}_j = \frac{1}{m_j}\sum_{P_i\in C_j} z_i$$
  wherein $m_j$ denotes the number of points in cluster $C_j$;
  S123, repeat the calculations of S122 until the cluster centers no longer change, and return the cluster center point coordinate as the center point coordinate of the target object;
  in step S1, the specific process of calculating the posture information of the target object from its center point coordinate is as follows:
  S131, perform eigenvalue decomposition of the covariance matrix $C$ to obtain the eigenvalues $\lambda_1 \ge \lambda_2 \ge \lambda_3$ and the corresponding eigenvectors $v_1, v_2, v_3$;
  S132, project each point $P_i$ onto the plane spanned by the eigenvectors $v_1, v_2$ to obtain two-dimensional point cloud data $q_i = (u_i, w_i)$, wherein $u_i = P_i\cdot v_1$ and $w_i = P_i\cdot v_2$;
  S133, according to the properties of the PCA algorithm, the principal direction vector $\vec{u} = (u_1, u_2)$ on the plane is orthogonal to the normal vector $\vec{n}$; taking $\vec{u}$ as the in-plane components and 0 as the remaining dimension gives a vector in the three-dimensional plane coordinate system:
  $$\vec{v} = (u_1, u_2, 0)^T$$
  wherein $\vec{u}$ is the principal direction vector calculated by the PCA algorithm;
  S134, rotate the vector $\vec{v}$ back into the original coordinate system: let the rotation matrix be $R$, the matrix that rotates the basis vectors on the plane $(v_1, v_2, \vec{n})$ back to the basis vectors of the original coordinate system; $R$ is then obtained as
  $$R = \begin{pmatrix} v_1 & v_2 & \vec{n}\end{pmatrix}$$
  S135, calculate the attitude information of the target object from its center point coordinate and the vector $\vec{v}$ as follows:
  S1351, let the center point coordinate of the target object be $p_c = (\bar{x}, \bar{y}, \bar{z})^T$ and the attitude vector be $\vec{t}$; then
  $$\vec{t} = R\,\vec{v}$$
  wherein $R\,\vec{v}$ denotes the product of the matrix and the vector;
  S1352, according to the properties of vectors, the orientation vector of the target object is obtained as
  $$\vec{o} = p_c + \vec{t}$$
  wherein $\vec{o}$ is the orientation vector of the target object and $p_c$ is the center point coordinate of the target object; the posture information of the target object is thereby obtained (an illustrative sketch of steps S111 to S135 follows the claims).
- 2. The object grabbing method based on visual feedback control as set forth in claim 1, wherein the preprocessing of the point cloud data in step S1 includes downsampling, filtering, and outlier removal.
- 3. The object grabbing method based on visual feedback control according to claim 1, wherein in said step S1 the plane is expressed by a normal vector $\vec{n}$ and a point $P_0$, the normal vector $\vec{n}$ being perpendicular to the plane and $P_0$ being any point on the plane.
- 4. The object grabbing method based on visual feedback control according to claim 3, wherein the specific solving process of the normal vector $\vec{n}$ is as follows:
  S101, assume a point cloud data set $S = \{P_1, P_2, \ldots, P_N\}$ containing $N$ points, where each point $P_i$ can be expressed in the form $(x_i, y_i, z_i)$; the points in the point cloud data set $S$ are regarded as vectors in three-dimensional space, i.e. $P_i = (x_i, y_i, z_i)^T$;
  S102, calculate the covariance matrix $C$ of the point cloud data set $S$ by the formula
  $$C = \frac{1}{N}\sum_{i=1}^{N}(P_i - \bar{P})(P_i - \bar{P})^T$$
  wherein $\bar{P}$ is the mean of all points, namely $\bar{P} = \frac{1}{N}\sum_{i=1}^{N}P_i$;
  S103, find the eigenvectors and the corresponding eigenvalues of the covariance matrix $C$, sort the eigenvectors by their eigenvalues from large to small, then select the $k$ eigenvectors with the smallest eigenvalues and compute the normal vector $\vec{n}$ from them; for a plane fit, $\vec{n}$ is the eigenvector $v_3$ corresponding to the smallest eigenvalue $\lambda_3$:
  $$\vec{n} = v_3$$
- 5. The object grabbing method based on visual feedback control according to claim 4, wherein the specific processing procedure of said step S2 is as follows: S21, fit the position, velocity and acceleration of the mechanical arm using cubic spline interpolation, so as to generate, between adjacent data points, piecewise cubic polynomials with continuous second derivatives; S22, control the motion of the mechanical arm with a PID controller: the PID controller computes the control output by comparing the actual output with the desired output, taking the position, velocity and acceleration of the mechanical arm as the desired output and the current state of the mechanical arm as the actual output, then computing the error signal and using the PID controller to compute the control output; and S23, send the control output to the PID controller of the mechanical arm to execute the desired motion, so that the end effector of the mechanical arm moves to the target object.
- 6. The object grabbing method based on visual feedback control according to claim 5, wherein in said step S21 the spline interpolation function is calculated using the following formula:
  $$S_i(t) = a_i + b_i(t - t_i) + c_i(t - t_i)^2 + d_i(t - t_i)^3$$
  wherein $S_i(t)$ is the spline interpolation function between $t_i$ and $t_{i+1}$, $a_i$ is the position of the $i$-th data point, $b_i$ is the velocity of the $i$-th data point, $c_i$ is the acceleration of the $i$-th data point, and $d_i$ is the rate of change of velocity at the $i$-th data point; the position, velocity and acceleration of the mechanical arm at any time instant are then calculated through the spline interpolation function.
- 7. The object grabbing method based on visual feedback control according to claim 5, wherein in said step S22 the control output is calculated using the following formula:
  $$u(t) = K_p\,e(t) + K_i\int_0^t e(\tau)\,d\tau + K_d\,\frac{de(t)}{dt}$$
  wherein $u(t)$ is the control output, $K_p$, $K_i$ and $K_d$ are the parameters of the PID controller, $e(t)$ is the error signal, $\frac{de(t)}{dt}$ is the rate of change of the error signal, and $\int_0^t e(\tau)\,d\tau$ is the integral of the error signal (see the planning-and-control sketch below).
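The two sketches below are illustrative Python/NumPy renderings of the computations the claims describe. They are minimal sketches under stated assumptions, not the patented implementation: the function names are ours, a single target (one cluster, k = 1) is assumed, and the gains and discretization in the control sketch are placeholders.

```python
# Sketch 1: step S1 (claims 1, 3, 4) - plane fitting, projection onto the
# plane, k-means center, and PCA pose. Hypothetical helper names; a single
# target (k = 1) is assumed; `points` is an (N, 3) NumPy array.
import numpy as np

def fit_plane_normal(points):
    """S101-S103: the normal is the eigenvector of the covariance matrix
    of the point cloud with the smallest eigenvalue."""
    p_bar = points.mean(axis=0)                       # mean of all points
    centered = points - p_bar
    C = centered.T @ centered / len(points)           # 3x3 covariance matrix
    _, eigvecs = np.linalg.eigh(C)                    # eigenvalues ascending
    return eigvecs[:, 0], p_bar                       # smallest-eigenvalue vector

def project_to_plane(points, n, p0):
    """S111-S113: project every point along the normal onto the plane."""
    dist = (points - p0) @ n / (n @ n)                # scaled signed distances
    return points - np.outer(dist, n)                 # P' = P - ((P-P0).n / n.n) n

def kmeans_centers(points, k=1, iters=50, seed=0):
    """S121-S123: plain k-means; for k = 1 the center is the target center."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)   # d_ij
        labels = d.argmin(axis=1)                     # nearest-center cluster
        new = np.array([points[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centers):                 # S123: stop when stable
            break
        centers = new
    return centers

def object_pose(points):
    """S131-S135: center point plus an orientation vector from PCA."""
    n, p_bar = fit_plane_normal(points)
    flat = project_to_plane(points, n, p_bar)
    p_c = kmeans_centers(flat)[0]                     # target center point
    # Principal in-plane direction: the largest-eigenvalue eigenvector of
    # the projected points' covariance (orthogonal to n by construction).
    C = np.cov((flat - p_c).T)
    t = np.linalg.eigh(C)[1][:, -1]                   # attitude vector t
    return p_c, p_c + t                               # center; orientation o = p_c + t
```

The second sketch covers the trajectory and control formulas of claims 6 and 7.

```python
# Sketch 2: step S2 (claims 5-7) - one cubic spline segment and a discrete
# PID step. Gains and time step are illustrative placeholders.

def spline_coeffs(t0, t1, q0, q1, v0, v1):
    """Cubic segment S(t) = a + b(t-t0) + c(t-t0)^2 + d(t-t0)^3 (claim 6
    form), matching position and velocity at both segment ends."""
    h = t1 - t0
    a, b = q0, v0
    c = (3.0 * (q1 - q0) / h - 2.0 * v0 - v1) / h
    d = (2.0 * (q0 - q1) / h + v0 + v1) / h ** 2
    return a, b, c, d

def pid_step(error, prev_error, integral, dt, kp=2.0, ki=0.1, kd=0.05):
    """Claim 7: u(t) = Kp e(t) + Ki * integral(e) + Kd * de/dt, discretized."""
    integral += error * dt                  # accumulate the error integral
    derivative = (error - prev_error) / dt  # rate of change of the error
    u = kp * error + ki * integral + kd * derivative
    return u, integral
```

In the loop of claim 5, the desired position, velocity and acceleration at each instant come from evaluating $S_i(t)$ and its derivatives, with `pid_step` driving each joint toward them; the gains above are arbitrary illustration values.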
Description
Object grabbing method based on visual feedback control

Technical Field
The invention relates to the technical field of robot control, and in particular to an object grabbing method based on visual feedback control.

Background
In recent decades, as automation technology has continuously developed in industry, robots of various types have found wide application in fields such as production services and manufacturing. In the field of robotic arms, object grabbing and visual recognition are key research topics. In practical applications, however, problems remain in the visual recognition and path planning of robots, such as the grasping strategy when a robot's mechanical arm grabs different objects, and the accuracy of visual recognition of targets; these problems need to be solved.

Disclosure of Invention
The invention aims to solve the technical problem of accurately identifying the position and posture of a target object, thereby achieving accurate path planning and improving the grabbing capability and intelligence of a robot in complex environments, and provides an object grabbing method based on visual feedback control. The invention solves the technical problem through the following technical scheme, comprising the following steps:

S1, acquiring position and posture information of the target object: install a vision camera on the robot, acquire point cloud data with the vision sensor and preprocess it; search for a plane with the PCL algorithm based on the point cloud data and fit the point cloud data to that plane, converting the point cloud data in three-dimensional space into point cloud data on the plane; identify the position of the target object using the fitted plane's normal vector and the points on the plane, obtaining the center point coordinate of the target object, i.e. its position information; then calculate the posture information of the target object from the center point coordinate.

S2, mechanical arm path planning: carry out Cartesian path planning for the robot's mechanical arm, calculate the joint angles of the mechanical arm by inverse kinematics so that the end of the mechanical arm reaches the position of the target object, and then control the motion of the mechanical arm's end effector with a PID controller so that the end effector moves to the target object.

S3, grabbing the target object: after path planning is completed, grasp the target object with the robot's mechanical arm; first determine the pose of the end effector, i.e. command the end effector to open; using the target object position information obtained in step S1, move the end effector to the grasping position; when the end effector reaches the grasping position, control its grasping action according to the pose of the end effector and the posture information of the target object; then move the end effector to the target position and finally put the target object down, i.e. command the end effector to open so that the target object is released, after which the mechanical arm returns to its initial pose, completing the target object grabbing task.

Further, in step S1, the robot comprises two symmetrical mechanical arms, each having 7 independently rotating joints, the rotation angle, rotation speed and torque parameters of each joint being known.

Still further, in step S1, the methods for preprocessing the point cloud data include downsampling, filtering and outlier removal.

Further, in step S1, the plane is expressed by a normal vector $\vec{n}$ and a point $P_0$, wherein the normal vector $\vec{n}$ is perpendicular to the plane and $P_0$ is any point on the plane.

Further, the specific solving process of the normal vector $\vec{n}$ is as follows:
S101, assume a point cloud data set $S = \{P_1, P_2, \ldots, P_N\}$, where each point $P_i$ can be expressed in the form $(x_i, y_i, z_i)$; the points in the point cloud data set $S$ are regarded as vectors in three-dimensional space, i.e. $P_i = (x_i, y_i, z_i)^T$;
S102, calculate the covariance matrix $C$ of the point cloud data set $S$ by the formula
$$C = \frac{1}{N}\sum_{i=1}^{N}(P_i - \bar{P})(P_i - \bar{P})^T$$
wherein $\bar{P}$ is the mean of all points, namely $\bar{P} = \frac{1}{N}\sum_{i=1}^{N}P_i$;
S103, find the eigenvectors and the corresponding eigenvalues of the covariance matrix $C$, sort the eigenvectors by their eigenvalues from large to small, then select the $k$ eigenvectors with the smallest eigenvalues and compute the normal vector $\vec{n}$ from them; for a plane fit, $\vec{n}$ is the eigenvector corresponding to the smallest eigenvalue.
Furthe
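To show how steps S1 to S3 compose, the short sketch below strings the pieces together into the grasp sequence the description gives. `object_pose` is from the first sketch after the claims; `arm` and its methods (`open_gripper`, `move_to`, `close_gripper`, `home`) are hypothetical stand-ins for the actual robot driver, which the patent does not name.

```python
# End-to-end grasp sequence (steps S1-S3), a minimal sketch.
# `arm` is a hypothetical driver object; replace with the real robot API.
def grasp(arm, camera_points, drop_pose):
    center, orientation = object_pose(camera_points)  # S1: pose from the point cloud
    arm.open_gripper()                  # S3: set the effector pose (open)
    arm.move_to(center, orientation)    # S2: planned motion to the grasp position
    arm.close_gripper()                 # grasp according to the object's posture
    arm.move_to(*drop_pose)             # carry the object to the target position
    arm.open_gripper()                  # release the object
    arm.home()                          # return to the initial pose
```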