
CN-119115935-B - Composite robot hand-eye calibration and workpiece positioning device and method based on vision

CN119115935B

Abstract

The invention discloses a vision-based composite robot hand-eye calibration and workpiece positioning device and method. The method comprises the following steps: S1, calibrating the cameras and the end-tool TCP according to a preset operation height; S2, controlling the mechanical arm to a calibration position and collecting image data for hand-eye calibration, controlling the end of the mechanical arm to the feature-point positions of a checkerboard calibration plate and collecting feature-point data for hand-eye calibration, and computing the hand-eye calibration parameters from the image data and the feature-point data; and S3, teaching the mechanical arm to the workpiece grasping pose, with an in-box camera collecting checkerboard images before and after teaching together with Aruco marker images, from which the station calibration parameters are computed. The invention completes hand-eye calibration in a simpler and faster way, solves the loss of precision in vision-guided workpiece grasping caused by chassis movement, and post-processes the grasping poses of subsequent workpieces.

Inventors

  • GAO XINGYU
  • TANG GUOHAO
  • NING LIHUA
  • HUANG YANG
  • XIE QIADONG
  • LI YU
  • MAO YIKAI
  • WANG XUHAN

Assignees

  • 桂林电子科技大学 (Guilin University of Electronic Technology)
  • 深圳市科灵机器人科技有限公司 (Shenzhen Keling Robot Technology Co., Ltd.)
  • 南宁桂电电子科技研究院有限公司 (Nanning Guidian Electronic Technology Research Institute Co., Ltd.)

Dates

Publication Date
2026-05-12
Application Date
2024-09-10

Claims (5)

  1. A vision-based compound robot hand-eye calibration and workpiece positioning method, the method comprising: step S1, calibrating the intrinsic parameters of the end camera and the in-box camera according to a preset operation height, and calibrating the TCP of the end tool; step S2, controlling the mechanical arm to a calibration position and collecting image data for hand-eye calibration, controlling the TCP tool tip to the feature-point positions of the checkerboard calibration plate and collecting feature-point data for hand-eye calibration, and computing the hand-eye calibration parameters from the image data and the feature-point data; step S3, teaching the mechanical arm to the workpiece grasping pose, collecting checkerboard images before and after teaching together with Aruco marker images by the in-box camera, and computing the station calibration parameters; in step S2, the method of collecting the image data and feature-point data and computing the hand-eye calibration parameters comprises: step S21, moving the compound robot to the working point, moving the end of the mechanical arm to the camera-calibration height, adjusting the end attitude so that the calibration plate is photographed vertically, recording the current arm pose as the hand-eye calibration pose, and capturing an image of the checkerboard calibration plate in this pose; step S22, controlling the TCP tool tip to the corner feature points of the checkerboard calibration plate, touching the m preset feature points with the needle tip in clockwise order, and recording the m spatial coordinates ^B P_j (j = 1, ..., m) in the mechanical arm base coordinate system during the touching process; step S23, computing the hand-eye calibration parameters from the captured checkerboard image and the feature-point data; in step S23, the computation comprises: detecting the corners of the checkerboard image to obtain the pixel coordinates of n corners, p_i = (u_i, v_i), i = 1, ..., n; taking the upper-left corner of the calibration plate as the origin of the calibration-plate coordinate system and constructing the world coordinates of all corners from the known square size and count, P_i = (x_i, y_i, 0); running the PnP algorithm on the one-to-one corresponding pixel and world coordinates together with the intrinsic matrix K obtained from camera calibration, to obtain the rigid transformation matrix ^C T_W of the calibration-plate coordinate system {W} relative to the camera coordinate system {C} under the hand-eye calibration pose; presetting m corners of the checkerboard as feature points with plate-frame coordinates ^W Q_j; transforming the m points into the camera frame via ^C T_W to obtain ^C Q_j = ^C T_W · ^W Q_j; obtaining the point set ^B Q_j from the TCP tool tip poses recorded while touching the checkerboard feature points in sequence; the matrix relationship between the two point sets is ^B Q_j = ^B T_C · ^C Q_j; solving this relationship by the SVD method to obtain the required hand-eye matrix ^B T_C: constructing the covariance H = Σ_j (^C Q_j − Q̄_C)(^B Q_j − Q̄_B)^T, decomposing H = U Σ V^T, and obtaining the rotation matrix R = V U^T; substituting the centroids to obtain the translation t = Q̄_B − R Q̄_C, wherein Q̄_C is the centroid of the feature points in the camera coordinate system and Q̄_B is the centroid of the feature points in the mechanical arm base coordinate system.
  2. The vision-based compound robot hand-eye calibration and workpiece positioning method according to claim 1, wherein in step S3, the method of teaching the mechanical arm to the workpiece grasping pose, collecting checkerboard and Aruco marker images with the in-box camera before and after teaching, and computing the station calibration parameters comprises: step S31, with the chassis position of the compound robot unchanged, returning the TCP tool tip to the hand-eye calibration pose, and capturing checkerboard calibration-plate images with the end camera and the in-box camera respectively; step S32, teaching the mechanical arm to above the object, adjusting the end pose to a suitable grasping pose, and photographing the Aruco marker on the mechanical arm end with the in-box camera; step S33, computing the station calibration parameters from the checkerboard images captured by the end camera and the in-box camera respectively, the TCP tool tip pose recorded when the mechanical arm is at the station pose, and the Aruco marker image.
  3. The vision-based composite robot hand-eye calibration and workpiece positioning method according to claim 2, wherein in step S33, the station calibration parameter computation comprises: performing PnP calculation on the checkerboard images captured by the end camera and the in-box camera under the hand-eye calibration pose to obtain the rotation matrices of the checkerboard calibration plate relative to the end-camera and in-box-camera coordinate systems, R_E and R_I, whose relative rotation is constant; performing PnP calculation on the Aruco marker image photographed by the in-box camera under the station calibration pose to obtain the transformation matrix ^I T_A between the Aruco coordinate system {A} and the in-box-camera coordinate system {I}, and synchronously recording the TCP tool tip pose, characterized as the rigid transformation ^B T_S of the station coordinate system {S} relative to the mechanical arm base coordinate system {B}; adjusting the mechanical arm to the hand-eye calibration pose and reusing the solved hand-eye matrix ^B T_C; and, based on the rotation matrices R_E and R_I, the transformation matrix ^I T_A and the rigid transformation ^B T_S, obtaining the coordinate-system conversion relation under the station calibration pose as ^A T_S = (^I T_A)^{-1} · ^I T_B · ^B T_S, wherein ^I T_B is a constant rigid transformation matrix, and ^I T_A is the nominal pose of the station-calibrated Aruco coordinate system relative to the in-box camera.
  4. The vision-based composite robot hand-eye calibration and workpiece positioning method of claim 1, wherein the method is implemented on a vision-based composite robot hand-eye calibration and workpiece positioning device comprising a composite robot and a vision gripper; the composite robot comprises a moving chassis and a mechanical arm, wherein the mechanical arm base is fixed on the moving chassis and moves with it; the vision gripper comprises a TCP tool tip, an end clamping jaw, an Aruco marker, an end connecting piece and an end camera, wherein the end camera, the TCP tool tip and the end connecting piece are all bolted together, and the Aruco marker is adhesively fixed to the back plate of the end connecting piece; the device further comprises a protective box, an in-box camera, workpiece carrying boxes and a checkerboard calibration plate; the protective box is the shell of the workbench at the working point and is fixedly connected with the workbench, the in-box camera is located at the top of the interior of the protective box and is bolted to it, the checkerboard calibration plate is fixed at the edge of the workbench, and the workpiece carrying boxes are arranged in sequence at the center of the workbench.
  5. The vision-based composite robot hand-eye calibration and workpiece positioning method according to claim 1, wherein the vision gripper is connected to the end flange of the mechanical arm through an L-shaped mounting piece, the end camera is bolted to one end of the L-shaped mounting piece, the end clamping jaw is mounted along the perpendicular of the mechanical arm end flange surface, and the TCP tool tip is arranged between the end clamping jaw and the end camera.
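The SVD solve described in claim 1 is the standard Kabsch point-set registration: center the matched camera-frame and base-frame feature points, decompose their covariance, and recover the translation from the centroids. A minimal sketch follows (NumPy; the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def solve_rigid_transform(src, dst):
    """Solve R, t such that dst_j ~= R @ src_j + t for two matched 3-D
    point sets (shape m x 3), via the SVD-based Kabsch method: center
    both sets, decompose the covariance, recover t from the centroids."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    c_src = src.mean(axis=0)               # centroid in the source (camera) frame
    c_dst = dst.mean(axis=0)               # centroid in the target (base) frame
    H = (src - c_src).T @ (dst - c_dst)    # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```

The determinant check is needed because for noisy or degenerate point sets the raw SVD solution can be a reflection rather than a proper rotation.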
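The station-calibration relation in claim 3 is a composition of 4x4 homogeneous rigid transforms: invert the Aruco-to-camera transform and chain it with the camera and base transforms. A hedged sketch with assumed frame names, taking all component transforms as already estimated (e.g. by PnP):

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack rotation R (3x3) and translation t (3,) into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert_rigid(T):
    """Closed-form inverse of a rigid transform: (R, t) -> (R^T, -R^T t)."""
    R, t = T[:3, :3], T[:3, 3]
    return to_homogeneous(R.T, -R.T @ t)

def station_in_aruco(T_incam_aruco, T_incam_base, T_base_station):
    """Illustrative chain: pose of the station frame expressed in the
    Aruco frame, composed from the in-box-camera observation of the
    marker and the (assumed constant) camera-to-base transform."""
    return invert_rigid(T_incam_aruco) @ T_incam_base @ T_base_station
```

Composing left-to-right like this keeps every intermediate result a valid rigid transform, so the chain can be re-evaluated whenever a fresh Aruco observation arrives.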

Description

Composite robot hand-eye calibration and workpiece positioning device and method based on vision

Technical Field

The invention belongs to the field of vision positioning and industrial robots, in particular to hand-eye calibration and vision-based workpiece positioning technology, and specifically relates to a vision-based composite robot hand-eye calibration and workpiece positioning device and method.

Background

At present, with flexibility in mind, hand-eye calibration for compound robots generally adopts an eye-in-hand vision scheme, i.e. the camera sensor moves with the robot. In the hand-eye calibration and station positioning process of an ordinary fixed-base robot, the mechanical arm usually has to be moved to several different poses while image data of a high-precision calibration plate and the corresponding arm poses are collected synchronously, so that the hand-eye calibration parameters can be solved. A working area then has to be arranged for subsequent workpiece positioning; since the working area is fixed relative to the calibration plate, their actual three-dimensional offset is measured once, i.e. station calibration. In actual work, only a new image of the calibration plate has to be taken and the known station calibration offset added through the hand-eye and arm matrix transformations, so that the robot can accurately grasp the workpiece.
For a compound robot, if hand-eye calibration is performed by sampling multiple poses at the working point, the arm end has to be adjusted to several groups of different poses there, which requires a large working space. In real production conditions, however, the calibration plate has to be placed in a narrow closed environment, where the multiple sampling poses may collide with the protective wall of the case storing the workpieces; in addition, multi-pose calibration methods are often time-consuming and complex. Meanwhile, the compound robot needs one extra step before visual station positioning: the moving chassis must first be navigated to the working point before the grasping task can be executed. Because of radar or sensor errors and environmental factors such as uneven ground at the working point, the pose with which the chassis arrives is unstable, and its position and attitude differ from those at calibration time. The traditional workpiece positioning approach reads the TCP offset before and after teaching, which accounts only for the three-dimensional translation and ignores the rotation; if this offset is simply added to the pose recorded at calibration time, the computed position deviates from the actual workpiece position and the grasping accuracy of the mechanical arm is greatly reduced. The navigation offset of the chassis therefore needs to be estimated accurately, rather than assuming the robot always reaches the default station pose.

In the prior art, publication number CN 110842928 A (published 2020.02.28) relates to a composite robot vision-guided positioning device and method in which the station calibration parameter is the TCP offset before and after teaching; only position is considered, and the error introduced by the navigation offset cannot be handled automatically, so various end-pose detection sensors have to be added, increasing hardware cost and algorithmic complexity. Publication number CN 115609591 A (published 2023.01.17) relates to a 2D-Marker-based vision positioning method and system in which station calibration reaches the grasping pose by teaching, and a rigid transformation matrix of the station coordinate system relative to the Marker coordinate system is established from the TCP end pose; the matrix is subsequently used at the grasping working point, and in theory the position of the station coordinate system after repositioning can be solved accurately. In practice, however, the composite robot can occasionally shake during arm motion due to environmental factors at the working point, so that the working point cannot reach the ac
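The background's point that a translation-only TCP offset ignores chassis rotation can be made concrete: re-expressing the taught grasp pose through the full rigid change of the station frame keeps both the rotation and the translation of the offset. A sketch under assumed 4x4 homogeneous transforms, all expressed in the arm base frame (the function name and frame layout are hypothetical):

```python
import numpy as np

def correct_grasp_pose(T_station_old, T_station_new, T_grasp_old):
    """Re-express a taught grasp pose after the station frame has been
    re-observed following chassis navigation. A translation-only fix
    (the approach the background criticizes) drops the rotation; the
    full rigid correction composes the station-frame change:
        T_grasp_new = T_station_new @ inv(T_station_old) @ T_grasp_old
    All inputs are 4x4 homogeneous transforms in the arm base frame."""
    R, t = T_station_old[:3, :3], T_station_old[:3, 3]
    inv_old = np.eye(4)                 # closed-form rigid inverse
    inv_old[:3, :3] = R.T
    inv_old[:3, 3] = -R.T @ t
    return T_station_new @ inv_old @ T_grasp_old
```

If the chassis arrives rotated relative to the taught pose, the corrected grasp rotates with the station frame instead of being offset along the stale axes, which is exactly the error mode the translation-only scheme cannot absorb.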