CN-117584125-B - Micro-assembly system and method for active-passive composite micro-vision guided robot

CN117584125B

Abstract

The invention discloses a micro-assembly system and method for an active-passive composite micro-vision guided robot. The system comprises a DLP projector, a high-resolution industrial camera, a common industrial camera, a double-rate telecentric lens, a common telecentric lens, a precise positioning robot, a working platform, an operating robot, an end effector, a common positioning sliding table, and an imaging system supporting device. Coarse positioning of a target on the plane is realized through the high-resolution industrial camera, guiding the precise positioning robot to move the target to the centre of the plane. The method projects coding patterns with the DLP projector and triggers the common industrial camera to acquire images containing the coding information; a decoding algorithm processes these images and, combined with the calibration information, a three-dimensional reconstruction algorithm recovers a three-dimensional point cloud of the target. Preprocessing and spatial pose estimation of the point cloud data realize high-precision three-dimensional visual perception, and the inverse kinematics solution guides the end effector at the end of the operating robot to complete the micro-assembly task accurately.

Inventors

  • Li Hai
  • Liao Zhu
  • Zhong Yutao
  • Zeng Tingjun
  • Zhang Li
  • Cai Weibin
  • Zhang Xianmin

Assignees

  • South China University of Technology (华南理工大学)

Dates

Publication Date
2026-05-08
Application Date
2023-11-30

Claims (6)

  1. The micro-assembly system of the active-passive composite micro-vision guided robot is characterized by comprising a DLP projector (1), a high-resolution industrial camera (2), a common industrial camera (3), a double-rate telecentric lens (4), a common telecentric lens (5), a precise positioning robot (6), a working platform (7), an operating robot (8), an end effector (9), a common positioning sliding table (10) and an imaging system supporting device (11). The imaging system supporting device (11) is of a beam structure comprising a first beam and a second beam; the first beam is arranged on the front side of the second beam; the left and right sides of the two beams are each connected with a common positioning sliding table (10); rails are arranged on the two beams, which are parallel and mutually independent; the double-rate telecentric lens (4) and the common industrial cameras (3) fixed on the two beams can each realize up-down and front-back reciprocating translational movement through the common positioning sliding tables (10), so that the relative position of the two beams can be freely adjusted. The imaging system supporting device (11) is connected with the common positioning sliding table (10) and arranged above the working platform (7); the double-rate telecentric lens (4) is mounted in the middle of the imaging system supporting device (11); the common industrial cameras (3) are arranged on the left and right sides of the double-rate telecentric lens (4); the DLP projector (1) and the high-resolution industrial camera (2) are connected with the first interface and the second interface of the double-rate telecentric lens (4) respectively; the common telecentric lens (5) is connected with the common industrial camera (3); the operating robot (8) is arranged at one side of the precise positioning robot (6); and the end effector (9) is arranged at the end of the operating robot (8), within the working space on the working platform (7). The double-rate telecentric lens (4) has a vertical optical axis and is fixed on the imaging system supporting device (11), with up-down and front-back reciprocating translational movement realized through the common positioning sliding table (10). The common industrial cameras (3) have crossed optical axes and are fixed on the imaging system supporting device (11), with up-down and front-back reciprocating translational movement realized through the common positioning sliding tables (10). The common telecentric lenses (5) are connected with the common industrial cameras (3) and positioned at the left and right sides of the double-rate telecentric lens (4); the optical axes of the common telecentric lenses (5) and of the double-rate telecentric lens (4) lie in the same plane. The double-rate telecentric lens (4) comprises a telecentric objective lens group (401), a beam-splitting prism (402) and collimating lenses (403); incident light entering the double-rate telecentric lens (4) first passes through the telecentric objective lens group (401) and is then split into two beams by the beam-splitting prism (402): one beam passes directly through the prism and a collimating lens (403) towards the first interface, in the vertical direction of the double-rate telecentric lens (4); the other beam is reflected by the prism and passes through a collimating lens (403) towards the second interface, in the horizontal direction of the double-rate telecentric lens (4). The method for realizing the active-passive composite micro-vision guided robot micro-assembly system comprises the following steps: S1, constructing and calibrating the active-passive composite micro-vision guided robot micro-assembly system; S2, adjusting the common positioning sliding tables (10) so that the DLP projector (1), the high-resolution industrial camera (2) and the common industrial cameras (3) can clearly project onto and image a target on the working platform (7) within the working range; S3, shooting an image of the working platform (7) with the high-resolution industrial camera (2) to realize coarse positioning of the target on the plane, and guiding the precise positioning robot (6) to move the target to the image centre; S4, projecting coding patterns with the DLP projector (1) and triggering the common industrial cameras (3) to acquire images containing the coding pattern information; S5, decoding the images containing the coding information and, combined with the calibration information of step S1, acquiring a three-dimensional point cloud of the target with a three-dimensional reconstruction algorithm; S6, performing point cloud preprocessing and spatial pose estimation on the three-dimensional point cloud of the target, computing the inverse kinematics solution, and inputting the result to the operating robot (8) to guide the end effector to finish the corresponding operation on the target; S7, repeating steps S4 to S6 until the whole micro-assembly task is completed.
  2. The micro-assembly system of the active-passive composite micro-vision guided robot of claim 1, wherein the DLP projector (1) is connected with the first interface, in the vertical direction of the double-rate telecentric lens (4), and is used for projecting patterns with coding information; the high-resolution industrial camera (2) is connected with the second interface, in the horizontal direction of the double-rate telecentric lens (4), and is used for shooting images of the working platform; the DLP projector (1) and the high-resolution industrial camera (2) share the telecentric objective lens group (401) through the double-rate telecentric lens (4).
  3. The micro-assembly system of the active-passive composite micro-vision guided robot of claim 1, wherein the optical structures of the common telecentric lens (5) and the double-rate telecentric lens (4) are object-side telecentric or double-side telecentric.
  4. The micro-assembly system of the active-passive composite micro-vision guided robot of claim 1, wherein the precise positioning robot (6) is composed of multiple axes, each axis moving in a rotary or translational mode independently of the others.
  5. The micro-assembly system of the active-passive composite micro-vision guided robot of claim 1, wherein the operating robot (8) has multiple degrees of freedom and is freely selected according to different requirements.
  6. The micro-assembly system of the active-passive composite micro-vision guided robot of claim 1, wherein the end effector (9) is detachable and is replaced according to different requirements.
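The coarse-positioning step (S3) of claim 1 reduces to finding the target's centroid in the high-resolution camera image and computing its offset from the image centre, which the precise positioning robot then corrects. A minimal sketch, assuming the target shows up as the bright region above a fixed intensity threshold (the function name and threshold are illustrative, not from the patent):

```python
import numpy as np

def coarse_offset(image, threshold):
    """Centroid of pixels above `threshold`, returned as the (dy, dx)
    correction needed to bring the target to the image centre."""
    ys, xs = np.nonzero(image > threshold)
    cy, cx = ys.mean(), xs.mean()          # target centroid in pixels
    h, w = image.shape
    return (h - 1) / 2 - cy, (w - 1) / 2 - cx
```

In practice the pixel offset would be converted into platform coordinates via the calibration of step S1 before being sent to the precise positioning robot.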

Description

Micro-assembly system and method for active-passive composite micro-vision guided robot

Technical Field

The invention belongs to the technical field of micro-assembly, and particularly relates to an active-passive composite micro-vision guided robot micro-assembly system and method.

Background

As an important link in the production and manufacture of high-end microelectromechanical products, micro-assembly is key to guaranteeing their overall quality and performance. Compared with manual assembly assisted by a traditional microscope, automatic micro-assembly using micro-vision to guide the positioning and operating robots offers good consistency, high reliability and other advantages. The central problem in automatic micro-assembly technology is how to realize effective perception of the three-dimensional spatial information of the key targets; only when accurate three-dimensional micro-vision perception is achieved can the guided positioning and operating robots be relied upon to complete the micro-assembly task without error. Notably, most current micro-vision guided robotic micro-assembly studies are confined to planar or linear cases rather than general three-dimensional space. For the spatial assembly of micro-objects with complicated shapes requiring posture adjustment, the current mainstream solution is to decompose the assembly process through manual intervention, converting the complicated assembly positioning task into a multi-step task executed serially by a single robot (Agnus J, Chaillet N, Clévy C, et al. Robotic microassembly and micromanipulation at FEMTO-ST. Journal of Micro-Bio Robotics, 2013, 8(2): 91-106). This solution not only requires experienced personnel but also reduces efficiency and can even affect the final assembly result.
Clearly, how to use micro-vision three-dimensional spatial perception to guide positioning and operating robots in cooperatively completing general spatial micro-assembly tasks is a challenge in the development of current micro-assembly technology. Because of their greater depth of field at the same magnification compared with traditional optical microscopes, telecentric lenses based on parallel light path designs have been increasingly used in micro-vision three-dimensional perception in recent years. When a telecentric lens images at the micro scale, the intensity of natural light is low and the imaging quality of micro-vision images is generally poor; surface detail features of micro-objects are especially lacking, so an artificial visible light source is usually required, which increases the complexity of the hardware arrangement, raises cost and reduces flexibility. In the field of visual three-dimensional perception, methods can be divided into two types, active vision and passive vision, according to whether a specially coded light source is needed: active vision uses a coded light source, passive vision does not. For the former, the most common method is the structured light method based on DLP (Digital Light Processing) technology. Its basic principle is to actively project a pattern with coding information onto the scene and use a camera to observe how the projected pattern interacts with the surface of the target object in order to perceive depth; the method is insensitive to object texture and achieves high precision. Moreover, the coded light source can replace the additional artificial visible light source otherwise required by the telecentric lens, reducing the cost and complexity of the system hardware.
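The structured-light principle above can be sketched with temporal Gray-code patterns, one common coding scheme (the patent does not specify which coding it uses): each projected bit-plane lights the columns whose Gray code has that bit set, and the binarised observations at a camera pixel decode to the projector column seen there, from which depth follows by triangulation. Function names here are illustrative:

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """One pattern per bit-plane: column c is lit where bit i (MSB first)
    of the Gray code of c is 1."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)              # binary -> Gray code
    return [(gray >> (n_bits - 1 - i)) & 1 for i in range(n_bits)]

def decode_columns(bit_images):
    """Recover the projector column index observed at each pixel from the
    stack of binarised pattern images (MSB first)."""
    g = np.zeros_like(bit_images[0])
    for b in bit_images:                   # rebuild the Gray codeword
        g = (g << 1) | b
    mask = g.copy()                        # Gray -> binary
    while mask.any():
        mask >>= 1
        g = g ^ mask
    return g
```

A real system would binarise the captured camera images (e.g. against inverse patterns) before decoding; here the patterns themselves stand in for ideal observations.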
However, due to the arrangement of the projection and imaging light paths and the complexity of micro-assembly targets and tasks, problems such as occlusion easily occur. For the latter, the most classical method is binocular/multi-view stereoscopic vision. Its basic principle is to solve the spatial three-dimensional coordinates of corresponding points from the stereoscopic parallax of matched feature points across multiple views and the calibrated epipolar geometric constraints; spatial pose estimation or three-dimensional morphology perception of the corresponding targets is then realized by further processing the three-dimensional feature point coordinates obtained from the stereoscopic measurement. Visual perception from multiple viewpoints can effectively overcome occlusion, but due to insufficient surface detail features of the target object, feature point matching is easily disturbed, seriously affecting the three-dimensional perception effect, and the mechanical complexity of the system is f
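The stereo principle in this passage, recovering a 3-D point from matched pixels in two calibrated views, is commonly implemented by linear (DLT) triangulation. A minimal sketch; the intrinsics and baseline used in the example are synthetic, not from the patent:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation: each view contributes two rows
    u*P[2]-P[0] and v*P[2]-P[1]; the 3-D point is the null vector of A,
    obtained from the SVD and dehomogenised."""
    (u1, v1), (u2, v2) = uv1, uv2
    A = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noisy matches a nonlinear refinement minimising reprojection error would typically follow this linear estimate.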