CN-121973202-A - Composite robot positioning method and system based on global and local distributed vision cooperation
Abstract
The invention discloses a composite robot positioning method based on global and local distributed vision cooperation. The method comprises: obtaining pose information of a target object and a carrying mechanism in a global coordinate system with a global camera; inversely solving a predicted pose of the mechanical arm from the pose information and the field-of-view parameters of the local camera; planning a collision-free path with a pre-established global and local distributed vision cooperative positioning model, based on the predicted pose and an environment safety envelope obtained by the global camera; controlling the mechanical arm to move to the predicted pose to complete the field-of-view handover of the vision system; and collecting images of the target object with the local camera while a visual servo control method guides the end of the mechanical arm to correct position deviation in real time until the arm reaches the target operation position. The disclosed method and system improve operation cycle time and positioning robustness, and reconcile a large field of view with high precision.
Inventors
- YAN HONGKUN
- CHANG HAONAN
- WEI JINYU
- HUA TIANYU
Assignees
- 苏州灵视视觉科技有限公司
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2026-01-28
Claims (10)
- 1. A composite robot positioning method based on global and local distributed vision cooperation, characterized by comprising the following steps: S1, acquiring operation scene information and pose information of a target object and a carrying mechanism in a global coordinate system with a global camera, and constructing an environment safety envelope; S2, inversely calculating a predicted pose of the mechanical arm from the pose information and the field-of-view parameters of the local camera, wherein the predicted pose comprises the end pose of the mechanical arm that places the target object within the optimal imaging field of view of the local camera; S3, planning a collision-free path through a pre-established global and local distributed vision co-location model, based on the predicted pose and the environment safety envelope acquired by the global camera, and controlling the mechanical arm to move along the collision-free path to the predicted pose to complete the field-of-view handover of the vision system; and S4, acquiring an image of the target object with the local camera, and guiding the end of the mechanical arm with a visual servo control method to correct position deviation in real time until the end is positioned at the target operation position.
- 2. The composite robot positioning method based on global and local distributed vision cooperation according to claim 1, wherein the predicted-pose solving in step S2 comprises: establishing a local observation coordinate system with the pose of the target object as its origin; calculating the ideal pose of the end camera (the local camera) in the observation coordinate system from the optimal observation distance d_opt and the optical-axis direction constraint of the local camera; and converting the ideal pose, through a chain of coordinate transformations, into the end-flange pose in the mechanical arm base coordinate system as the predicted pose: T_B^E = T_B^O · (T_E^O)^{-1}, where T_B^O is the pose transformation matrix from the base coordinate system B to the target object coordinate system O, and (T_E^O)^{-1} is the inverse of the transformation matrix from the end-effector coordinate system E to the target object coordinate system O.
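The chained-transform computation of claim 2 can be sketched with homogeneous matrices. This is an illustrative reconstruction, not the patented implementation; the frame values and d_opt = 0.25 m are assumptions.

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Coarse object pose in the arm base frame, T_B_O (from the global camera; toy value).
T_B_O = make_T(np.eye(3), [0.60, 0.10, 0.30])

# Ideal object pose in the end-effector frame, T_E_O: object centered on the
# local camera's optical axis at the optimal observation distance d_opt.
d_opt = 0.25  # metres (assumed value)
T_E_O = make_T(np.eye(3), [0.0, 0.0, d_opt])

# Predicted flange pose: T_B_E = T_B_O @ inv(T_E_O).
T_B_E = T_B_O @ np.linalg.inv(T_E_O)
```

With these toy values the flange stops d_opt short of the object along its own z (optical) axis.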
- 3. The composite robot positioning method based on global and local distributed vision cooperation according to claim 2, wherein the field-of-view back-calculation comprises setting an ideal observation distance d_opt and an ideal observation angle of the end camera; the calculation of the predicted pose comprises translating, from the pose information T_B^O of the target object, along the normal direction of the target by d_opt to obtain the theoretical pose that the end of the mechanical arm should reach: T_B^E = T_B^O · T_O^E, where T_B^O is the coarse pose transformation matrix from the base coordinate system B to the target coordinate system O, and T_O^E is the offset transformation matrix from the target coordinate system O to the end-effector coordinate system E; the inverse kinematics solving and optimization comprises, if multiple groups of solutions exist, selecting with a weighting function the group with the minimum joint movement and no singular state; and the obstacle avoidance planning comprises planning, with the environmental point cloud provided by the global camera and a rapidly-exploring random tree (RRT) algorithm, a collision-free path from the current pose to the predicted pose T_B^E.
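The solution-selection step of claim 3 (weighting function favouring minimum joint travel and non-singular configurations) might look like the following sketch; the cost weights, the manipulability threshold, and the toy candidate data are assumptions, not values from the patent.

```python
import numpy as np

def select_ik_solution(solutions, q_current, manip, w_move=1.0, w_sing=0.5, eps=1e-3):
    """Pick the IK solution with the lowest weighted cost: joint travel plus a
    singularity penalty from the manipulability measure. Near-singular
    candidates (manipulability below eps) are discarded outright."""
    best, best_cost = None, float("inf")
    q_current = np.asarray(q_current, float)
    for q in solutions:
        m = manip(q)
        if m < eps:
            continue
        cost = w_move * np.abs(np.asarray(q, float) - q_current).sum() + w_sing / m
        if cost < best_cost:
            best, best_cost = q, cost
    return best

# Toy candidates: close-and-dexterous, far, and near-singular (assumed values).
manip_of = {(0.1, 0.0, 0.0): 0.5, (2.0, 2.0, 2.0): 0.9, (0.05, 0.0, 0.0): 1e-4}
candidates = [list(k) for k in manip_of]
best = select_ik_solution(candidates, [0.0, 0.0, 0.0], lambda q: manip_of[tuple(q)])
```

The nearest candidate wins despite its middling manipulability, because the near-singular one is rejected before costing.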
- 4. The composite robot positioning method based on global and local distributed vision cooperation according to claim 2, wherein the visual servo control method in step S4 comprises: constructing an image feature Jacobian matrix; establishing, based on the image feature Jacobian matrix, a mapping between image feature errors and the end velocity of the mechanical arm; and generating velocity control commands for the mechanical arm in real time with a proportional-integral-derivative control law or an adaptive-gain control law, so as to correct positioning errors caused by parking deviation of the conveying mechanism, uneven ground, and global camera calibration residuals.
- 5. The composite robot positioning method based on global and local distributed vision cooperation according to claim 4, wherein the field-of-view handover exception handling logic detects whether the local camera successfully identifies the target object features after the mechanical arm reaches the predicted pose; if identification succeeds, it switches to step S4 for local visual servoing; if identification fails, it keeps the mechanical arm base stationary and controls the end effector to perform a spiral search or zoom scan in a preset neighborhood of the predicted pose until the target features are captured.
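The spiral search of claim 5 can be illustrated by generating lateral offsets around the predicted pose; the step size and point density below are assumed values, and mapping the offsets onto end-effector motions is left to the controller.

```python
import math

def spiral_search_offsets(step=0.01, turns=3, points_per_turn=12):
    """Generate XY offsets of an Archimedean spiral around the predicted pose,
    sweeping the local camera's field of view until the target is captured.
    The radius grows by `step` metres per full turn."""
    offsets = []
    for i in range(1, turns * points_per_turn + 1):
        theta = 2 * math.pi * i / points_per_turn
        r = step * theta / (2 * math.pi)
        offsets.append((r * math.cos(theta), r * math.sin(theta)))
    return offsets

waypoints = spiral_search_offsets()
```

Each waypoint would be visited in turn, re-running target detection, until the feature is found or the spiral is exhausted.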
- 6. The composite robot positioning method based on global and local distributed vision cooperation according to claim 5, wherein step S1 further comprises calibrating and initializing the system in a system deployment stage; the system deployment comprises constructing a composite robot positioning system based on global and local distributed vision cooperation; the deployed system establishes the coordinate transformation chain of the whole system through system calibration and initialization, which comprises: determining the pose matrix T_W^Cg of the global camera Cg relative to the world coordinate system W, to realize global camera calibration; determining the pose matrix T_E^Cl of the local camera Cl relative to the end flange E of the mechanical arm, to realize hand-eye calibration; and determining the rigid transformation T_A^B of the mechanical arm base B relative to the conveying mechanism center A, to realize joint calibration of the robot and the carrying mechanism.
- 7. The composite robot positioning method based on global and local distributed vision cooperation according to claim 6, wherein the acquisition of the pose information comprises global vision perception and coarse positioning, comprising: triggering the global vision system after the carrying mechanism has navigated to the vicinity of the working point and stopped; the global camera captures a panoramic image containing the target object; identifying the target object region with a deep-learning object detection algorithm or a two-dimensional code recognition algorithm; solving the pose T_Cg^O of the target object coordinate system O relative to the global camera Cg with a PnP algorithm, while the carrying mechanism uploads real-time odometry data T_W^A; and converting the target pose into the robot base coordinate system B to obtain the target pose as the pose information: T_B^O = (T_A^B)^{-1} · (T_W^A)^{-1} · T_W^Cg · T_Cg^O, where T_W^A is the pose transformation matrix from the world coordinate system W to the conveying mechanism center A, T_A^B is the pose transformation matrix of the mechanical arm base B relative to the conveying mechanism center A, T_W^Cg is the pose transformation matrix from the world coordinate system W to the global camera Cg, and T_Cg^O is the pose transformation matrix from the target object coordinate system O to the global camera Cg.
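The frame-conversion formula of claim 7 reduces to a product of homogeneous transforms. A minimal sketch, with pure-translation toy transforms standing in for real calibration, odometry, and PnP results:

```python
import numpy as np

def trans(x, y, z):
    """Homogeneous transform with identity rotation (toy helper)."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def object_pose_in_base(T_A_B, T_W_A, T_W_Cg, T_Cg_O):
    """T_B_O = inv(T_A_B) @ inv(T_W_A) @ T_W_Cg @ T_Cg_O, where
    T_A_B: arm base in carrier frame (joint calibration),
    T_W_A: carrier in world (odometry),
    T_W_Cg: global camera in world (camera calibration),
    T_Cg_O: object in global camera (PnP)."""
    return (np.linalg.inv(T_A_B) @ np.linalg.inv(T_W_A)
            @ T_W_Cg @ T_Cg_O)

# Toy values (assumed): base 0.2 m above carrier center, carrier 1 m along world x,
# camera at (2, 0, 2) in world, object 1 m along the camera axis.
T_B_O = object_pose_in_base(trans(0, 0, 0.2), trans(1, 0, 0),
                            trans(2, 0, 2), trans(0, 0, 1))
```

With identity rotations the translations simply accumulate, which makes the chain easy to verify by hand.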
- 8. The composite robot positioning method based on global and local distributed vision cooperation according to claim 7, wherein the field-of-view handover of the vision system in step S3 comprises: the robot controller drives the mechanical arm to move rapidly along the planned path to the predicted pose; the end camera collects images and preprocesses them with adaptive threshold segmentation and distortion correction algorithms; feature points in the image are identified, and the accurate pose T_Cl^O of the target relative to the end camera Cl is calculated to realize pose calculation; the closed-loop control stage is then entered, defining the error e = s − s* between the current image features s and the expected features s*, and calculating the camera velocity twist v_c = −λ J^+ e with the image Jacobian matrix J, where λ is the control gain and J^+ is the pseudo-inverse of the Jacobian matrix, thereby realizing visual servo control; and the mechanical arm performs real-time fine adjustment according to v_c until the error e converges below a preset threshold, completing the fine approach.
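The closed-loop law of claim 8 (e = s − s*, v_c = −λ J^+ e) is standard image-based visual servoing. A minimal sketch, where the feature vector and the stand-in Jacobian are assumed toy values:

```python
import numpy as np

def ibvs_step(s, s_star, J, lam=0.5):
    """One image-based visual servoing step: error e = s - s*, camera velocity
    twist v_c = -lam * pinv(J) @ e, as in the control law of claim 8."""
    e = s - s_star
    v_c = -lam * np.linalg.pinv(J) @ e
    return v_c, e

# Toy example: 2 point features (4 image coordinates), 6-DOF camera twist.
s      = np.array([0.12, 0.05, -0.08, 0.02])  # current features
s_star = np.zeros(4)                          # desired features
J      = np.hstack([np.eye(4), np.zeros((4, 2))])  # stand-in image Jacobian
v_c, e = ibvs_step(s, s_star, J)
```

In a real system J would be the interaction matrix built from feature depths and intrinsics, and v_c would be mapped through the hand-eye transform to joint velocities.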
- 9. The composite robot positioning method based on global and local distributed vision cooperation according to claim 8, applied to dual-arm co-positioning, comprising: controlling the two local cameras of the two arms to observe visual features at different positions of the target object; mapping the observation data of the two local cameras into the same world coordinate system and constructing a multi-view joint optimization equation; and resolving the six-degree-of-freedom pose of the target object by minimizing the total reprojection error, so as to eliminate the ambiguity of pose resolution under a single local view.
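One way to realize the multi-view joint solution of claim 9 is to stack both cameras' feature correspondences into a single rigid-fit problem. The sketch below uses a Kabsch/SVD least-squares fit on 3D points rather than the patent's (unspecified) reprojection-error minimizer, and all data are toy values:

```python
import numpy as np

def fit_object_pose(model_pts, world_pts):
    """Least-squares rigid fit (Kabsch/SVD) of object-model points to their
    world-frame observations. Stacking both arms' local-camera observations
    into one problem resolves the ambiguity of a single partial view."""
    P = np.asarray(model_pts, float)
    Q = np.asarray(world_pts, float)
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # reflection guard
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Each camera sees a different subset of object features (assumed toy data).
model_cam1 = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
model_cam2 = [[0, 0, 1], [1, 1, 0]]
true_t = np.array([0.5, -0.2, 0.1])  # ground-truth pose: pure translation
world_cam1 = [p + true_t for p in np.array(model_cam1, float)]
world_cam2 = [p + true_t for p in np.array(model_cam2, float)]
R, t = fit_object_pose(model_cam1 + model_cam2, world_cam1 + world_cam2)
```

Neither camera's subset alone spans the object, but the joint fit recovers the full six-degree-of-freedom pose.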
- 10. A composite robot high-precision positioning system based on global and local distributed vision cooperation, characterized in that the composite robot positioning method based on global and local distributed vision cooperation according to any one of claims 1-9 is applied to a composite robot system comprising a moving chassis of a carrying mechanism, a mechanical arm, a local camera arranged at the end of the mechanical arm, and a global camera arranged in the external environment, wherein: the global camera is fixedly mounted on a bracket at the periphery of the operation area, and its field of view covers the parking area of the carrying mechanism and the area where the target object is located; and the mechanical arm is arranged in the operation area and mounted on the carrying mechanism trolley, with the local camera arranged at the end of the mechanical arm, the carrying mechanism being an intelligent automated guided vehicle (AGV).
Description
Composite robot positioning method and system based on global and local distributed vision cooperation
Technical Field
The invention relates to the technical field of visual positioning, in particular to a composite robot positioning method and system based on global and local distributed vision cooperation.
Background
In smart manufacturing and flexible logistics scenarios, the compound robot (mobile manipulator) is widely used because it combines the mobility of a transport mechanism (e.g., an AGV) with the manipulation capability of a robotic arm. Visual positioning is the core technology for achieving accurate grasping or docking. Existing compound-robot visual positioning methods fall mainly into two schemes, single end-of-arm vision (eye-in-hand) and single global vision (eye-to-hand), both of which show obvious shortcomings in practice. Single end-of-arm vision mounts a camera only at the end of the robotic arm. This approach is accurate but has a limited field of view (FOV). Since the navigational positioning accuracy of a transport-mechanism (e.g., AGV) chassis is typically on the order of centimeters, the target object tends to fall outside the end camera's view when the docking position deviates. This not only greatly reduces operating efficiency but also increases the risk of the arm colliding with the surrounding environment. Single global vision positions the target using only an external fixed camera or a global camera on the transport-mechanism body. It offers wide field coverage and can quickly lock onto the target, but as the camera-to-target distance grows, pixel density drops and the positioning precision (usually at the centimeter level) cannot meet the requirements of precise assembly or grasping (usually at the millimeter or even sub-millimeter level).
In addition, existing visual positioning systems are often fragmented: global vision is used only for navigating the carrying mechanism (e.g., an AGV), and local vision only for servoing the mechanical arm. Lacking a mechanism of "field-of-view decoupling and data cooperation", the global large-field advantage cannot transition smoothly into the local high-precision advantage, so the system is prone to losing the target or stalling in the hand-off between the large-range approach stage and the small-range finishing stage. There is therefore a need for a distributed visual positioning method that coordinates an external global large field of view with end-of-arm local high precision, resolving the contradiction between field-of-view range and positioning precision and enabling efficient, collision-free, high-precision operation of the compound robot.
Disclosure of Invention
The invention overcomes the defects of the prior art and provides a composite robot positioning method and system based on global and local distributed vision cooperation, which improves operation cycle time and positioning robustness, and reconciles a large field of view with high precision.
In order to achieve this purpose, the technical scheme adopted by the invention is a composite robot positioning method based on global and local distributed vision cooperation, comprising the following steps: S1, acquiring operation scene information and pose information of a target object and a carrying mechanism in a global coordinate system with a global camera, and constructing an environment safety envelope; S2, inversely calculating a predicted pose of the mechanical arm from the pose information and the field-of-view parameters of the local camera, wherein the predicted pose comprises the end pose of the mechanical arm that places the target object within the optimal imaging field of view of the local camera; S3, planning a collision-free path through a pre-established global and local distributed vision co-location model, based on the predicted pose and the environment safety envelope acquired by the global camera, and controlling the mechanical arm to move along the collision-free path to the predicted pose to complete the field-of-view handover of the vision system; and S4, acquiring an image of the target object with the local camera, and guiding the end of the mechanical arm with a visual servo control method to correct position deviation in real time until the end is positioned at the target operation position. In a preferred embodiment of the present invention, the predicted-pose solving in step S2 comprises: establishing a local observation coordinate system with the pose of the target object as its origin; calculating the ideal pose of the end camera (the local camera) in the observation coordinate system according to the optimal observation distance d_opt and the optical axis direction