
CN-121973188-A - Method and system for pose sensing and adaptive grasping of a capped centrifuge tube

CN121973188A

Abstract

The invention discloses a method and system for pose sensing and adaptive grasping of a capped centrifuge tube. The method comprises: imaging the transparent capped centrifuge tube; fusing 2.5D texture images with 3D point cloud data to extract and decouple joint features of the tube body and the cap; computing accurate pose information of the centrifuge tube; planning the robot arm's grasp points and gripper opening according to the computed pose information; controlling a hybrid-driven adaptive gripper to execute the grasping action; and performing multi-dimensional stability detection and adjustment during and after grasping. By applying interference-resistant 2.5D-3D fused high-precision pose sensing to the transparent capped centrifuge tube and performing intelligent planning and adaptive grasp control based on the sensing results, the invention achieves stable and accurate automated handling of targets with complex optical characteristics, markedly improves grasp success rate and efficiency, offers high reliability, and shows potential for extension to other transparent-object grasping scenarios.

Inventors

  • LI YIYANG
  • ZHANG ZHUANGZHUANG

Assignees

  • Ningbo Xingboyuan Intelligent Technology Co., Ltd. (宁波兴博元智能技术有限公司)

Dates

Publication Date
2026-05-05
Application Date
2026-01-13

Claims (10)

  1. A method for pose sensing and adaptive grasping of a capped centrifuge tube, characterized by comprising the following steps: S1, 3D-vision-based pose sensing: imaging the transparent capped centrifuge tube, and fusing 2.5D texture images with 3D point cloud data to extract and decouple joint features of the tube body and the cap, so as to compute accurate pose information of the centrifuge tube; and S2, adaptive grasping based on the pose information: planning the robot arm's grasp points and gripper opening according to the computed pose information, controlling the hybrid-driven adaptive gripper to execute the grasping action, and performing multi-dimensional stability detection and adjustment during and after grasping.
  2. The method for pose sensing and adaptive grasping of a capped centrifuge tube according to claim 1, wherein step S1 specifically comprises: S11, acquiring interference-resistant multi-modal image data by projecting a frequency-phase jointly modulated structured-light pattern combined with polarization filtering, the multi-modal image data comprising a 3D point cloud acquired by a binocular stereo camera and a 2.5D texture image acquired by a high-resolution 2D camera; S12, adaptively adjusting the light intensity ratio of a multi-partition light source according to the material and reflection characteristics of the centrifuge tube, and applying pulsed illumination to the joint between the tube body and the cap to enhance edge contrast; S13, performing cross-modal registration and fusion of the 2.5D texture image and the 3D point cloud, and filling, layer by layer, the point cloud regions missing because of the transparent tube body; S14, extracting edge gradient features of the 2.5D image and normal vector features of the 3D point cloud through a multi-scale fusion module to generate joint features, and separating the joint features into a tube body feature channel and a cap feature channel using a feature-decoupling attention head; and S15, processing the separated features with a model combining a reference feature library and an improved iterative closest point algorithm to rapidly compute the pose of the centrifuge tube.
  3. The method for pose sensing and adaptive grasping of a capped centrifuge tube according to claim 1, wherein in step S12 the optimal intensity ratio of the dome light to the bar light is adaptively output, by a light source mode mapping function trained with machine learning, according to the ambient light intensity and the target material parameters, and the pulsed illumination has a duty cycle of 50% and a pulse frequency of 100 Hz.
  4. The method for pose sensing and adaptive grasping of a capped centrifuge tube according to claim 1, wherein in step S14 the extracted feature points are screened by an improved Soft-NMS algorithm, which suppresses low-confidence feature points with a Gaussian decay function and eliminates feature points in occluded regions determined by point cloud depth differences.
  5. The method for pose sensing and adaptive grasping of a capped centrifuge tube according to claim 1, wherein in step S15 the improved iterative closest point algorithm adopts a two-stage strategy of key feature point pre-matching followed by local fine iteration, and the computation is accelerated by a heterogeneous computing architecture consisting of a CPU, a GPU and an FPGA.
  6. The method for pose sensing and adaptive grasping of a capped centrifuge tube according to claim 1, wherein step S2 specifically comprises: S21, planning the optimal grasp point with a particle swarm optimization algorithm based on the computed pose information and point cloud data, and calculating the target joint angles of the robot arm and the initial opening of the gripper; S22, controlling the robot arm to move to the target position, grasping with an adaptive gripper driven by a hybrid of pneumatic flexible actuation and servo motor actuation, and, through pressure closed-loop control, adjusting the grasp force in real time and compensating for pose offsets; and S23, evaluating grasp stability by fusing 3D vision, pressure and acceleration sensor data, and triggering, in grades according to the evaluation result, pressure fine-tuning, pose correction or a second grasp.
  7. The method for pose sensing and adaptive grasping of a capped centrifuge tube according to claim 1, wherein in step S21 the initial gripper opening D_grip is calculated by the formula D_grip = d + 2s + Δd, where d is the tube diameter fitted from the point cloud, s is a safety margin dynamically adjusted according to the tube wall thickness, and Δd is a pressure compensation amount calculated from the target grasp force and the gripper compliance coefficient.
  8. A capped centrifuge tube pose sensing and adaptive grasping system for implementing the method of any of claims 1-7, comprising: a visual acquisition module for acquiring multi-modal data comprising a 2.5D texture image and a 3D point cloud, and comprising a binocular stereo camera, a structured light source, a polarization filtering component, a high-resolution 2D camera and a multi-partition light source system; a data processing module for fusing the multi-modal data, extracting features and resolving the pose, adopting a heterogeneous CPU, GPU and FPGA architecture; a grasp execution module for executing grasping actions, comprising a multi-degree-of-freedom robot arm and an adaptive gripper mounted at the end of the arm, the gripper being integrated with pressure sensors; and a control module for receiving the pose resolution result, planning grasp trajectories and parameters, and controlling the grasp execution module to complete adaptive grasping and stability adjustment.
  9. The capped centrifuge tube pose sensing and adaptive grasping system according to claim 8, wherein the visual acquisition module further comprises an FPGA time-synchronized trigger module for synchronously controlling the acquisition actions of the structured light source, the binocular stereo camera and the high-resolution 2D camera, and the multi-partition light source system comprises a dome light source and a bar light source.
  10. The capped centrifuge tube pose sensing and adaptive grasping system according to claim 8, wherein the adaptive gripper has a three-finger symmetrical structure and adopts a hybrid drive combining pneumatic actuation with servo motor actuation, and the control module comprises a main controller for decision-making and planning and a motion controller for real-time motion control and pressure closed-loop control.
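The improved Soft-NMS screening described in claim 4 can be illustrated with a minimal sketch. The patent publishes no concrete formula, so the Gaussian decay term, the `sigma`, `score_thresh` and `occlusion_depth_gap` parameters, and the depth-difference occlusion test below are illustrative assumptions that adapt the standard Soft-NMS idea from boxes to point features.

```python
import numpy as np

def soft_nms_gaussian(points, scores, depths, sigma=4.0,
                      score_thresh=0.05, occlusion_depth_gap=0.01):
    """Screen 2D feature points with Gaussian-decay Soft-NMS, plus an
    occlusion test on point-cloud depth differences (all parameter
    values here are hypothetical, not taken from the patent)."""
    points = np.asarray(points, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    depths = np.asarray(depths, dtype=float)
    keep, active = [], np.ones(len(scores), dtype=bool)
    while active.any():
        i = int(np.argmax(np.where(active, scores, -np.inf)))
        if scores[i] < score_thresh:        # everything left is low confidence
            break
        keep.append(i)
        active[i] = False
        idx = np.where(active)[0]
        d2 = ((points[idx] - points[i]) ** 2).sum(axis=1)
        # Gaussian decay: the closer a remaining point is to the kept one,
        # the more its confidence is suppressed (soft, not hard, removal)
        scores[idx] *= 1.0 - np.exp(-d2 / (2.0 * sigma ** 2))
        # assumed occlusion rule: points much deeper than the kept point
        # are treated as lying behind an occluding surface and dropped
        active[idx[depths[idx] - depths[i] > occlusion_depth_gap]] = False
    return keep
```

With three candidates at (0, 0), (1, 0) and (50, 50), the second is decayed below threshold by its proximity to the first, while the distant third survives.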
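The two-stage improved ICP of claim 5 (key feature point pre-matching, then local fine iteration) can be sketched as follows. This is a minimal single-threaded illustration: the Kabsch alignment on already-corresponded keypoints and the brute-force nearest-neighbour search stand in for the patent's unpublished matching details and its CPU/GPU/FPGA acceleration.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def two_stage_icp(src, dst, key_src, key_dst, iters=20):
    """Stage 1: coarse pose from pre-matched keypoint pairs.
    Stage 2: local fine ICP with brute-force nearest neighbours."""
    R, t = kabsch(key_src, key_dst)             # stage 1: pre-matching
    cur = src @ R.T + t
    for _ in range(iters):                      # stage 2: fine iterations
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        nn = dst[np.argmin(d2, axis=1)]         # closest dst point per src point
        dR, dt = kabsch(cur, nn)
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt              # accumulate the refinement
    return R, t
```

Given a synthetic point cloud rotated by 30° about the z-axis and translated, the sketch recovers the rigid transform to numerical precision because the keypoint pre-match already lands in the convergence basin of the fine stage.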
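The opening formula of claim 7 (D_grip = tube diameter d + 2 × safety margin s + pressure compensation Δd) is simple enough to sketch directly. The patent does not publish how s or Δd are derived, so the half-wall-thickness margin rule and the linear force-compliance model below are illustrative assumptions.

```python
def gripper_opening_mm(tube_diameter, wall_thickness, grasp_force_n,
                       compliance_mm_per_n):
    """Initial gripper opening D_grip = d + 2*s + delta_d (claim 7).
    The margin rule s = 0.5 * wall_thickness and the linear compensation
    delta_d = force * compliance are assumptions, not patent values."""
    s = 0.5 * wall_thickness                       # assumed dynamic safety margin
    delta_d = grasp_force_n * compliance_mm_per_n  # assumed pressure compensation
    return tube_diameter + 2.0 * s + delta_d
```

For example, a 15 mm tube with a 1.2 mm wall, a 4 N target grasp force and a 0.25 mm/N gripper compliance gives an initial opening of 15 + 1.2 + 1.0 = 17.2 mm under these assumptions.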

Description

Method and system for pose sensing and adaptive grasping of a capped centrifuge tube

Technical Field

The invention relates to the technical field of automated grasping, and in particular to a method and system for pose sensing and adaptive grasping of a capped centrifuge tube.

Background

In biomedical and chemical experiments, centrifuge tubes are commonly used laboratory instruments; capped centrifuge tubes in particular are widely used because they effectively prevent sample contamination and leakage. With the development of laboratory automation, demand for automatic grasping of centrifuge tubes is increasing. However, the nature of transparent capped centrifuge tubes presents a number of challenges for automated grasping. Traditional grasping methods mostly rely on mechanical positioning or 2D visual positioning: mechanical positioning is of low precision and can hardly adapt to small changes in tube pose, while 2D visual positioning cannot acquire depth information, and transparent centrifuge tubes are easily disturbed by ambient light, making pose recognition inaccurate. In addition, transparent capped centrifuge tubes of different specifications differ in shape and size, and the grasping mechanism of traditional devices is fixed and lacks adaptive capability, so stable grasping across specifications is difficult to achieve. Existing 3D visual perception methods for transparent objects also have shortcomings.
For example, some methods adopt structured-light 3D imaging, but structured light is readily refracted and reflected by transparent materials, so the point cloud data is incomplete or noisy; other methods estimate pose with deep learning, but require large amounts of labelled data, and their robustness against complex backgrounds needs improvement. There is currently no effective solution to these problems.

Disclosure of Invention

Aiming at the technical problems in the related art, the invention provides a method and system for pose sensing and adaptive grasping of a capped centrifuge tube, which overcomes the prior-art defects of inaccurate pose sensing of transparent capped centrifuge tubes and the lack of adaptability in grasping mechanisms, realizes accurate pose recognition and stable grasping of transparent capped centrifuge tubes, and improves the efficiency and reliability of automated laboratory operation.
In order to achieve this technical purpose, the technical scheme of the invention is realized as follows. A method for pose sensing and adaptive grasping of a capped centrifuge tube comprises the following steps: S1, 3D-vision-based pose sensing: imaging the transparent capped centrifuge tube, and fusing 2.5D texture images with 3D point cloud data to extract and decouple joint features of the tube body and the cap, so as to compute accurate pose information of the centrifuge tube; and S2, adaptive grasping based on the pose information: planning the robot arm's grasp points and gripper opening according to the computed pose information, controlling the hybrid-driven adaptive gripper to execute the grasping action, and performing multi-dimensional stability detection and adjustment during and after grasping. Further, step S1 specifically comprises: S11, acquiring interference-resistant multi-modal image data by projecting a frequency-phase jointly modulated structured-light pattern combined with polarization filtering, the multi-modal image data comprising a 3D point cloud acquired by a binocular stereo camera and a 2.5D texture image acquired by a high-resolution 2D camera; S12, adaptively adjusting the light intensity ratio of a multi-partition light source according to the material and reflection characteristics of the centrifuge tube, and applying pulsed illumination to the joint between the tube body and the cap to enhance edge contrast; S13, performing cross-modal registration and fusion of the 2.5D texture image and the 3D point cloud, and filling, layer by layer, the point cloud regions missing because of the transparent tube body; S14, extracting edge gradient features of the 2.5D image and normal vector features of the 3D point cloud through a multi-scale fusion module to generate joint features, and separating the joint features into a tube body feature channel and a cap feature channel using a feature-decoupling attention head; and S15, processing the separated features using a mod