
CN-122008180-A - Stacked workpiece grabbing method, system and storage medium based on machine vision

CN122008180A

Abstract

The invention discloses a machine vision-based method, system, and storage medium for grabbing stacked workpieces. It combines a 3D camera, a mechanical arm, an end effector, and improved target detection and point cloud registration algorithms. By acquiring and processing images and point cloud data, it achieves high-precision target identification and positioning, optimizes the grabbing path, and improves efficiency, effectively addressing the low detection accuracy, insufficient point cloud registration accuracy, and low grabbing efficiency of traditional grabbing systems in complex stacked-workpiece scenes. The system aims to improve industrial production efficiency, reduce labor costs, and achieve efficient, intelligent automatic grabbing.
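The point-cloud processing summarized above (voxel downsampling and outlier denoising, as in step 4 of the claimed method) can be sketched with NumPy. This is an illustrative stand-in, not the patent's implementation: `voxel_downsample` and `remove_outliers` are hypothetical helper names, and production systems would typically use a dedicated point-cloud library for these operations.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = inv.max() + 1
    sums = np.zeros((n_voxels, 3))
    np.add.at(sums, inv, points)              # accumulate points per voxel
    counts = np.bincount(inv, minlength=n_voxels).reshape(-1, 1)
    return sums / counts                       # per-voxel centroids

def remove_outliers(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbours exceeds mean + std_ratio * std over the cloud.
    O(N^2) brute-force distances -- fine for a small demo cloud."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)      # column 0 is the self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]
```

Downsampling first reduces the cost of the subsequent neighbour search, which is why the claim orders the two operations this way.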

Inventors

  • Chen Yaxin
  • Li Zhengkai
  • Yan Peizheng
  • Xiang Lianhai

Assignees

  • 安徽信息工程学院 (Anhui Institute of Information Technology)

Dates

Publication Date
2026-05-12
Application Date
2025-11-13

Claims (10)

  1. A machine vision-based stacked workpiece grabbing method, comprising the steps of: Step 1, initializing hardware equipment; Step 2, data acquisition: acquiring color images, depth images, and point cloud data of the grabbing scene; Step 3, target detection: processing the acquired color image with an improved YOLOv target detection algorithm to identify the 2D pixel ROI bounding box and the stacking placement state of the target object to be grabbed; Step 4, point cloud preprocessing and cropping: performing voxel downsampling and outlier denoising on the acquired point cloud data, screening out the complete topmost target to be grabbed, and then extracting the target point cloud within the target's 3D ROI bounding box; Step 5, point cloud registration: applying a coarse registration algorithm to the target point cloud and a preset template point cloud to increase the overlap between the two point clouds; Step 6, grabbing: calculating the real physical position of the target object in the world coordinate system from the pose estimation result, combined with the camera calibration parameters and the rigid-body transformation.
  2. The machine vision-based stacked workpiece grabbing method as claimed in claim 1, wherein in step 3 the improved YOLOv target detection algorithm is constructed as follows: reducing the model parameters by using a ShuffleNetv backbone network; adopting a GSConv Slim-Neck structure for feature fusion and optimized feature extraction; embedding a SimAM attention mechanism; and using the SIoU loss function, whose terms comprise: the intersection-over-union (IoU) ratio; the distance between the centers of the predicted box and the ground-truth box; the diagonal length associated with the predicted and ground-truth boxes; and a balance coefficient.
  3. The machine vision-based stacked workpiece grabbing method as claimed in claim 2, wherein the ShuffleNetv backbone network splits the feature map into two parts: one part is retained directly, and the other part is concatenated with the original feature map after a convolution operation.
  4. The machine vision-based stacked workpiece grabbing method of claim 3, wherein embedding the SimAM attention mechanism simplifies a conventional attention mechanism, improving computational efficiency and model performance while reducing computational complexity and memory footprint; SimAM modules are inserted into the model for feature extraction.
  5. The machine vision-based stacked workpiece grabbing method as claimed in claim 4, wherein the feature fusion optimization organically combines GSConv modules with the Slim-Neck structure to design a lightweight and efficient network architecture for model training, so that the model remains lightweight while performing well; model performance is evaluated on a validation set, and the structure and parameters are adjusted according to this feedback to optimize the feature extraction capability.
  6. The machine vision-based stacked workpiece grabbing method of any one of claims 1-5, wherein step 5 employs a PCA coarse registration algorithm whose matching formula involves the two point clouds to be registered and their mean (centroid) values; fine registration then inputs the coarsely registered point cloud pair into an improved K-TEASER++ point cloud fine registration algorithm, which prunes the feature correspondences to compute the rotation-translation matrix of the target point cloud and obtain the final pose estimate of the target object, expressed in terms of a rotation matrix, a translation vector, and the corresponding point pairs.
  7. The machine vision-based stacked workpiece grabbing method of claim 6, wherein in step 6 the rotation-translation matrix is converted into a quaternion and the posture of the end effector is adjusted accordingly, the quaternion comprising a scalar part, a unit vector along the rotation axis, and the rotation angle.
  8. The machine vision-based stacked workpiece grabbing method as claimed in claim 7, wherein in step 6 the mechanical arm is controlled to execute the grabbing action along the planned motion trajectory, completing the workpiece grabbing task; after grabbing, the mechanical arm carries the workpiece to a designated placement area, completing the whole process.
  9. A machine vision-based stacked workpiece grabbing system, characterized in that: a target detection module realizes workpiece identification through the improved YOLOv algorithm; a point cloud registration module combines PCA coarse registration with TEASER++ fine registration to realize high-precision pose estimation of the workpiece; a grabbing execution module, comprising a six-axis mechanical arm and an end suction-cup effector, completes flexible grabbing of the workpiece; and a system control module integrates the above modules to realize intelligent grabbing of stacked workpieces; the system performs the machine vision-based stacked workpiece grabbing method as defined in any one of claims 1-8.
  10. A storage medium, being a computer-readable storage medium storing software program code for performing the machine vision-based stacked workpiece grabbing method according to any one of claims 1-8.
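Claim 6's PCA coarse registration can be illustrated as aligning the principal axes of the two point clouds. The sketch below is a generic PCA alignment in NumPy, not the patent's exact formulation; `pca_coarse_align` is a hypothetical helper name, and the eigenvector sign and ordering ambiguities inherent to PCA are resolved only crudely here.

```python
import numpy as np

def pca_coarse_align(source, target):
    """Coarse registration by principal-axis alignment: returns (R, t) such
    that R @ p + t maps source points roughly onto the target cloud.
    Minimal sketch -- assumes well-separated eigenvalues; a real pipeline
    would disambiguate axis flips and follow up with fine registration."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    # Eigenvectors of each covariance matrix give the principal axes.
    _, vec_s = np.linalg.eigh(np.cov((source - mu_s).T))
    _, vec_t = np.linalg.eigh(np.cov((target - mu_t).T))
    R = vec_t @ vec_s.T
    # Force a proper rotation (det = +1), flipping one axis if needed.
    if np.linalg.det(R) < 0:
        vec_s[:, 0] *= -1
        R = vec_t @ vec_s.T
    t = mu_t - R @ mu_s
    return R, t
```

A coarse estimate like this only needs to bring the clouds into rough overlap; the fine registration stage (TEASER++-style in the claim) then refines the pose from pruned correspondences.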

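Claim 7's conversion of the rotation matrix into a quaternion (scalar part cos(θ/2), vector part n·sin(θ/2) for rotation angle θ about unit axis n) can be sketched as follows. This is the standard trace-based conversion, not code from the patent, and `rotmat_to_quat` is a hypothetical name.

```python
import numpy as np

def rotmat_to_quat(R):
    """Convert a 3x3 rotation matrix to a unit quaternion (w, x, y, z) via
    the axis-angle relation w = cos(theta/2), (x, y, z) = n * sin(theta/2).
    Minimal sketch: assumes the rotation angle is not close to pi, where
    this trace-based branch becomes ill-conditioned."""
    # trace(R) = 1 + 2*cos(theta), so 1 + trace = 4*cos^2(theta/2).
    w = 0.5 * np.sqrt(max(0.0, 1.0 + np.trace(R)))
    # The antisymmetric part R - R^T encodes 2*sin(theta) * n.
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    return np.array([w, x, y, z])
```

Robust libraries switch among four branches depending on which of w, x, y, z is largest, so that the division above never involves a near-zero denominator.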
Description

Stacked workpiece grabbing method, system and storage medium based on machine vision

Technical Field

The invention relates to the technical field of industrial robot automation, and in particular to an intelligent grabbing system and method for stacked workpieces based on multi-modal sensing, which is particularly suitable for the rapid identification, accurate positioning, and stable grabbing of multiple stacked workpieces in complex industrial scenes.

Background

With the rapid development of industrial automation, robotics is increasingly used in manufacturing, especially in high-precision grabbing and intelligent production. For example, publication CN116330245B, "An automatic hand-eye calibration device and method for industrial vision robots", discloses an automatic hand-eye calibration device for an industrial vision robot comprising a bottom plate, a carriage fixed on top of the bottom plate, a pulley rotatably connected to the top of the carriage's inner wall, an electromagnet fixed to the bottom of the inner wall, a sliding block slidably sleeved on the inner wall, a panel fixed on top of the sliding block, and a bracket and a fixing frame fixed on either side of the panel's top.
By sliding the block along the carriage's inner wall and opposing the magnetic forces of the electromagnet and the block, a control component can adjust the magnitude and direction of the electromagnet's force; because the panel is connected to the sliding block, the device can move via the electromagnet while grabbing and scanning a workpiece, enabling large-range movement while carrying the workpiece. The intelligent control core of the above publication relies on a vision system for recognition and judgment, so machine vision is one of its key technologies: through image acquisition and processing it provides the industrial robot with workpiece positioning and recognition information, thereby realizing automatic grabbing. In practical industrial environments, however, workpieces are usually present in complex, unordered stacked states, which places higher demands on the accuracy and efficiency of the robotic grabbing system. The core technologies of such a system are target detection, point cloud registration, and mechanical arm control, which are critical to workpiece recognition, positioning, and grabbing. In recent years, deep learning algorithms for target detection and pose estimation have significantly improved system performance, but complex stacking scenes still pose challenges such as insufficient detection accuracy, low registration efficiency, and insufficient flexibility of mechanical arm control. Disadvantages of the prior art: Disadvantage 1, insufficient target detection accuracy in complex stacking scenes: in complex, unordered stacked-workpiece scenes, existing target detection algorithms struggle to accurately identify the target object.
Disadvantage 2, the accuracy and speed of pose estimation algorithms cannot meet industrial requirements: existing pose estimation algorithms often face the double challenge of accuracy and efficiency when processing complex point cloud data. Disadvantage 3, insufficient flexibility and real-time performance of mechanical arm control: in actual grabbing, the arm's trajectory planning and end-effector design often cannot adapt well to complex scenes.

Disclosure of Invention

Through improved target detection and point cloud registration algorithms, the invention realizes high-precision target identification and positioning by acquiring and processing image and point cloud data, optimizes the grabbing path, improves efficiency, and effectively solves the low detection accuracy, insufficient point cloud registration accuracy, and low grabbing efficiency of traditional grabbing systems in complex stacked-workpiece scenes. To achieve the above purpose, the technical scheme adopted by the invention is a machine vision-based stacked workpiece grabbing method comprising the following steps: Step 1, initializing hardware equipment; Step 2, data acquisition: acquiring color images, depth images, and point cloud data of the grabbing scene; Step 3, target detection: processing the acquired color image with an improved YOLOv target detection algorithm to identify the 2D pixel ROI bounding box and the stacking placement state