Search

CN-121973246-A - Self-adaptive grabbing system and method for industrial part mixed-flow assembly by using human-shaped robot

CN121973246ACN 121973246 ACN121973246 ACN 121973246ACN-121973246-A

Abstract

The invention provides a self-adaptive grabbing system and a self-adaptive grabbing method for an industrial part mixed-flow assembly human-shaped robot, which belong to the technical field of artificial intelligence, precise instruments and part assembly logistics, wherein the system comprises: the system comprises a material property analysis module and a visual feedback adjustment module which are operated on a built-in computer of the robot, an image acquisition module fixed on the head of the robot, and a self-adaptive grabbing module positioned on the hand of the robot, wherein the image acquisition module and the self-adaptive grabbing module are connected with the built-in computer of the robot. And acquiring an optical image of the target part on the assembly line, classifying material properties and evaluating vulnerability of various heterogeneous target parts, and dynamically generating robot hand control parameters matched with the material. The invention solves the problems of hidden cracking, scratching or deformation of the parts caused by single control parameters when the traditional manipulator faces to industrial parts with fragile, flexible and rigid mixing on an assembly line, and realizes flexible self-adaptive grabbing with low cost and high precision.

Inventors

  • PENG HAIBO
  • Ni Zhengwenxi
  • QU XIANGLIN
  • YANG YUAN
  • MA QIANZHI
  • SHEN KAIYAN
  • PAN TING
  • LIU CHONG
  • YIN ZHIWEI
  • ZHU XUANYI

Assignees

  • 云南开放大学(云南国防工业职业技术学院)

Dates

Publication Date
20260505
Application Date
20260403

Claims (6)

  1. 1. The utility model provides an industry spare part mixed flow assembly is with human-shaped robot self-adaptation system of snatching, the system is applied to on the assembly line of different material parts, and it includes material attribute analysis module and visual feedback adjustment module, the image acquisition module of fixing at the robot head on the built-in computer of robot to and be located the self-adaptation of robot hand snatch the module, just the image acquisition module with the self-adaptation snatchs the module and all links to each other with the built-in computer of robot, its characterized in that: the image acquisition module is used for acquiring images of target parts on an assembly line and comprises an optical image sensing camera, a buffer register and a data transmission interface, wherein: The optical image sensing camera is used for shooting a target part image on an assembly line within the view angle range of the optical image sensing camera and converting an optical signal of the target part image into a first digital image; The buffer register is used for storing the first digital image; the data transmission interface is used for transmitting the first digital image registered by the buffer register to the robot built-in computer; the robot built-in computer is used for executing core algorithm calculation and processing a first digital image from the image acquisition module, and the first digital image is displayed on the robot built-in computer: The material property analysis module is used for carrying out deep feature analysis on the first digital image of the image acquisition module and quantitatively outputting the material category and the vulnerability risk index of the target part, and comprises an image preprocessing unit, a feature fusion unit and a vulnerability assessment unit, wherein: The image preprocessing unit is used for carrying out standardized processing on a first digital image of the image acquisition module, and the operation of the image preprocessing unit comprises Gaussian filtering denoising and histogram equalization enhancement so as to eliminate uneven illumination on an assembly line and interference of sensor background noise and form a standardized image I raw ; The feature fusion unit is used for extracting texture features and color features from the standardized image I raw , calculating the material category and the material confidence score of the target part by adopting a weighted fusion algorithm, wherein the texture features are used for distinguishing the target part with smooth surface and the target part with rough surface; The vulnerability assessment unit is used for combining the material confidence score output by the feature fusion unit and a preset hardness coefficient of the industrial part, and calculating and outputting a vulnerability risk index representing the vulnerability degree of the target part through weighted summation ; The visual feedback adjustment module is used for monitoring the state of a target part in real time in the grabbing execution process and triggering closed loop correction when an abnormality is detected, and comprises a sliding detection unit and a dynamic supplementing unit, wherein: the sliding detection unit is used for calculating an optical flow displacement vector of the target part area in the grabbing and lifting stage, and judging a sliding state when the vertical displacement component exceeds a preset threshold value; the dynamic supplementing unit is used for dynamically calculating the angle increment according to the sliding speed after receiving the sliding state signal and generating a correction control signal of the secondary clamping; The self-adaptive grabbing module is used for receiving the vulnerable risk index from the vulnerable evaluation unit in the material attribute analysis module and the secondary clamping correction control signal in the dynamic supplementing unit of the visual feedback adjustment module, and driving the robot hand to execute grabbing action according to the correction control signal, and comprises a speed planning unit and an angle limiting unit, wherein: The speed planning unit is used for calculating the target speed of the robot hand joint closure according to the vulnerable risk index, generating a low-speed instruction for the target parts with high vulnerable risk so as to reduce contact impact, and generating a standard speed instruction for the rigid target parts with low vulnerable risk; The angle limiting unit is used for calculating the maximum allowable closing angle of the hand joint of the robot according to the vulnerable risk index, generating an angle control signal allowing a larger interference stroke for the flexible target part, and generating an angle control signal limiting the extrusion stroke for the fragile target part, so that flexible self-adaptive grabbing of parts made of different materials is realized.
  2. 2. The adaptive grabbing system for mixed-flow industrial part assembling and human-shaped robot according to claim 1, wherein the image acquisition and optical signal conversion of the optical image sensor camera of the image acquisition module fixed on the robot head is performed according to the following formula: In the formula, The optical signal of the image of the target part acquired for the sensor, In order for the coefficients of the transform to be present, For the digital image output after the discrete cosine transform, i.e., the first digital image, N is the block size of the discrete cosine transform.
  3. 3. The adaptive grabbing system for mixed-flow industrial part assembling according to claim 1, wherein the material property analysis module on the built-in computer of the robot: The image preprocessing unit is used for carrying out standardization processing on the first digital image transmitted by the image acquisition module to form a standardized image I raw , and the processing process comprises the following steps: Gaussian filtering denoising: wherein I raw is a standardized image, G (u, v, be) is a Gaussian kernel function, sigma is a standard deviation, u is a horizontal distance offset, v is a vertical distance offset, and k is the size of the Gaussian kernel; histogram equalization enhancement processing: Wherein L is the number of gray levels, N j is the number of pixels of gray level j, and M×N is the image size; the feature fusion unit firstly extracts texture features and color features from the standardized image I raw , and calculates a comprehensive confidence score M score (I) of the I-th class material of the target part by adopting a weighted fusion algorithm, wherein the specific calculation formula is as follows: Wherein T texture (i) represents texture feature matching degree of the i-th class material, C color (i) represents color feature matching degree of the i-th class material, alpha 1 is texture feature weight coefficient, beta 1 is color feature weight coefficient, and Representing an identification strategy taking texture features as a main and color features as an auxiliary; The vulnerability assessment unit is used for calculating the vulnerability risk index of the current target part based on the comprehensive confidence score output by the feature fusion unit through the following weighted summation function : Wherein N represents the total number of categories in a preset material library of the system, M score (i) is the comprehensive confidence score of the target part belonging to the ith material, K handness (i) is the physical hardness coefficient of the predefined ith material, the coefficient is obtained through a table look-up method, the value range is [0,1], and the closer the value is 1, the weaker the material is; By solving the above formula, the image information having only visual features is quantized into a risk value representing the physical attribute.
  4. 4. The adaptive grabbing system for mixed-flow industrial part assembling of human-shaped robot of claim 1, wherein the visual feedback adjustment module on the built-in computer of the robot: The slip detection unit calculates an optical flow displacement vector of the target part area in real time during the lifting operation of the robot, and adopts the following logic judgment function Identifying whether unexpected relative displacement of the target part occurs: In the formula, An optical flow vertical component representing the centroid of a target part area in an image in pixels/frames; H hand is the current lifting height of the robot hand, h th is the safety height threshold for starting the slip detection, and the threshold ensures that detection logic is activated only after a target part is completely separated from a supporting surface, so that erroneous judgment caused by ground background interference is prevented; The dynamic supplementing unit immediately activates a secondary clamping strategy when receiving a signal that State slip is True, calculates a real-time increment delta theta adj of the joint angle and updates a target instruction theta new , and the specific calculation formula is as follows: In the formula, k p is a dynamic proportional gain coefficient and is used for mapping the sliding optical flow speed into an angle correction amount, the faster the sliding, the more remarkable the clamping force is increased, θ current is a joint angle feedback value at the current moment, and θ new is a target angle instruction sent to a robot hand driver after correction, so that the sliding target parts are dynamically locked.
  5. 5. The adaptive gripping system for industrial part mixed flow assembly of human-shaped robots according to claim 1, wherein the adaptive gripping modules of the robot hand: the speed planning unit calculates a target speed V cmd of robot hand joint closure based on the vulnerable risk index, and the specific calculation formula is as follows: Wherein V base represents the preset hand joint closing speed of the robot hand when the robot hand is in idle load or grabbing rigid target parts, For the vulnerable risk index output by the material attribute analysis module, gamma 1 is a speed attenuation coefficient, the value range is (0, 1), and the coefficient determines the sensitivity of the speed to decrease along with the increase of the risk; the angle limiting unit calculates the maximum allowable closing angle theta limit of the hand joint of the robot according to the vulnerable risk index, and the specific calculation formula is as follows: Wherein θ contact represents an initial angle of the robot finger when the robot finger contacts the surface of the target part, Δθ max represents a maximum mechanical interference stroke angle allowed by the system, and λ 1 is an extrusion protection coefficient for reducing the allowed interference stroke when grabbing the high-risk-of-vulnerability index target part so as to limit the maximum clamping force applied to the surface of the target part.
  6. 6. An adaptive grabbing method based on the adaptive grabbing system of the human-shaped robot for mixed-flow assembly of industrial parts according to claim 1, which is characterized by comprising the following steps: S1, initializing a system and acquiring an image, loading a preset physical hardness coefficient check table of an industrial part by a built-in computer of a robot, and acquiring a real-time state of an image acquisition module through a data transmission interface; an optical image sensing camera of the image acquisition module acquires an image of a target part on an assembly line from the view angle range of the optical image sensing camera and converts an optical signal of the image into a first digital image, an image preprocessing unit of a material attribute analysis module of a robot built-in computer receives the first digital image, performs Gaussian filtering denoising and histogram equalization processing on the first digital image, eliminates uneven illumination on the assembly line and sensor background noise interference, and generates a standardized image I raw ; s2, multidimensional feature fusion and vulnerability assessment, wherein a material attribute analysis module of a robot built-in computer receives the standardized image I raw generated in the step S1, and carries out the following deep feature analysis: Firstly, respectively extracting texture features and color features in a standardized image by a feature fusion unit, and calculating comprehensive confidence scores M score (i) of objects belonging to different material categories by using a weighted fusion algorithm; Then, based on the confidence score and the preset physical hardness coefficient loaded in the step S1, calculating a vulnerability risk index representing the physical vulnerability degree of the part by weighted summation by a vulnerability assessment unit ; S3, generating an adaptive grabbing strategy, and receiving the vulnerable risk index output in the step S2 by an adaptive grabbing module of the robot hand The method comprises the steps of generating a bottom layer control instruction adapting to the current material by dynamic mapping, wherein a speed planning unit calculates the target speed of closing a hand joint of a robot according to a vulnerable risk index, and generates a low-speed instruction for a high-vulnerable risk target part to reduce contact impact, and generates a standard speed instruction for a rigid target part with low vulnerable risk; s4, grabbing execution and slipping monitoring, wherein a robot hand driver executes a speed control signal and an angle control signal generated in the step S3 to grab and lift the target part, meanwhile, a slipping detection unit of a vision feedback adjustment module of a robot built-in computer processes continuous image frames in the grabbing lifting process in real time, an optical flow vertical component of the centroid of the target part area is calculated, and when the component exceeds a preset slipping judgment threshold value and the lifting height meets a safety threshold value, an object is judged to be in a slipping state, and a slipping state signal is generated; S5, dynamic closed loop supplement correction, wherein when the step S4 judges that the sliding state is achieved, the dynamic supplement unit of the visual feedback adjustment module receives the sliding state signal, and immediately activates the following secondary clamping strategy: according to the detected sliding speed, dynamically calculating the real-time increment of the joint angle of the robot, adding the increment to the current target instruction, generating a corrected control signal, sending the corrected control signal to the self-adaptive grabbing module, driving the robot hand to perform secondary clamping until the sliding state is released, and completing the self-adaptive grabbing closed loop.

Description

Self-adaptive grabbing system and method for industrial part mixed-flow assembly by using human-shaped robot Technical Field The invention relates to the technical field of artificial intelligence, precise instruments and component assembly logistics, in particular to a human-shaped robot self-adaptive grabbing system for mixed flow assembly of industrial parts. Background Industrial parts assembly lines gradually evolve from single special machines to mixed-flow assembly lines, and robots often need to continuously handle industrial parts with very different physical properties, such as extremely fragile optical lens modules, deformable antistatic buffer foam, and rigid metal shells, on the same assembly line. How to realize safe and stable grabbing of vulnerable objects or flexible objects is always a key technical difficulty in the field of robot operation. In flexible adaptive gripping based on mechanical structures, the prior art has mainly adapted to the shape of objects by improving the physical configuration of the end effector. For example, CN121179399a discloses an "adaptive gripping mechanical arm", in which, by setting an adaptive gripping module and an inflatable gripping air bag member, the flexible deformation of the air bag is used to wrap the contours of objects with different shapes and sizes, so as to increase the contact area to improve the gripping stability. However, such solutions mainly solve the problem of "geometry fitting", and lack the cognitive ability to "physical material properties" of the object. Because the control system cannot predict the vulnerable degree of the object, when the extremely vulnerable object is faced, the object is damaged due to uncontrollable pressure even if the airbag is inflated or mechanically clamped, and the introduction of the airbag and the hydraulic system obviously increases the volume and the maintenance cost of hardware, so that the control system is difficult to popularize in the lightweight cooperative robot. In the aspect of object material identification auxiliary grabbing, the prior art relies on contact type physical interaction sensing. For example, the object texture recognition method based on the knock sound simulation and the deep learning (publication number CN120105882 a) proposed by Shanghai university of transportation (Shanghai university of transportation) uses a deep learning model to compare a simulated sound with a real sound by knocking an object and collecting a sound signal to recognize a texture. Although the method has higher accuracy in material classification, the method essentially belongs to a 'contact' active detection means. For high-risk fragile objects such as thin-wall glass, precision instruments or chemical reagents, the action of "knocking" itself has a destructive risk. In addition, the method needs to perform knocking, analysis and grabbing, so that the efficiency of continuous operation is greatly reduced, and whether the object slides down or not and is subjected to active remediation cannot be monitored in real time in the grabbing process. In addition, the traditional vision-guided grabbing technology is limited to geometric calculation of the position and the gesture of the object, and understanding of semantic information such as texture and hardness of the object is generally lacking. When facing objects with similar appearance but different materials, the robot often adopts the same set of kinematic parameters, and the phenomenon of sliding caused by too large clamping force to pinch the flat object or insufficient clamping force is very easy to occur. While the introduction of expensive six-dimensional force/touch sensors can solve the force control problem, the high hardware cost limits the large-scale industrial application thereof. In summary, when the prior art solves the problem of capturing objects with complex materials, the following common bottlenecks still exist: The mechanical adaptability scheme lacks knowledge of material vulnerability, and is easy to cause 'blind' extrusion; Contact recognition schemes present a risk of damaging the object and are inefficient; traditional vision schemes only focus on geometric locations and ignore physical attributes, and force control hardware is costly. Therefore, there is an urgent need in the art for a self-adaptive grabbing system capable of performing non-contact material sensing by using low-cost monocular vision, dynamically planning grabbing strategies by using a vulnerability assessment algorithm, and having a function of detecting and correcting a closed loop by sliding down, so as to realize low-cost, high-safety and flexible operation on multi-material objects. Disclosure of Invention According to the invention, through a deep learning material analysis algorithm and a robot hand control algorithm, a robot hand control strategy is dynamically generated by utilizing vulnerability assessment, and the method comprises a closing speed, a l