
CN-121973198-A - Industrial robot control method and system based on visual hand tracking


Abstract

The invention provides an industrial robot control method and system based on visual hand tracking. A camera module collects consecutive images and transmits them to an image processing module, which applies several image-processing operations, identifies the hand position using key-point detection and model inference, calculates hand-position difference data such as the horizontal movement distance and the height movement distance, and sends the data to a data analysis module. The data analysis module derives the actual distance and pose from the hand-position difference data and sends the analyzed distance and pose data to the industrial robot to control it.

Inventors

  • LIU CHUN
  • XIA MING

Assignees

  • Wuhan University of Technology (武汉理工大学)

Dates

Publication Date
2026-05-05
Application Date
2026-01-23

Claims (10)

  1. An industrial robot control method based on visual hand tracking, characterized by comprising the following steps: Step S1: the camera module collects two consecutive images and transmits them to the image processing module through a signal line. Step S2: the image processing module processes the images acquired by the camera module and transmits hand-position difference data to the data analysis module through a signal line. Step S3: the data analysis module derives the actual distance and pose from the hand-position difference data and transmits the distance and pose actually required for the motion to the industrial robot for control.
  2. The industrial robot control method based on visual hand tracking according to claim 1, wherein step S2 comprises the following specific contents: Step S21: the image processing module applies adaptive threshold processing to the image transmitted by the camera module to obtain an adaptively binarized image, Image. Step S22: the image processing module performs successive dilation on Image to obtain Image1. Step S23: the image processing module performs successive erosion on Image1 to obtain Image2. Step S24: the image processing module performs connected-domain processing on Image2 to obtain Image3. Step S25: the image processing module combines Image1, Image2, and Image3 to perform key-point detection and model inference, and identifies the hand position. Step S26: after processing the two consecutive images, the image processing module compares the hand positions: the horizontal movement distance of the hand is calculated from the difference of the hand-position pixel coordinates in the two images, and the height movement distance is calculated from the change of the hand area between the two images. Step S27: the image processing module transmits the hand-position difference data, such as the horizontal movement distance and the height movement distance, to the data analysis module through the signal line.
  3. The industrial robot control method based on visual hand tracking according to claim 2, wherein step S3 comprises the following steps: Step S31: the data analysis module applies Kalman filtering to the distance and pose analysis to smooth the data and reduce noise interference. Step S32: the data analysis module sends the optimal state data smoothed by the Kalman filter to the industrial robot to control it.
  4. The industrial robot control method based on visual hand tracking according to claim 2 or 3, wherein the key-point detection in step S25 adopts the MediaPipe Hands model and outputs the coordinates of 21 hand joint points, and the model inference is based on a lightweight CNN whose input is the fused features of Image, Image1, Image2, and Image3; robustness is improved by fusing the multi-stage image processing results.
  5. The method according to claim 4, wherein the horizontal movement distance of the hand in step S26 is calculated by the following formula: Δd = √((x₂ − x₁)² + (y₂ − y₁)²), where (x₁, y₁) are the hand-position center-point coordinates of the previous image, (x₂, y₂) are the hand-position center-point coordinates of the latter image, and the difference Δd is the horizontal movement distance of the hand position.
  6. The method according to claim 5, wherein the height movement distance of the hand in step S26 is calculated by converting the measured hand area to its screen-center equivalent and relating the area change to depth, Δz = k·(1/√S₂ − 1/√S₁), wherein (Cₓ, C_y) are the coordinates of the center point of the screen, (x₂, y₂) is the deviation of the hand from the screen center, f is the equivalent focal length of the camera, fₓ is the equivalent focal length of the camera in the horizontal (X-axis) direction, f_y is the equivalent focal length of the camera in the vertical (Y-axis) direction, S_p is the actual pixel area of the hand in the camera image, S is the center-equivalent pixel area after conversion, S_p′ is the previous hand area, ΔS is the change of the hand area, S₁ and S₂ are the center-equivalent pixel areas of the former and latter images, Δz is the height movement distance, and k is the calibration parameter.
  7. The method according to claim 6, wherein the distance and pose analysis in step S31 is performed by inverse kinematics, specifically comprising the following steps: for the industrial robot, the coordinates of the wrist center point P_w are back-deduced from the hand pose (X, Y, Z) by P_w = P_e − d₆·Z, wherein P_e is the hand position of the end effector, d₆ is the length from the wrist center to the end effector (the length of the 6th link), and Z is the Z-axis unit vector of the end-effector coordinate system expressed in the base coordinate system; the joint angles are then solved from the arm-end-to-wrist transformation matrix and the hand pose matrix, wherein θᵢ is the rotation angle of the ith joint, d₂ is the offset of the 2nd joint, d₁ is the base height, aᵢ is the length of the ith link, and Rᵢ,ⱼ is the element in the ith row and jth column of the wrist-to-end transformation matrix.
  8. The industrial robot control method based on visual hand tracking according to claim 7, wherein the Kalman filtering in step S32 comprises the following steps: define xₖ as the 6 joint angles at time k; according to the optimal estimated state x̂ₖ₋₁ of the industrial robot joints at the previous moment, predict the prior estimate of the joint-angle state at the current moment, x̂ₖ⁻ = A·x̂ₖ₋₁, and the prior estimate of the covariance, Pₖ⁻ = A·Pₖ₋₁·Aᵀ + Q, where A is the state transition matrix and Q is the process noise covariance reflecting the model uncertainty of each joint; calculate the Kalman gain matrix at the current moment, Kₖ = Pₖ⁻·Hᵀ·(H·Pₖ⁻·Hᵀ + R)⁻¹, where H is the observation matrix, which determines how much trust is given to the current observation, i.e. the angles directly calculated by inverse kinematics; since the angles can be measured directly, H = I₆, and R is the observation noise covariance determined by the visual positioning error; correct the prior estimated state with the observed value zₖ of the current joint angles to obtain the filtered and smoothed optimal state estimate at the current moment, x̂ₖ = x̂ₖ⁻ + Kₖ·(zₖ − H·x̂ₖ⁻); and update the covariance corresponding to the optimal estimated state at the current moment, Pₖ = (I − Kₖ·H)·Pₖ⁻, in preparation for the filtering calculation at the next moment, where Pₖ is the state-estimation covariance matrix with a specified initial value P₀.
  9. The industrial robot control method based on visual hand tracking according to claim 8, wherein the optimal state data in step S32 is the optimal state estimate x̂ₖ after Kalman-filter smoothing.
  10. An industrial robot control system based on visual hand tracking, comprising: a camera module for collecting images, connected to the image processing module through a signal line; an image processing module for processing the images acquired by the camera module, connected to the data analysis module through a signal line; and a data analysis module for analyzing the actual distance and pose from the hand-position difference data in the images, connected to the industrial robot through a signal line.
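The Kalman filtering of the joint angles described in claim 8 can be sketched as a single predict-correct cycle; the function name, calling convention, and NumPy representation are illustrative assumptions, not from the patent:

```python
import numpy as np

def kalman_smooth_joints(z_k, x_prev, P_prev, Q, R):
    """One predict-correct cycle over the 6 joint angles.

    Per the claim, A (state transition) and H (observation) are the
    6x6 identity: the angles are observed directly via inverse
    kinematics, and the motion model holds the previous estimate.
    """
    I6 = np.eye(6)
    A, H = I6, I6
    # Predict: prior state estimate and prior covariance.
    x_prior = A @ x_prev
    P_prior = A @ P_prev @ A.T + Q
    # Kalman gain: how much to trust the observed angles z_k.
    K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
    # Correct with the observation, then update the covariance.
    x_post = x_prior + K @ (z_k - H @ x_prior)
    P_post = (I6 - K @ H) @ P_prior
    return x_post, P_post
```

With a small observation noise R relative to the prior covariance, the smoothed estimate stays close to the inverse-kinematics observation; a larger R damps jitter from visual positioning error.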

Description

Industrial robot control method and system based on visual hand tracking

Technical Field

The invention relates to the technical field of industrial robot control, in particular to an industrial robot control method and system based on visual hand tracking.

Background

In the transition of industrial automation toward intelligence, the interaction mode of the traditional industrial robot faces fundamental challenges. Existing industrial robots depend on preset programs or a teach pendant for control; they can meet basic operation requirements such as loading and unloading, carrying, welding, spraying, and polishing, but the flexibility and adaptability of human-machine cooperation remain limited. Operators must either write preset programs, which requires some programming skill, or indirectly control the motion of the robot arm through a complex interface; visual, natural interaction is not possible, so operating an industrial robot presents some difficulty for workers. To overcome these limitations, it is desirable to develop a real-time industrial robot control system that enables natural interaction.

Disclosure of Invention

The technical problem to be solved by the invention is to provide an industrial robot control method and system based on visual hand tracking that address the existing problems and realize natural interaction through visual hand tracking.
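The three-step flow (capture, process, analyze) can be sketched as a minimal control cycle. Each "frame" is assumed to be pre-reduced to a hand center and hand area in pixels; the class names, interfaces, and calibration constants below are illustrative assumptions, not from the patent:

```python
class ImageProcessor:
    """Step S2: compare hand positions between two consecutive frames."""

    def hand_delta(self, frame_a, frame_b):
        # Each frame is assumed reduced to (center_x, center_y, area) in pixels.
        (x1, y1, area1), (x2, y2, area2) = frame_a, frame_b
        # Horizontal motion from the pixel difference of the hand centers,
        # height cue from the change of the hand area (step S26).
        return (x2 - x1, y2 - y1, area2 - area1)

class DataAnalyzer:
    """Step S3: convert pixel differences to actual distances."""

    def __init__(self, px_to_mm=0.5, area_to_mm=0.01):
        self.px_to_mm = px_to_mm      # assumed horizontal calibration
        self.area_to_mm = area_to_mm  # assumed height calibration (k)

    def analyze(self, delta):
        dx_px, dy_px, darea = delta
        return (dx_px * self.px_to_mm,
                dy_px * self.px_to_mm,
                darea * self.area_to_mm)

def control_cycle(frame_a, frame_b):
    """Run S2 and S3 on the two consecutive frames collected in S1;
    the returned triple would be sent to the robot."""
    delta = ImageProcessor().hand_delta(frame_a, frame_b)
    return DataAnalyzer().analyze(delta)
```

Under the assumed calibration, a 40-pixel horizontal move with a 500-pixel² area growth between frames maps to roughly 20 mm of horizontal and 5 mm of height motion.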
Embodiments of the present application are implemented as follows. The embodiment of the application provides an industrial robot control method based on visual hand tracking, comprising the following steps: Step S1: the camera module collects two consecutive images and transmits them to the image processing module through a signal line. Step S2: the image processing module processes the images acquired by the camera module and transmits hand-position difference data to the data analysis module through a signal line. Step S3: the data analysis module derives the actual distance and pose from the hand-position difference data and transmits the distance and pose actually required for the motion to the industrial robot for control.

In some alternative embodiments, step S2 includes the following specific details: Step S21: the image processing module applies adaptive threshold processing to the image transmitted by the camera module to obtain an adaptively binarized image, Image. Step S22: the image processing module performs successive dilation on Image to obtain Image1. Step S23: the image processing module performs successive erosion on Image1 to obtain Image2. Step S24: the image processing module performs connected-domain processing on Image2 to obtain Image3. Step S25: the image processing module combines Image1, Image2, and Image3 to perform key-point detection and model inference, and identifies the hand position. Step S26: after processing the two consecutive images, the image processing module compares the hand positions: the horizontal movement distance of the hand is calculated from the difference of the hand-position pixel coordinates in the two images, and the height movement distance is calculated from the change of the hand area between the two images. Step S27: the image processing module transmits the hand-position difference data, such as the horizontal movement distance and the height movement distance, to the data analysis module through the signal line.

In some alternative embodiments, step S3 includes the following: Step S31: the data analysis module applies Kalman filtering to the distance and pose analysis to smooth the data and reduce noise interference. Step S32: the data analysis module sends the optimal state data smoothed by the Kalman filter to the industrial robot to control it.

In some optional embodiments, the key-point detection in step S25 adopts the MediaPipe Hands model and outputs the coordinates of 21 hand joint points; the model inference is based on a lightweight CNN whose input is the fused features of Image, Image1, Image2, and Image3, and robustness is improved by fusing the multi-stage image processing results.

In some alternative embodiments, the horizontal movement distance of the hand described in step S26 is calculated by the following formula: Δd = √((x₂ − x₁)² + (y₂ − y₁)²), where (x₁, y₁) are the hand-position center-point coordinates of the previous image, (x₂, y₂) are those of the latter image, and the difference Δd is the horizontal movement distance of the hand position.

In some alternative embodiments, the height movement distance of the hand described in step S26 is calculated by the following equation, wherein (Cₓ, C_y) are the coordinates of the center point of the screen and (x₂, y₂) is the deviation
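The two motion formulas of claims 5 and 6 can be sketched as follows. The horizontal formula follows claim 5 directly; for the height formula, the perspective correction producing the center-equivalent areas is omitted, and the 1/√S depth relation is an assumption consistent with the area-based depth cue the claim describes:

```python
import math

def horizontal_move_px(center_prev, center_next):
    """Claim 5: Euclidean pixel distance between the hand-center
    points of the two consecutive images."""
    (x1, y1), (x2, y2) = center_prev, center_next
    return math.hypot(x2 - x1, y2 - y1)

def height_move(s1, s2, k=1.0):
    """Claim 6, simplified: the projected hand area scales as 1/z^2,
    so depth is proportional to 1/sqrt(S) and
    Delta z = k * (1/sqrt(S2) - 1/sqrt(S1)).
    s1, s2 are the center-equivalent areas of the former and latter
    images; k is the calibration parameter from the claim."""
    return k * (1.0 / math.sqrt(s2) - 1.0 / math.sqrt(s1))
```

Note the sign convention: a shrinking hand area (s2 < s1) yields a positive Δz, i.e. the hand moving away from the camera.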