
CN-122008255-A - Touch and vision fused smart hand force and position hybrid control method, robot and medium

CN122008255A

Abstract

The application provides a dexterous hand force/position hybrid control method with haptic and vision fusion, a robot, and a medium, and belongs to the technical field of robot control. The method comprises: determining a desired grabbing position qd and a desired grabbing force Fd for a target to be operated; calculating a position difference eq from the actual touch position qs of the dexterous hand and the desired grabbing position qd, and an error force eF from the actual grabbing force Fs and the desired grabbing force Fd; determining a basic PID control signal up based on the position difference eq, and a correction signal uF based on the error force eF; acquiring visual characteristic data at the actual touch position qs and calculating from it a force feedback weight wF for the correction signal uF; determining a hybrid control signal uh from the force feedback weight wF, the correction signal uF and the basic PID control signal up; and controlling the dexterous hand to grab the target to be operated based on the hybrid control signal uh. The application improves the control precision of the dexterous hand.

Inventors

  • CHEN LIYANG
  • XU ZHENYANG

Assignees

  • 悟通感控(北京)科技有限公司
  • 悟通感控(山东)科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-04-14

Claims (10)

  1. A dexterous hand force/position hybrid control method with haptic and vision fusion, characterized by comprising the following steps: determining a target to be operated, and determining a desired grabbing position qd and a desired grabbing force Fd based on the target to be operated; controlling the dexterous hand to move to the desired grabbing position qd, and acquiring the actual touch position qs and the actual grabbing force Fs of the dexterous hand in real time; calculating a position difference eq based on the actual touch position qs and the desired grabbing position qd, and calculating an error force eF based on the actual grabbing force Fs and the desired grabbing force Fd; determining a basic PID control signal up based on the position difference eq, and determining a correction signal uF based on the error force eF; acquiring visual characteristic data of the actual touch position qs, and calculating a force feedback weight wF of the correction signal uF based on the visual characteristic data; determining a hybrid control signal uh based on the force feedback weight wF, the correction signal uF and the basic PID control signal up; and controlling the dexterous hand to grab the target to be operated based on the hybrid control signal uh.
  2. The method according to claim 1, wherein said determining a hybrid control signal uh based on the force feedback weight wF, the correction signal uF and the basic PID control signal up comprises: multiplying the force feedback weight wF and the correction signal uF to obtain a weighted correction signal; and superposing the weighted correction signal and the basic PID control signal up to obtain the hybrid control signal uh.
  3. The method according to claim 1, wherein said calculating a force feedback weight wF of the correction signal uF based on the visual characteristic data comprises: calculating a basic weight w0 of the correction signal uF based on the visual characteristic data; calculating a corresponding weight coefficient w1 according to the error force eF and a preset contact force threshold; and calculating the force feedback weight wF according to the basic weight w0 and the weight coefficient w1.
  4. The method of claim 3, wherein the visual characteristic data includes a surface deformation rate, a local slip speed, and a contact distance of the dexterous hand grabbing location, and wherein calculating the basic weight w0 of the correction signal uF based on the visual characteristic data includes: carrying out weighted summation of the surface deformation rate, the local slip speed and the contact distance in the current visual cycle to obtain a visual state parameter value et; calculating a weight correction value for the current visual cycle according to the response sensitivity quantized value be of the dexterous hand, the visual feedback gain ke and the visual state parameter value et; and calculating the basic weight w0 for the current cycle based on the weight value w0' of the previous visual cycle and the weight correction value of the current visual cycle.
  5. The method of claim 1, wherein determining the correction signal uF based on the error force eF comprises: acquiring a preset force feedback coefficient Kf, and multiplying the force feedback coefficient Kf and the error force eF to obtain the correction signal uF.
  6. The method of claim 1, wherein the acquiring in real time the actual touch position qs and the actual grabbing force Fs of the dexterous hand comprises: acquiring a plurality of original grabbing forces within a window period; calculating a corresponding rate of change for each original grabbing force within the window period; constraining each rate of change through a preset limiting function, and correcting each original grabbing force according to the constrained rate of change to obtain a corresponding corrected grabbing force; and carrying out moving-average filtering on the corrected grabbing forces corresponding to the original grabbing forces in the window period to obtain the actual grabbing force.
  7. The method according to any one of claims 1 to 6, wherein the acquiring in real time the actual touch position qs and the actual grabbing force Fs of the dexterous hand comprises: acquiring an original touch position qs1 of the dexterous hand; acquiring a position error value qs0 determined based on vision correction; and calculating the actual touch position qs according to the position error value qs0 and the original touch position qs1.
  8. The method according to any one of claims 1 to 6, further comprising: stopping updating the hybrid control signal uh when the absolute value of the position difference eq is smaller than a position difference threshold and the absolute value of the error force eF is also smaller than an error force threshold, and keeping grabbing the target to be operated according to the most recently updated hybrid control signal uh; otherwise, continuing to execute the determining of the basic PID control signal up based on the position difference eq and the determining of the correction signal uF based on the error force eF.
  9. A computer-readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 8.
  10. A robot, characterized by comprising: a robot body; a mechanical arm; a vision module, configured on the robot body or the mechanical arm and used for shooting an environment image; one or more processors for controlling the mechanical arm to perform article grabbing; and a memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to perform the method of any one of claims 1 to 8.
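The force pre-filtering of claim 6 (rate-limiting each sample, then moving-average filtering the window) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation; the function name, the sampling interval `dt`, and the saturation form of the limiting function are assumptions.

```python
# Hypothetical sketch of the grab-force filtering in claim 6: raw force samples
# in a window are rate-limited, then averaged to obtain the actual grabbing force Fs.
def filter_grab_force(window, max_rate, dt=0.01):
    """Rate-limit successive samples, then return their moving average.

    window   -- list of raw grab-force samples, oldest first (assumed layout)
    max_rate -- maximum allowed force change per second (preset limiting function)
    dt       -- sampling interval in seconds (assumed value)
    """
    corrected = [window[0]]
    for f in window[1:]:
        rate = (f - corrected[-1]) / dt
        # Constrain the rate of change with a saturation (limiting) function.
        rate = max(-max_rate, min(max_rate, rate))
        # Correct the sample according to the constrained rate of change.
        corrected.append(corrected[-1] + rate * dt)
    # Moving-average filtering over the corrected samples gives Fs.
    return sum(corrected) / len(corrected)
```

On a steady signal the filter is transparent; on a jump of 10 N between two samples with `max_rate=100` N/s and `dt=0.01` s, the second sample is clipped to a 1 N step before averaging.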

Description

Touch and vision fused smart hand force and position hybrid control method, robot and medium

Technical Field

The application relates to the technical field of robot control, and in particular to a dexterous hand force/position hybrid control method with haptic and vision fusion, a robot, and a medium.

Background

The dexterous hand is a core execution component for fine grabbing and complex manipulation, and its control performance directly determines the precision, safety and reliability of task completion. It has great application potential in fields with high precision requirements such as industrial precision assembly (e.g. single-handed operation of mobile phones), medical surgery (e.g. endoscope robots) and home service. As application scenarios continue to expand, dexterous hands need to handle diverse objects to be grabbed (e.g. fragile and irregularly shaped items) and complex interaction environments; in the field of medical surgery in particular, they can assist surgeons or independently perform accurate and efficient operations. This places a dual requirement on the control strategy: high-precision position tracking and highly compliant contact-force adjustment.
Existing finger motion control methods for dexterous hands fall mainly into two types. The first is position control, which takes the finger joint angle or fingertip position as the control target. It can track the joint angle or fingertip position accurately, but easily damages the object or lets it slip on contact, and lacks grabbing stability and compliance. The second is force control, which takes the contact force as the core control target and keeps it within a preset range by adjusting the driving force in real time. It can guarantee the safety and compliance of the contact process, but cannot track a motion trajectory accurately, making complex dexterous operations difficult. Some improved methods attempt to combine position control and force control, as disclosed in patents CN120516743A and CN114474073A. Although their control accuracy is better than single-dimension control, the control process remains complex, force adjustment and position adjustment interfere with each other (coupling interference), both accuracies are hard to guarantee under dynamic working conditions, and the methods do not meet the control efficiency and accuracy requirements of increasingly complex environments.

Disclosure of Invention

The application aims to provide a novel dexterous hand force/position hybrid control method with haptic and vision fusion, a robot, and a medium, so as to solve at least one of the above technical problems.
In order to achieve the above object, in a first aspect, the present application provides a dexterous hand force/position hybrid control method with haptic and vision fusion, the method comprising: determining a target to be operated, and determining a desired grabbing position qd and a desired grabbing force Fd based on the target to be operated; controlling the dexterous hand to move to the desired grabbing position qd, and acquiring the actual touch position qs and the actual grabbing force Fs of the dexterous hand in real time; calculating a position difference eq based on the actual touch position qs and the desired grabbing position qd, and calculating an error force eF based on the actual grabbing force Fs and the desired grabbing force Fd; determining a basic PID control signal up based on the position difference eq, and determining a correction signal uF based on the error force eF; acquiring visual characteristic data of the actual touch position qs, and calculating a force feedback weight wF of the correction signal uF based on the visual characteristic data; determining a hybrid control signal uh based on the force feedback weight wF, the correction signal uF and the basic PID control signal up; and controlling the dexterous hand to grab the target to be operated based on the hybrid control signal uh.

Optionally, the determining of the hybrid control signal uh based on the force feedback weight wF, the correction signal uF and the basic PID control signal up includes: multiplying the force feedback weight wF and the correction signal uF to obtain a weighted correction signal; and superposing the weighted correction signal and the basic PID control signal up to obtain the hybrid control signal uh.
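The core control law described above can be sketched as follows. This is a minimal illustrative sketch: the class name, the PID gains, the force feedback coefficient Kf and the control period `dt` are assumed values, not taken from the patent; the signal names (eq, eF, up, uF, uh, wF) follow the text.

```python
# Minimal sketch of the hybrid force/position control law: a basic PID signal
# on the position difference, plus a weighted force correction signal.
class HybridController:
    def __init__(self, kp, ki, kd, kf, dt=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd   # PID gains (assumed)
        self.kf = kf                             # force feedback coefficient Kf
        self.dt = dt                             # control period (assumed)
        self.integral = 0.0
        self.prev_eq = 0.0

    def step(self, qd, qs, fd, fs, wf):
        eq = qd - qs                             # position difference eq
        ef = fd - fs                             # error force eF
        self.integral += eq * self.dt
        deriv = (eq - self.prev_eq) / self.dt
        self.prev_eq = eq
        # Basic PID control signal up from the position difference.
        up = self.kp * eq + self.ki * self.integral + self.kd * deriv
        # Correction signal uF = Kf * eF.
        uf = self.kf * ef
        # Hybrid control signal uh: weighted correction superposed on up.
        return up + wf * uf
```

A small force feedback weight wf lets position tracking dominate (e.g. in free motion), while a weight near 1 lets the force correction dominate once contact is established.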
Optionally, the calculating of the force feedback weight wF of the correction signal uF based on the visual characteristic data includes: calculating a basic weight w0 of the correction signal uF based on the visual characteristic data; calculating a corresponding weight coefficient w1 according to the error force eF and a preset contact force threshold; and calculating the force feedback weight wF according to the basic weight w0 and the weight coefficient w1.
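The weight computation of the two preceding paragraphs (and of claims 3 and 4) can be sketched as follows. The patent does not disclose the exact formulas, so the feature weights, the additive update rule, the [0, 1] clamp, the saturating form of w1 and the product combination wF = w0 * w1 are all assumptions for illustration only.

```python
# Hedged sketch of the force feedback weight wF from the visual characteristic data.
def visual_state(deform_rate, slip_speed, contact_dist,
                 a1=0.4, a2=0.4, a3=0.2):
    # Weighted summation of surface deformation rate, local slip speed and
    # contact distance in the current visual cycle -> state parameter et.
    # The coefficients a1..a3 are assumed values.
    return a1 * deform_rate + a2 * slip_speed + a3 * contact_dist

def update_base_weight(w0_prev, et, ke, be):
    # Weight correction from visual feedback gain ke, response sensitivity
    # quantized value be and state et; the additive form and the [0, 1]
    # clamp are assumptions.
    dw = ke * (et - be)
    return min(1.0, max(0.0, w0_prev + dw))

def force_feedback_weight(w0, ef, f_thresh):
    # Weight coefficient w1 from the error force eF and a preset contact
    # force threshold; here w1 saturates at 1 once |eF| reaches the
    # threshold (assumed form). Combination wF = w0 * w1 is also assumed.
    w1 = min(1.0, abs(ef) / f_thresh)
    return w0 * w1
```

Under this sketch, large visible deformation or slip raises the basic weight w0 across cycles, and a large force error raises w1, so the force correction term gains influence exactly when contact conditions demand it.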