
CN-122008256-A - Robot vision positioning calibration method and system based on dexterous-hand tactile sensing

CN 122008256 A

Abstract

The application provides a robot vision positioning calibration method and system based on dexterous-hand tactile sensing, and belongs to the technical field of robot vision positioning. The method comprises: acquiring an initial task scene model constructed from environmental image data; determining a target calibration point and its initial coordinates based on the initial task scene model; determining a target force point of a dexterous hand and an action calibration strategy based on the target calibration point and the initial coordinates, and controlling the dexterous hand to execute the action calibration strategy; when the dexterous hand touches the target calibration point through the target force point, determining the actual coordinates of the target calibration point based on the real-time motion coordinates of the dexterous hand; calculating an error correction value from the initial coordinates and the actual coordinates; and correcting the initial task scene model with the error correction value to obtain a true task scene model. The application can calibrate through tactile feedback, thereby improving the visual positioning accuracy of the robot.

Inventors

  • CHEN LIYANG
  • YU YAYUN

Assignees

  • 悟通感控(北京)科技有限公司
  • 悟通感控(山东)科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-04-14

Claims (10)

  1. A robot vision positioning calibration method based on dexterous-hand tactile sensing, characterized by comprising the following steps: acquiring an initial task scene model constructed from environmental image data, and determining a target calibration point and the initial coordinates of the target calibration point based on the initial task scene model; determining a target force point of a dexterous hand and an action calibration strategy based on the target calibration point and the initial coordinates, and controlling the dexterous hand to execute the action calibration strategy; when the dexterous hand touches the target calibration point through the target force point, determining the actual coordinates of the target calibration point based on the real-time motion coordinates of the dexterous hand; and calculating an error correction value from the initial coordinates and the actual coordinates, and correcting the initial task scene model with the error correction value to obtain a true task scene model.
  2. The robot vision positioning calibration method based on dexterous-hand tactile sensing according to claim 1, wherein determining the target force point of the dexterous hand and the action calibration strategy based on the target calibration point and the initial coordinates, and controlling the dexterous hand to execute the action calibration strategy, comprises: determining a target force point of the dexterous hand and a preliminary calibration action based on the target calibration point and the initial coordinates; controlling the dexterous hand to execute the preliminary calibration action, and determining an actual force point when the dexterous hand feeds back a contact signal; determining a position offset of the dexterous hand based on the target force point and the actual force point; and determining a target calibration action of the dexterous hand based on the position offset, and controlling the dexterous hand to execute the target calibration action so that the dexterous hand touches the target calibration point through the target force point.
  3. The robot vision positioning calibration method based on dexterous-hand tactile sensing according to claim 1, further comprising, before determining the actual coordinates of the target calibration point based on the real-time motion coordinates of the dexterous hand when the dexterous hand touches the target calibration point through the target force point: obtaining tactile characteristic information of the target calibration point; when the target force point of the dexterous hand feeds back a contact signal, determining a verification action strategy corresponding to the tactile characteristic information, and controlling the dexterous hand to execute the verification action strategy; and acquiring the tactile signal fed back by the dexterous hand while the verification action strategy is executed, and determining that the dexterous hand is in contact with the target calibration point through the target force point when the tactile signal accords with the tactile characteristic information.
  4. The robot vision positioning calibration method based on dexterous-hand tactile sensing according to claim 3, wherein the tactile characteristic information includes that, when the dexterous hand touches the target calibration point, the force point and the force area do not change with the touch direction of the dexterous hand; determining the verification action strategy corresponding to the tactile characteristic information when the target force point of the dexterous hand feeds back a contact signal, and controlling the dexterous hand to execute the verification action strategy, includes: when the target force point of the dexterous hand feeds back a contact signal, determining that the verification action strategy corresponding to the tactile characteristic information is to change the fingertip touch direction of the dexterous hand, and controlling the dexterous hand to change the fingertip touch direction a plurality of times; and acquiring the tactile signal fed back by the dexterous hand while the verification action strategy is executed, and determining that the dexterous hand is in contact with the target calibration point through the target force point when the tactile signal accords with the tactile characteristic information, includes: acquiring the tactile signal fed back by the dexterous hand as the fingertip touch direction is changed a plurality of times, and determining that the dexterous hand is in contact with the target calibration point through the target force point when analysis of the tactile signal shows that the force point and the force area of the dexterous hand do not change.
  5. The robot vision positioning calibration method based on dexterous-hand tactile sensing according to claim 1, wherein there are a plurality of target calibration points, and calculating the error correction value from the initial coordinates and the actual coordinates includes: forming a point-pair set from the initial coordinates and the corresponding actual coordinates of each target calibration point; and taking the point-pair set as the input of a preset spatial transformation model and solving for the error correction value.
  6. The robot vision positioning calibration method based on dexterous-hand tactile sensing according to claim 1, wherein the dexterous hand comprises a plurality of fingers, each finger being provided with a target force point, and determining the actual coordinates of the target calibration point based on the real-time motion coordinates of the dexterous hand when the dexterous hand touches the target calibration point through the target force point comprises: when each finger of the dexterous hand touches the target calibration point through its corresponding target force point, determining the actual coordinates of the target calibration point based on the real-time motion coordinates of that finger; and calculating the error correction value from the initial coordinates and the actual coordinates includes: calculating a single correction value for each finger from the initial coordinates and the actual coordinates determined for that finger, and calculating the error correction value from the single correction values of all fingers.
  7. The robot vision positioning calibration method based on dexterous-hand tactile sensing according to claim 1, further comprising, before acquiring the initial task scene model constructed from environmental image data and determining the target calibration point and its initial coordinates based on the initial task scene model: acquiring environmental image data collected by a binocular camera, and rectifying the environmental image data using correction parameters obtained by prior testing; performing stereo matching on the rectified environmental images with a semi-global block matching algorithm to obtain a disparity map; acquiring the depth of each pixel in the disparity map, and back-projecting each pixel into three-dimensional space according to its depth to form a three-dimensional point cloud; and connecting the three-dimensional point cloud into a three-dimensional mesh, mapping the colour information of the environmental images onto the mesh, and generating the initial task scene model.
  8. The robot vision positioning calibration method based on dexterous-hand tactile sensing according to any one of claims 1 to 7, further comprising: when an operation task is received, obtaining an operation scene model corresponding to the operation task; determining the initial operation coordinates of an operation target in the operation task based on the operation scene model; and determining the real operation coordinates from the error correction value and the initial operation coordinates, so as to execute the operation task based on the real operation coordinates.
  9. A robot vision positioning calibration system based on dexterous-hand tactile sensing, the system comprising: a robot controller, configured to acquire an initial task scene model; an action control module, configured to determine a target calibration point, the initial coordinates of the target calibration point, a target force point of a dexterous hand, and an action calibration strategy based on the initial task scene model; a dexterous-hand controller, configured to control the dexterous hand to execute the action calibration strategy, the dexterous hand being provided with a tactile sensor; and a tactile data processing module, configured to analyse the tactile signal fed back by the tactile sensor, determine the actual coordinates of the target calibration point based on the real-time motion coordinates of the dexterous hand when the tactile signal indicates that the dexterous hand is in contact with the target calibration point through the target force point, and calculate an error correction value from the initial coordinates and the actual coordinates; wherein the robot controller is further configured to correct the initial task scene model based on the error correction value to obtain a true task scene model.
  10. The robot vision positioning calibration system based on dexterous-hand tactile sensing according to claim 9, further comprising: a task module, configured to receive a visual positioning calibration task; a binocular camera, configured to collect environmental image data; and a visual information control module, configured to construct the initial task scene model from the environmental image data.
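The reconstruction pipeline in claim 7 — rectify the stereo pair, run semi-global block matching to obtain a disparity map, then back-project each pixel by its depth — rests on the standard pinhole stereo geometry: depth Z = f·B / disparity, then X = (u − cx)·Z / fx and Y = (v − cy)·Z / fy. The sketch below illustrates only this back-projection step (the SGBM matching is assumed to have already produced the disparity map); the focal length, principal point, and baseline are hypothetical values, not taken from the patent.

```python
import numpy as np

def backproject(disparity, fx, fy, cx, cy, baseline):
    """Back-project a disparity map into a 3-D point cloud.

    Implements the depth-from-disparity geometry described in claim 7:
    Z = fx * baseline / disparity, X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Returns an (H, W, 3) array of points and a validity mask.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = disparity > 0                           # unmatched pixels have no depth
    Z = np.zeros_like(disparity, dtype=float)
    Z[valid] = fx * baseline / disparity[valid]
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return np.stack([X, Y, Z], axis=-1), valid

# Hypothetical calibration values, for illustration only.
fx = fy = 500.0        # focal length in pixels
cx, cy = 320.0, 240.0  # principal point
B = 0.1                # baseline in metres

disp = np.zeros((480, 640))
disp[240, 320] = 50.0  # one matched pixel at the principal point
cloud, valid = backproject(disp, fx, fy, cx, cy, B)
# At the principal point: Z = 500 * 0.1 / 50 = 1.0 m, X = Y = 0
```

In a full implementation, the valid points of `cloud` would then be meshed and textured with the image colours to form the initial task scene model, as the claim describes.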

Description

Robot vision positioning calibration method and system based on dexterous-hand tactile sensing

Technical Field

The application relates to the technical field of robot vision positioning, and in particular to a robot vision positioning calibration method and system based on dexterous-hand tactile sensing.

Background

As a high-degree-of-freedom end effector that mimics the function of the human hand, the robotic dexterous hand plays a core role in fields such as industrial precision assembly, operation in hazardous environments, and especially minimally invasive surgical robotics. One of its core capabilities is modelling the spatial information of a task scene through a vision system, so that accurate positioning and manipulation can be achieved. Existing robot vision positioning technology mainly relies on a camera to collect image data and uses image processing algorithms to construct a three-dimensional model, thereby perceiving and understanding the surrounding environment. Such vision-based positioning can reach a certain accuracy in an ideal environment, but in practice, factors such as camera distortion, calibration errors, and data processing errors introduce large deviations into image-based spatial modelling. In addition, severe changes in ambient light and poor illumination conditions interfere with the visual image information and further degrade positioning accuracy. Together, these factors limit the accuracy of robot vision positioning and make it difficult to meet the requirements of high-precision manipulation and assembly. A method that effectively improves the visual positioning accuracy of robots is therefore needed to overcome the defects of the prior art.

Disclosure of Invention

The application aims to provide a robot vision positioning calibration method and system based on dexterous-hand tactile sensing, so as to solve at least one of the above technical problems.
To achieve the above object, in a first aspect, the application provides a robot vision positioning calibration method based on dexterous-hand tactile sensing, the method comprising: acquiring an initial task scene model constructed from environmental image data, and determining a target calibration point and the initial coordinates of the target calibration point based on the initial task scene model; determining a target force point of a dexterous hand and an action calibration strategy based on the target calibration point and the initial coordinates, and controlling the dexterous hand to execute the action calibration strategy; when the dexterous hand touches the target calibration point through the target force point, determining the actual coordinates of the target calibration point based on the real-time motion coordinates of the dexterous hand; and calculating an error correction value from the initial coordinates and the actual coordinates, and correcting the initial task scene model with the error correction value to obtain a true task scene model.
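When several calibration points are available, the "preset spatial transformation model" that turns the paired initial/actual coordinates into an error correction value is commonly a rigid transform solved in closed form. The patent does not specify which transformation model it uses, so the sketch below shows one plausible instance — the standard SVD-based (Kabsch) solution — with illustrative function and variable names; the point data is synthetic.

```python
import numpy as np

def solve_correction(initial_pts, actual_pts):
    """Fit a rigid transform (R, t) such that actual ≈ R @ initial + t.

    initial_pts, actual_pts: (N, 3) arrays of paired coordinates from
    the vision model (initial) and tactile probing (actual).
    Uses the Kabsch/SVD method: centre both sets, build the
    cross-covariance, and take the rotation from its SVD.
    """
    P = np.asarray(initial_pts, dtype=float)
    Q = np.asarray(actual_pts, dtype=float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:         # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic example: "actual" coordinates are the "initial" ones shifted
# and slightly rotated, as if the vision model were miscalibrated.
rng = np.random.default_rng(0)
initial = rng.uniform(-0.5, 0.5, size=(6, 3))
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.01, -0.02, 0.005])
actual = initial @ R_true.T + t_true

R, t = solve_correction(initial, actual)
corrected = initial @ R.T + t  # apply the correction to the scene model
```

Applying `(R, t)` to every coordinate of the initial task scene model would yield the corrected ("true") scene model, which matches the role the error correction value plays in the method above.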
In some embodiments, determining the target force point of the dexterous hand and the action calibration strategy based on the target calibration point and the initial coordinates, and controlling the dexterous hand to execute the action calibration strategy, includes: determining a target force point of the dexterous hand and a preliminary calibration action based on the target calibration point and the initial coordinates; controlling the dexterous hand to execute the preliminary calibration action, and determining an actual force point when the dexterous hand feeds back a contact signal; determining a position offset of the dexterous hand based on the target force point and the actual force point; and determining a target calibration action of the dexterous hand based on the position offset, and controlling the dexterous hand to execute the target calibration action so that the dexterous hand touches the target calibration point through the target force point. In some embodiments, before determining the actual coordinates of the target calibration point based on the real-time motion coordinates of the dexterous hand when the dexterous hand touches the target calibration point through the target force point, the method further comprises: obtaining tactile characteristic information of the target calibration point; when the target force point of the dexterous hand feeds back a contact signal, determining a verification action strategy corresponding to the tactile characteristic information, and controlling the dexterous hand to execute the verification action strategy; and acquiring the tactile signal fed back by the dexterous hand while the verification action strategy is executed, and determining that the dexterous hand is in contact with the target calibration point through the target