CN-121979184-A - Collision early warning method and device in teleoperation of robot

CN 121979184 A

Abstract

The application relates to the technical field of robots and discloses a collision early-warning method and device for robot teleoperation. The method comprises: acquiring visual information of the remote robot's working environment and motion state data of its end effector; performing three-dimensional environment modeling based on the visual information to generate a three-dimensional environment model containing obstacles; dynamically generating a virtual protection area surrounding the end effector based on the end-effector speed parameter in the motion state data and the system communication and processing delay, wherein the spatial range of the virtual protection area expands as the end-effector speed and the delay increase; performing collision risk assessment between the virtual protection area and the obstacles in the three-dimensional environment model to obtain a current collision risk level; and generating early-warning information based on the collision risk level and outputting a visually enhanced image through an operator terminal, so as to guide the operator to avoid the collision risk. The application can improve teleoperation safety.

Inventors

  • HE YU
  • XIE KUN

Assignees

  • Zigong Youlong Intelligent Technology Co., Ltd. (自贡优龙智能科技有限公司)
  • Shenzhen UBTECH Robotics Corp., Ltd. (深圳市优必选科技股份有限公司)

Dates

Publication Date
2026-05-05
Application Date
2026-02-05

Claims (12)

  1. A collision early-warning method in robot teleoperation, characterized by comprising the following steps: acquiring visual information of the remote robot's working environment and motion state data of an end effector; performing three-dimensional environment modeling based on the visual information to generate a three-dimensional environment model containing an obstacle, and dynamically generating a virtual protection area surrounding the end effector based on an end-effector speed parameter in the motion state data and a system communication and processing delay, wherein the spatial range of the virtual protection area expands as the end-effector speed and the system communication and processing delay increase; performing collision risk assessment on the virtual protection area and the obstacle in the three-dimensional environment model to obtain a current collision risk level; and generating early-warning information based on the collision risk level, and outputting a visually enhanced image through an operator terminal so as to guide an operator to avoid the collision risk based on the early-warning information.
  2. The collision early-warning method in robot teleoperation according to claim 1, wherein performing three-dimensional environment modeling based on the visual information to generate the three-dimensional environment model containing the obstacle comprises: acquiring a stereoscopic image sequence of the global scene through a first visual sensor deployed in the working environment, and acquiring a local depth image sequence of the working area through a second visual sensor mounted at the robot's end; processing the stereoscopic image sequence to generate a first dense depth map sequence, and performing spatio-temporal registration of the local depth image sequence against the first dense depth map sequence to obtain a registered second dense depth map sequence; and performing dense surface reconstruction and incremental fusion according to the second dense depth map sequence and a camera pose estimation result for the image frames corresponding to the second visual sensor, to generate the three-dimensional environment model.
  3. The collision early-warning method in robot teleoperation according to claim 2, wherein performing dense surface reconstruction and incremental fusion according to the second dense depth map sequence and the camera pose estimation result for the image frames corresponding to the second visual sensor to generate the three-dimensional environment model comprises: estimating an initial camera pose for each image frame using a visual-inertial odometer that tightly couples feature-point observations with inertial measurement data, wherein the feature points are image features of physical-space points obtained from multiple visual detections; selecting key frames that satisfy a movement-distance or viewing-angle-change condition, performing bundle adjustment on the key frames and the three-dimensional map points obtained by triangulating the feature points, and optimizing the initial camera poses and the coordinates of the three-dimensional map points to obtain a globally consistent sparse map; fusing the second dense depth maps as observation inputs into a three-dimensional voxel grid by weighted averaging, using a voxel-based truncated signed distance function method; and periodically extracting the zero isosurface of the three-dimensional voxel grid to generate a triangular mesh model comprising the surface geometry and texture mapping, the triangular mesh model serving as the three-dimensional environment model.
  4. The collision early-warning method in robot teleoperation according to claim 1, wherein dynamically generating the virtual protection area surrounding the end effector based on the end-effector speed parameter in the motion state data and the system communication and processing delay comprises: determining a speed-compensation distance based on the end-effector speed parameter and the system communication and processing delay; determining a target size of the dynamic protection area based on the speed-compensation distance and a preset basic safety size; and constructing a three-dimensional geometric body of the target size referenced to the current position of the end effector, the three-dimensional geometric body serving as the virtual protection area.
  5. The collision early-warning method in robot teleoperation according to claim 4, wherein performing collision risk assessment on the virtual protection area and the obstacle in the three-dimensional environment model to obtain the current collision risk level comprises: performing collision detection between the virtual protection area and the obstacle in the three-dimensional environment model, and calculating the shortest predicted distance between them; and comparing the shortest predicted distance with preset risk thresholds and outputting the collision risk level.
  6. The collision early-warning method in robot teleoperation according to claim 5, wherein the collision detection between the virtual protection area and the obstacle in the three-dimensional environment model and the calculation of the shortest predicted distance between them comprise: representing the obstacle in the three-dimensional environment model as at least one oriented bounding box; treating the virtual protection area as a spherical geometry centered on the end effector; calculating, based on the separating axis theorem, the projection center distance and the sum of the projection radii of the obstacle and the virtual protection area on each candidate separating axis; and if on no candidate axis the projection center distance exceeds the sum of the projection radii, judging that the virtual protection area intersects the obstacle, and determining the shortest predicted distance from the direction of minimum penetration depth.
  7. The collision early-warning method in robot teleoperation according to claim 5, wherein the collision risk level comprises a first risk level, a second risk level, and a third risk level: when the shortest predicted distance is greater than a first risk threshold, the current collision risk level is determined to be the first risk level; when the shortest predicted distance is less than or equal to a second risk threshold, the current collision risk level is determined to be the third risk level; and when the shortest predicted distance is greater than the second risk threshold and less than or equal to the first risk threshold, the current collision risk level is determined to be the second risk level; wherein the first risk threshold is greater than the second risk threshold.
  8. The collision early-warning method in robot teleoperation according to claim 5, further comprising outputting a haptic feedback signal corresponding to the collision risk level through a force feedback device, the generation of the haptic feedback signal comprising: calculating a virtual collision force vector from the shortest predicted distance and the unit normal vector at the nearest contact point between the virtual protection area and the obstacle; mapping the virtual collision force vector into virtual torques at each joint using the Jacobian matrix transpose of the slave robot; converting the virtual torques into an operating force signal at the operator terminal through the motion mapping relationship between the slave robot and the operator terminal; and scaling the operating force signal according to the motion scale relationship between the operator terminal and the robot, and outputting the haptic feedback signal.
  9. The collision early-warning method in robot teleoperation according to claim 7, wherein the visually enhanced image comprises: a semi-transparent marker graphic overlaid at the robot's end position, representing the spatial range of the current virtual protection area; a safe-operation boundary rendered around a third-risk-level obstacle, indicating the spatial range in which the robot may safely move and comprising at least one of a guide channel or a forbidden-zone surface; and a dynamic navigation path line or guiding arrow overlaid on the operation view, prompting the operator to approach the target or bypass the obstacle.
  10. The collision early-warning method in robot teleoperation according to claim 9, wherein generating the early-warning information based on the collision risk level and outputting the visually enhanced image through the operator terminal to guide the operator to avoid the collision risk comprises: at the first risk level, displaying the semi-transparent marker graphic in a first color and maintaining the normal visual state; at the second risk level, displaying the semi-transparent marker graphic in a second color and generating a guide channel within the safe-operation boundary to prompt the operator to move along a designated path; and at the third risk level, displaying the semi-transparent marker graphic in a third color, generating a forbidden-zone surface marker on the corresponding dangerous surface, and intensifying the display of the dynamic navigation path line.
  11. The collision early-warning method in robot teleoperation according to claim 10, wherein the generation of the guide channel comprises: taking the current position of the robot as a starting point, planning a local obstacle-avoidance path under the combined action of a repulsive potential field and an attractive potential field; and generating, by spline-curve interpolation, a smooth three-dimensional pipe mesh with the local obstacle-avoidance path as its central axis and a cross-section radius not exceeding a safety proportion of the current target size of the virtual protection area, the three-dimensional pipe mesh serving as the guide channel of the safe-operation boundary.
  12. A collision early-warning device in robot teleoperation, characterized by comprising: an acquisition module for acquiring visual information of the remote robot's working environment and motion state data of the end effector; a generation module for performing three-dimensional environment modeling based on the visual information to generate a three-dimensional environment model containing an obstacle, and for dynamically generating a virtual protection area surrounding the end effector based on the end-effector speed parameter in the motion state data and the system communication and processing delay, wherein the spatial range of the virtual protection area expands as the end-effector speed and the system communication and processing delay increase; an evaluation module for performing collision risk assessment on the virtual protection area and the obstacle in the three-dimensional environment model to obtain the current collision risk level; and an early-warning module for generating early-warning information based on the collision risk level and outputting a visually enhanced image through an operator terminal so as to guide an operator to avoid the collision risk based on the early-warning information.
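Claims 5 to 7 describe computing a shortest predicted distance between the spherical protection area and an oriented-bounding-box obstacle and mapping it to one of three risk levels. A minimal illustrative sketch follows; it is not the patent's implementation: the closest-point formulation is an equivalent alternative to the claimed separating-axis test for the sphere-vs-box case, and the threshold values are assumptions.

```python
import numpy as np

def sphere_obb_distance(center, radius, obb_center, obb_axes, obb_half):
    """Shortest predicted distance between the protection sphere and an
    oriented bounding box; negative means the sphere penetrates the box."""
    local = obb_axes.T @ (center - obb_center)     # sphere center in box frame
    clamped = np.clip(local, -obb_half, obb_half)  # closest point inside the box
    closest = obb_center + obb_axes @ clamped
    return float(np.linalg.norm(center - closest)) - radius

def risk_level(shortest_distance, first_threshold=0.5, second_threshold=0.2):
    """Three-level scale of claim 7 (threshold values in metres are assumed)."""
    if shortest_distance > first_threshold:
        return 1                                   # first level: safe
    if shortest_distance > second_threshold:
        return 2                                   # second level: caution
    return 3                                       # third level: imminent collision

# Axis-aligned unit box at the origin; protection sphere of radius 0.5 at x = 3.
d = sphere_obb_distance(np.array([3.0, 0.0, 0.0]), 0.5,
                        np.zeros(3), np.eye(3), np.ones(3))
print(d, risk_level(d))  # 1.5 1
```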
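The haptic path of claim 8 (virtual collision force mapped to joint torques through the slave robot's Jacobian transpose, then scaled to an operator-side force cue) can be sketched as below. The stiffness constant, the toy Jacobian, and the motion scale are illustrative assumptions, not values from the patent.

```python
import numpy as np

def haptic_feedback(shortest_distance, contact_normal, jacobian,
                    stiffness=200.0, motion_scale=0.1):
    """Map a virtual collision force to an operator-side signal (claim 8 sketch)."""
    penetration = max(0.0, -shortest_distance)        # active only on penetration
    force = stiffness * penetration * contact_normal  # virtual collision force vector
    torques = jacobian.T @ force                      # tau = J^T f, joint torques
    return torques / motion_scale                     # scale via motion mapping

# Toy 2-joint arm whose end effector moves in x and z.
J = np.array([[1.0, 0.0],
              [0.0, 0.0],
              [0.0, 1.0]])                            # 3x2 end-effector Jacobian
print(haptic_feedback(-0.01, np.array([0.0, 0.0, 1.0]), J))  # [ 0. 20.]
```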
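The guide-channel axis of claim 11 is a local obstacle-avoidance path planned under attractive and repulsive potential fields. A hedged sketch of the stepping rule is given below; the gains, repulsion range, and step size are assumed, and the spline pipe-mesh generation is omitted.

```python
import numpy as np

def potential_step(pos, goal, obstacle, step=0.05,
                   k_att=1.0, k_rep=0.1, rep_range=1.0):
    """Advance one fixed-length step along the combined potential field:
    attraction toward the goal, repulsion away from a nearby obstacle."""
    f_att = k_att * (goal - pos)                 # attractive force toward goal
    d_vec = pos - obstacle
    d = np.linalg.norm(d_vec)
    f_rep = np.zeros_like(pos)
    if 1e-9 < d < rep_range:                     # repulsion only inside its range
        f_rep = k_rep * (1.0 / d - 1.0 / rep_range) / d**2 * (d_vec / d)
    f = f_att + f_rep
    return pos + step * f / max(np.linalg.norm(f), 1e-9)

# Iterate a few steps: the path advances toward the goal while deflecting
# away from the obstacle placed just off the straight line.
pos = np.zeros(3)
goal = np.array([1.0, 0.0, 0.0])
obs = np.array([0.5, 0.05, 0.0])
for _ in range(10):
    pos = potential_step(pos, goal, obs)
print(np.round(pos, 3))
```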

Description

Collision early warning method and device in teleoperation of robot

Technical Field

The application relates to the technical field of robots, and in particular to a collision early-warning method and device in robot teleoperation.

Background

In robot teleoperation, communication delay makes it difficult for the operator to perceive changes in the remote environment in real time, which easily causes lagging motion control and collision accidents. The prior art usually performs collision early warning with a virtual protection area of fixed boundary: the protection range cannot be dynamically adjusted according to the robot's end-effector speed and the system delay, and forward-looking risk prediction is difficult under high-speed operation or high delay. Moreover, traditional methods rely on static environment modeling and passive alerting, lacking support for real-time reconstruction of the three-dimensional environment structure and for dynamic interaction; warnings are therefore untimely and unintuitive, increasing the operator's cognitive load.

Disclosure of Invention

In view of the above, embodiments of the application provide a collision early-warning method and device in robot teleoperation, which can effectively solve the problems of delayed collision warning and poor adaptability caused by communication delay and fixed protection boundaries in the prior art, and improve the safety and real-time interaction capability of teleoperation.
In a first aspect, an embodiment of the present application provides a collision early-warning method in robot teleoperation, including: acquiring visual information of the remote robot's working environment and motion state data of an end effector; performing three-dimensional environment modeling based on the visual information to generate a three-dimensional environment model containing an obstacle, and dynamically generating a virtual protection area surrounding the end effector based on an end-effector speed parameter in the motion state data and a system communication and processing delay, wherein the spatial range of the virtual protection area expands as the end-effector speed and the system communication and processing delay increase; performing collision risk assessment on the virtual protection area and the obstacle in the three-dimensional environment model to obtain a current collision risk level; and generating early-warning information based on the collision risk level, and outputting a visually enhanced image through an operator terminal so as to guide an operator to avoid the collision risk based on the early-warning information.
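The speed-compensated sizing of the virtual protection area described above can be sketched as a one-line relation. The linear form and the margin factor are assumptions for illustration; the text only requires that the size grow with end-effector speed and with delay.

```python
def protection_radius(base_radius_m, end_speed_mps, comm_delay_s, margin=1.2):
    """Target size of the virtual protection area: the preset basic safety
    size plus a speed-compensation distance covering the motion that can
    occur during the communication and processing delay."""
    return base_radius_m + end_speed_mps * comm_delay_s * margin

# 0.10 m basic safety size, end effector at 0.5 m/s, 200 ms total delay:
print(protection_radius(0.10, 0.5, 0.2, margin=1.0))  # 0.2
```

A faster end effector or a longer delay thus directly enlarges the protected volume, giving the operator earlier warnings exactly when reaction time is scarcest.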
In an alternative embodiment, the three-dimensional environment modeling based on the visual information to generate the three-dimensional environment model containing the obstacle includes: acquiring a stereoscopic image sequence of the global scene through a first visual sensor deployed in the working environment, and acquiring a local depth image sequence of the working area through a second visual sensor mounted at the robot's end; processing the stereoscopic image sequence to generate a first dense depth map sequence, and performing spatio-temporal registration of the local depth image sequence against the first dense depth map sequence to obtain a registered second dense depth map sequence; and performing dense surface reconstruction and incremental fusion according to the second dense depth map sequence and a camera pose estimation result for the image frames corresponding to the second visual sensor, to generate the three-dimensional environment model. In an optional embodiment, generating the three-dimensional environment model by dense surface reconstruction and incremental fusion according to the second dense depth map sequence and the camera pose estimation result includes: estimating an initial camera pose for each image frame using a visual-inertial odometer that tightly couples feature-point observations with inertial measurement data, wherein the feature points are image features of physical-space points obtained from multiple visual detections; selecting key frames that satisfy a movement-distance or viewing-angle-change condition, performing bundle adjustment on the key frames and the three-dimensional map points obtained by triangulating the feature points, and optimizing the initial camera poses and the coordinates of the three-dimensional map points to obtain a globally consistent sparse map; fusing the second dense depth maps as observation inputs into a three-dimensional voxel grid by weighted averaging, using a voxel-based truncated signed distance function method; and periodically extracting the zero isosurface of the three-dimensional voxel grid to generate a triangular mesh model
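The incremental fusion step described above, the weighted-average update of a voxel-based truncated signed distance function (TSDF), can be sketched per voxel as follows. The truncation distance and observation weights are illustrative assumptions.

```python
import numpy as np

TRUNC = 0.05  # truncation distance in metres (assumed value)

def fuse_observation(tsdf, weights, sdf_obs, obs_weight=1.0):
    """Fuse one depth observation (as a signed distance per voxel) into the
    grid by the standard TSDF weighted-average update."""
    sdf_obs = np.clip(sdf_obs, -TRUNC, TRUNC)  # truncate the signed distance
    new_w = weights + obs_weight
    tsdf = (tsdf * weights + sdf_obs * obs_weight) / new_w
    return tsdf, new_w

# Toy single-voxel grid: start empty, fuse two observations in turn.
tsdf = np.zeros(1)
w = np.zeros(1)
tsdf, w = fuse_observation(tsdf, w, np.array([0.04]))
tsdf, w = fuse_observation(tsdf, w, np.array([0.02]))
print(tsdf, w)  # averaged SDF value 0.03 with accumulated weight 2
```

Successive noisy depth maps thus converge toward a stable surface estimate, and the zero isosurface of the grid is the reconstructed mesh surface.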