CN-116787423-B - Reactive interaction for robotic applications and other automated systems
Abstract
The present disclosure relates to reactive interaction for robotic applications and other automated systems. The methods presented herein provide predictive control of a robot or automated assembly while performing a particular task. The task to be performed may depend on the position and orientation of the robot performing the task. The predictive control system may determine the state of the physical environment at each time step in a sequence of time steps and may select an appropriate position and orientation at each of these time steps. At each time step, an optimization process may determine a sequence of future movements or accelerations to be taken that conforms to one or more motion constraints. A corresponding action in the sequence may be performed at that time step, and another sequence of movements may then be predicted for the next time step, which can help drive the robot's movement based on predicted future motion and allow for fast reaction.
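The receding-horizon behavior described above can be illustrated with a short sketch. This is a minimal, hypothetical example, not the patented implementation: a one-dimensional point "robot" re-plans a short acceleration sequence under an acceleration constraint at every time step, executes only the first planned action, and then re-plans from the newly observed state, so a moving target can be tracked reactively. All names, gains, and constants here are assumptions for illustration.

```python
import numpy as np

A_MAX = 2.0    # acceleration constraint (assumed limit)
HORIZON = 5    # number of future time steps planned per iteration (assumed)
DT = 0.1       # control period in seconds (assumed)

def plan_accelerations(pos, vel, target):
    """Roll out a HORIZON-long acceleration sequence that decreases a
    quadratic tracking cost, clipping each step to satisfy the limit."""
    plan, p, v = [], pos, vel
    for _ in range(HORIZON):
        a = float(np.clip(4.0 * (target - p) - 3.0 * v, -A_MAX, A_MAX))
        plan.append(a)
        v += a * DT        # simulate forward to plan the next step
        p += v * DT
    return plan

pos, vel = 0.0, 0.0
for step in range(100):
    target = 1.0 + 0.2 * np.sin(0.1 * step)  # target may move between steps
    plan = plan_accelerations(pos, vel, target)
    vel += plan[0] * DT    # execute only the first planned action...
    pos += vel * DT
    # ...then loop back and re-plan, reacting to the updated target
print(f"final position {pos:.3f}, final target {target:.3f}")
```

Executing only the first action of each freshly optimized sequence is what allows fast reaction: if the hand or object moves, the next optimization starts from the newly observed state rather than the stale plan.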
Inventors
- Wei Yang
- Balakumar Sundaralingam
- Colin Paxton
- Mehmet Cakmak
- Yu-Wei Chao
- Dieter Fox
- Iretiayo Akinola
Assignees
- NVIDIA Corporation
Dates
- Publication Date: 2026-05-12
- Application Date: 2023-02-24
- Priority Date: 2022-06-30
Claims (20)
- 1. A method for controlling a robot, comprising: receiving data representative of an environment, the environment including an object held by a human hand; determining a target grasp option from one or more potential grasp options corresponding to the robot based at least in part on evaluating the one or more potential grasp options; determining a motion sequence between an initial position and a final position of the robot according to the target grasp option, the motion sequence determined based at least in part on minimizing one or more cost functions and satisfying one or more motion constraints; causing the robot to perform respective motions in the motion sequence at respective time steps corresponding to one or more motions in the motion sequence; detecting contact of the robot with the object corresponding to the target grasp position; and causing an end effector of the robot to grasp the object.
- 2. The method of claim 1, wherein the one or more motion constraints include at least one of a constraint that limits acceleration, a constraint that favors linear motion, a constraint that avoids collisions, or a constraint that avoids occlusion of a sensor used to capture the data.
- 3. The method of claim 1, wherein determining the motion sequence comprises using a model predictive control (MPC) system to perform at least one optimization algorithm having the one or more motion constraints.
- 4. The method of claim 3, wherein determining the target grasp option is performed using the MPC system.
- 5. The method of claim 3, wherein the MPC system is used to optimize the motion sequence for each of the one or more potential grasp options.
- 6. The method of claim 1, further comprising: modifying the one or more motion constraints based at least in part on one or more user inputs.
- 7. The method of claim 1, further comprising: monitoring whether the end effector is in contact with the human hand during the respective time steps of the movement; and performing one or more operations upon determining that the end effector has contacted the human hand.
- 8. The method of claim 1, wherein each motion in the motion sequence is determined using one or more joint accelerations optimized for the respective time step.
- 9. A method for controlling a robot, comprising: determining a set of positions at which the robot is to perform an action during a sequence of time steps; determining a motion sequence between a current position and a final position in the set of positions that satisfies one or more motion constraints, the motion sequence allowing a target position in the set of positions to be changed at each time step in the sequence of time steps; causing the robot to perform respective motions in the motion sequence at respective time steps of the sequence of time steps to move at least a portion of the robot relative to the target position; and causing the robot to perform the action when the at least a portion of the robot is determined to be within a threshold distance from the target position.
- 10. The method of claim 9, wherein one or more positions in the set of positions comprise position and orientation information.
- 11. The method of claim 9, wherein the motion sequence is determined using a predictive model and one or more optimization criteria.
- 12. The method of claim 11, wherein the predictive model optimizes the motion sequence over the set of positions and evaluates the target position at each time step in the sequence of time steps.
- 13. The method of claim 9, wherein the one or more motion constraints include at least one of a constraint that limits acceleration, a constraint that favors linear motion, a constraint that avoids collisions, or a constraint that avoids occlusion of a sensor.
- 14. The method of claim 9, further comprising: capturing, during the sequence of time steps, sensor data representative of a physical environment in which the robot is to perform the action; and determining the motion sequence based at least in part on a determined change in the physical environment.
- 15. A system for controlling a robot, comprising: one or more processing units to: determine a set of positions at which the robot is to perform an action during a sequence of time steps; determine a motion sequence between a current position and a final position in the set of positions that satisfies one or more motion constraints, the motion sequence allowing a target position in the set of positions to be changed at each time step in the sequence of time steps; cause the robot to perform respective motions in the motion sequence at respective time steps of the sequence of time steps to move at least a portion of the robot relative to the target position; and cause the robot to perform the action when the at least a portion of the robot is determined to be within a threshold distance from the target position.
- 16. The system of claim 15, wherein the one or more processing units are further to: use a predictive model to determine the motion sequence based at least in part on one or more optimization criteria for the motion sequence.
- 17. The system of claim 16, wherein the one or more processing units are further to: optimize the motion sequence over the set of positions using the predictive model, and evaluate the target position at each time step in the sequence of time steps.
- 18. The system of claim 15, wherein the one or more processing units are further to: capture, during the sequence of time steps, sensor data representative of a physical environment in which the robot is to perform the action; and determine a change in the target position based at least in part on a determined change in the physical environment.
- 19. The system of claim 15, wherein the one or more motion constraints include at least one of a constraint that limits acceleration, a constraint that favors linear motion, a constraint that avoids collisions, or a constraint that avoids occlusion of a sensor.
- 20. The system of claim 15, wherein the system comprises at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing digital twin operations; a system for performing collaborative content creation for 3D assets; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system for performing conversational AI operations; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least in part using cloud computing resources.
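As an illustration of the control flow recited in claims 1 and 9, the following hypothetical sketch scores candidate grasp options, takes constrained steps toward the selected target (which may be re-evaluated at every time step), and triggers the grasp once the end effector is within a threshold distance. The scoring rule, step cap, and threshold are illustrative assumptions only; the claims leave these choices open.

```python
import numpy as np

THRESHOLD = 0.02   # trigger distance in meters (assumed)
MAX_STEP = 0.05    # per-step motion cap, standing in for the motion constraints

def select_target_grasp(candidates, hand_pos):
    """One possible evaluation: prefer the candidate farthest from the hand."""
    return max(candidates, key=lambda g: float(np.linalg.norm(g - hand_pos)))

def step_toward(pos, target):
    """Take one bounded step toward the target position."""
    delta = target - pos
    dist = float(np.linalg.norm(delta))
    return pos + delta * min(1.0, MAX_STEP / max(dist, 1e-9))

hand = np.array([0.5, 0.1, 0.3])                  # observed hand position
candidates = [hand + np.array([0.0, 0.0, 0.1]),   # grasp from above
              hand + np.array([0.1, 0.0, 0.0])]   # grasp from the side
pos = np.zeros(3)                                 # end effector start
for _ in range(200):
    target = select_target_grasp(candidates, hand)  # re-evaluated each step
    pos = step_toward(pos, target)
    if float(np.linalg.norm(pos - target)) < THRESHOLD:
        print("within threshold distance: close end effector and grasp")
        break
```

Re-selecting the target at every step is what lets the target position change mid-motion, as claim 9 requires; the bounded step is a placeholder for the claimed acceleration, collision, and occlusion constraints.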
Description
Reactive interaction for robotic applications and other automated systems

Cross Reference to Related Applications

The present application claims priority from U.S. Provisional Patent Application No. 63/321,755, entitled "Reactive Handovers with Fast Joint-Space Model-Predictive Behavior," filed March 20, 2022, the entire contents of which are incorporated herein by reference for all purposes.

Background

Robots and other automated devices are increasingly used to assist in performing a variety of tasks. At least some of these tasks involve interaction with humans or other entities, such as handover actions in which a robot grasps and removes an object from a human hand. For such actions, it is important that the robot grasp the object in a way that does not pinch or otherwise contact the person or entity from which the object is to be taken. In existing systems, the movements of a robot performing such tasks may not be smooth, intuitive, or reliable, which may cause a human to make quick or unexpected movements. Such movement may increase the likelihood of contact with the robot, or may result in the human hand blocking a camera used to provide the robot with a view of its surroundings during the movement, among other undesirable outcomes.

Drawings

Various embodiments according to the present disclosure will be described with reference to the accompanying drawings, in which:

- FIGS. 1A, 1B, 1C, and 1D illustrate images of a robot performing a handover operation in accordance with at least one embodiment;
- FIGS. 2A, 2B, 2C, and 2D illustrate a method of a robot gripping an object during a handover operation in accordance with at least one embodiment;
- FIGS. 3A, 3B, and 3C illustrate reactive actions that a robot may take due at least in part to a change in environmental state in accordance with at least one embodiment;
- FIG. 4 illustrates an example system for a robot to perform one or more actions in an environment in accordance with at least one embodiment;
- FIGS. 5A and 5B illustrate an example process for moving a robot or automated assembly to a determined position and orientation to perform a task in accordance with at least one embodiment;
- FIG. 6 illustrates components of a distributed system that may be used to cause a robot to perform one or more tasks in accordance with at least one embodiment;
- FIGS. 7A and 7B illustrate inference and/or training logic in accordance with at least one embodiment;
- FIG. 8 illustrates an example data center system in accordance with at least one embodiment;
- FIGS. 9 and 10 illustrate computer systems in accordance with at least one embodiment;
- FIGS. 11 and 12 illustrate at least a portion of a graphics processor in accordance with one or more embodiments;
- FIG. 13 is an example data flow diagram of an advanced computing pipeline in accordance with at least one embodiment;
- FIG. 14 is a system diagram of an example system for training, adapting, instantiating, and deploying a machine learning model in an advanced computing pipeline in accordance with at least one embodiment; and
- FIGS. 15A and 15B illustrate a data flow diagram of a process for training a machine learning model, and a client-server architecture that uses a pre-trained annotation model to augment an annotation tool, in accordance with at least one embodiment.

Detailed Description

In the following description, various embodiments will be described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the embodiments. It will be apparent, however, to one skilled in the art that the embodiments may be practiced without some of these specific details. Furthermore, well-known features may be omitted or simplified in order not to obscure the described embodiments.

The systems and methods described herein may be used by, but are not limited to, non-autonomous vehicles, semi-autonomous vehicles (e.g., in one or more adaptive driver assistance systems (ADAS)), driving and non-driving robots or robotic platforms, warehouse vehicles, off-road vehicles, vehicles coupled to one or more trailers, aircraft, boats, shuttles, emergency response vehicles, motorcycles, electric or motorized bicycles, airplanes, engineering vehicles, underwater vehicles, drones, and/or other vehicle types. Further, the systems and methods described herein may be used for a variety of purposes, such as, but not limited to, machine control, machine motion, machine driving, synthetic data generation, model training, perception, augmented reality, virtual reality, mixed reality, robotics, security and surveillance, simulation and digital twinning, autonomous o