US-12625494-B2 - Remote operations of vehicles
Abstract
In various examples, at least partial control of a vehicle may be transferred to a control system remote from the vehicle. Sensor data may be received from a sensor(s) of the vehicle and the sensor data may be encoded to generate encoded sensor data. The encoded sensor data may be transmitted to the control system for display on a virtual reality headset of the control system. Control data may be received by the vehicle and from the control system that may be representative of a control input(s) from the control system, and actuation by an actuation component(s) of the vehicle may be caused based on the control input.
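For illustration only, the following is a minimal sketch of the vehicle-side loop the abstract describes: sensor data is encoded and streamed to the remote control system, and any control input received back drives the actuation component(s). All names here (ControlInput, the sensors/encoder/link/actuators objects, remote_control_loop) are hypothetical placeholders, not APIs taken from the patent.

```python
# Minimal sketch of the vehicle-side handoff loop described in the
# abstract. Every class and method name below is a hypothetical
# placeholder, not something specified by the patent.
import time
from dataclasses import dataclass

@dataclass
class ControlInput:
    steering: float   # normalized [-1, 1]
    throttle: float   # normalized [0, 1]
    brake: float      # normalized [0, 1]

def remote_control_loop(sensors, encoder, link, actuators, hz=30):
    """Stream encoded sensor data out; apply control inputs coming back."""
    period = 1.0 / hz
    while link.remote_has_control():
        frame = sensors.read()             # raw camera/LIDAR/etc. data
        link.send(encoder.encode(frame))   # e.g., a video-codec encode step
        control = link.poll_control()      # latest ControlInput, or None
        if control is not None:
            actuators.apply(control)       # steering/throttle/brake actuation
        time.sleep(period)
```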
Inventors
- Jen-Hsun Huang
- Prajakta Gudadhe
- Justin Ebert
- Dane Johnston
Assignees
- NVIDIA CORPORATION
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-07-18
Claims (20)
- 1. A method comprising: sending, using an autonomous machine and to a remote system, sensor data obtained using one or more sensors of the autonomous machine as the autonomous machine operates in an environment; receiving, using the autonomous machine and from the remote system, location data indicating a location in the environment for the autonomous machine to navigate toward, the location data determined based at least on one or more waypoints that indicate the location in the environment; determining, based at least on the location data indicating the location in the environment, one or more controls for navigating the autonomous machine toward the location; and controlling, based at least on the one or more controls, the autonomous machine to navigate toward the location in the environment.
- 2. The method of claim 1, wherein the one or more waypoints include one or more virtual waypoints within a visualization generated based at least on the sensor data.
- 3. The method of claim 1, wherein the one or more waypoints are identified by a remote operator of the remote system during display of a visualization generated based at least on the sensor data.
- 4. The method of claim 3, wherein the visualization includes a virtual representation of the environment.
- 5. The method of claim 4, wherein the virtual representation is a three-dimensional (3D) virtual representation.
- 6. The method of claim 4, wherein the one or more waypoints are identified by the remote operator based at least on the remote operator pointing to one or more virtual locations within the virtual representation of the environment.
- 7. The method of claim 4, wherein the one or more waypoints are associated with the virtual representation, and the one or more controls are determined based at least on converting the one or more waypoints to one or more real-world waypoints.
- 8. The method of claim 4, wherein the virtual representation of the environment is capable of being displayed from different vantage points.
- 9. The method of claim 3, further comprising: determining, based at least on the sensor data, that the autonomous machine has encountered a situation that requires at least partial control from the remote system, wherein the display of the visualization includes the situation.
- 10. The method of claim 3, wherein the display by the remote system further includes a display of one or more video streams represented by the sensor data.
- 11. The method of claim 1, wherein, during the navigation toward the location in the environment, the autonomous machine is capable of executing a safety procedure and disregarding the location data indicating the location in the environment received from the remote system.
- 12. The method of claim 1, further comprising: determining, based at least on the one or more waypoints, a path through the environment, wherein the one or more controls are determined based at least on the path.
- 13. The method of claim 1, wherein the method is executed using at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing deep learning operations; a system implemented using a machine; a system for generating synthetic data; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.
- 14. A method comprising: receiving, at a remote system and from an autonomous machine, sensor data obtained using one or more sensors of the autonomous machine as the autonomous machine operates in an environment; displaying a visualization generated based at least on the sensor data; receiving one or more inputs indicating, within the visualization, one or more waypoints for the autonomous machine to navigate to within the environment; and sending data indicating the one or more waypoints to the autonomous machine to control the autonomous machine to navigate toward a location in the environment.
- 15. The method of claim 14, wherein the one or more waypoints include one or more virtual waypoint locations within the visualization.
- 16. The method of claim 14, wherein the visualization includes a virtual representation of the environment.
- 17. The method of claim 16, wherein the virtual representation is a three-dimensional (3D) virtual representation.
- 18. The method of claim 16, wherein the one or more waypoints are identified by a remote operator based at least on the remote operator pointing to one or more virtual locations within the virtual representation of the environment.
- 19. The method of claim 14, wherein the visualization is capable of being displayed from different vantage points.
- 20. The method of claim 14, wherein the visualization includes a presentation of one or more video streams represented by the sensor data.
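For illustration only, a minimal sketch of the waypoint flow recited in claims 1, 7, and 12 follows: virtual waypoints selected in the visualization are converted to real-world coordinates, joined into a path, and used to derive steering controls. The homogeneous transform and the pure-pursuit-style steering rule are illustrative assumptions; the claims do not specify any particular conversion or control law.

```python
# Minimal sketch of the waypoint flow in claims 1, 7, and 12: virtual
# waypoints picked in the visualization are converted to real-world
# coordinates, densified into a path, and turned into steering controls.
# The 4x4 transform and pure-pursuit steering are illustrative
# assumptions, not details taken from the patent.
import numpy as np

def virtual_to_world(waypoints_virtual, T_world_from_virtual):
    """Convert Nx3 virtual-scene waypoints to real-world coordinates (claim 7)."""
    pts = np.hstack([waypoints_virtual, np.ones((len(waypoints_virtual), 1))])
    return (pts @ T_world_from_virtual.T)[:, :3]

def plan_path(waypoints_world, step=0.5):
    """Densify the waypoint sequence into a path by linear interpolation (claim 12)."""
    path = []
    for a, b in zip(waypoints_world[:-1], waypoints_world[1:]):
        n = max(int(np.linalg.norm(b - a) / step), 1)
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            path.append(a + t * (b - a))
    path.append(waypoints_world[-1])
    return np.array(path)

def steering_toward(pose_xy, heading, target_xy, wheelbase=2.8):
    """Pure-pursuit-style steering angle toward the next path point."""
    dx, dy = target_xy - pose_xy
    alpha = np.arctan2(dy, dx) - heading   # bearing to target in vehicle frame
    lookahead = np.hypot(dx, dy)
    return np.arctan2(2.0 * wheelbase * np.sin(alpha), lookahead)
```

A vehicle-side planner could call these in sequence: convert the operator's picks, densify them into a path, then steer toward successive path points, while (per claim 11) remaining free to disregard the remote target in favor of a safety procedure.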
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 18/343,442, filed Jun. 28, 2023, which is a continuation of U.S. patent application Ser. No. 17/379,691, filed Jul. 19, 2021, which is a continuation of U.S. patent application Ser. No. 16/366,506, filed Mar. 27, 2019, which claims the benefit of U.S. Provisional Application No. 62/648,493, filed on Mar. 27, 2018. Each of these applications is incorporated herein by reference in its entirety.

BACKGROUND

As autonomous vehicles become more prevalent and rely less on direct human control, they may be required to navigate environments or situations that are unknown to them. For example, navigating around pieces of debris in the road, navigating around an accident, crossing into oncoming lanes when a lane of the autonomous vehicle is blocked, navigating through unknown environments or locations, and/or navigating other situations or scenarios may not be possible using the underlying systems of the autonomous vehicle while still maintaining a desired level of safety and/or efficacy.

Some autonomous vehicles, such as those capable of operation at autonomous driving levels 3 or 4 (as defined by the Society of Automotive Engineers (SAE) "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles"), include controls for a human operator. As such, conventional approaches to handling the above-described situations or scenarios have included handing control back to a passenger of the vehicle (e.g., a driver). However, for autonomous vehicles of autonomous driving level 5, there may not be a driver, or controls for a driver, so it may not be possible to pass control to a passenger of the autonomous vehicle (or a passenger may be unfit to drive). As another example, the autonomous vehicle may not include passengers (e.g., an empty robo-taxi), or may not be large enough to hold passengers, so control of the autonomous vehicle may be completely self-contained.

Some conventional approaches have provided a level of remote control of autonomous vehicles by using two-dimensional (2D) visualizations projected onto 2D displays, such as computer monitors or television displays. For example, the 2D display(s) at a remote operator's position may display image data (e.g., a video stream(s)) generated by a camera(s) of the autonomous vehicle, and the remote operator may control the autonomous vehicle using control components of a computer, such as a keyboard, mouse, joystick, and/or the like. However, using only a 2D visualization on a 2D display(s) may not provide enough immersion or information for the remote operator to control the autonomous vehicle as safely as desired. For example, the remote operator may not gain an intuitive or natural sense of the locations of other objects in the environment relative to the autonomous vehicle by looking at a 2D visualization on a 2D display(s). In addition, providing control of an autonomous vehicle from a remote location using generic computer components (e.g., keyboard, mouse, joystick, etc.) may not lend itself to natural control of the autonomous vehicle (e.g., as a steering wheel, brake, accelerator, and/or other vehicle components would).
For example, a correlation (or scale) between inputs to a keyboard (e.g., a left arrow selection) and control of the autonomous vehicle (e.g., turning to the left) may not be known, such that smooth operation (e.g., operation that makes the passengers feel comfortable) may not be achievable. Further, by providing only a 2D visualization, valuable information related to the state of the autonomous vehicle may not be presentable to the remote operator in an easily digestible format, such as the angle of the wheels, the current position of the steering wheel, and/or the like.

SUMMARY

Embodiments of the present disclosure relate to remote control of autonomous vehicles. More specifically, systems and methods are disclosed that relate to transferring at least partial control of an autonomous vehicle and/or another object to a remote control system to allow the remote control system to aid the autonomous vehicle and/or other object in navigating an environment. In contrast to conventional systems, such as those described above, the systems of the present disclosure leverage virtual reality (VR) technology to generate an immersive virtual environment for display to a remote operator. For example, a remote operator (e.g., a human, a robot, etc.) may have at least partial control of the vehicle or other object (e.g., a robot, an unmanned aerial vehicle (UAV), etc.), and may provide controls for the vehicle or other object using a remote control system. Sensor data from the vehicle or other object may be sent to the remote control system, and the remote control system may generate and render a virtual environment for display on a virtual reality headset of the remote control system.
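As a counterpart to the vehicle-side sketch above, and again for illustration only, the remote-system side described in this summary might be organized as follows; link, decoder, and vr_display are hypothetical stand-ins for a concrete transport, video codec, and VR runtime, none of which the patent specifies.

```python
# Minimal sketch of the remote-operator side described in the summary:
# decode the vehicle's sensor stream, present it in an immersive virtual
# environment, and send operator-selected waypoints back (claim 14).
# All objects here are hypothetical placeholders.
def operator_session(link, decoder, vr_display):
    while link.session_active():
        packet = link.receive()                 # encoded sensor data from the vehicle
        frame = decoder.decode(packet)          # reconstruct camera/LIDAR views
        vr_display.render_environment(frame)    # immersive 3D scene on the VR headset
        pick = vr_display.poll_waypoint_pick()  # operator points at a virtual location
        if pick is not None:
            link.send_waypoints([pick])         # vehicle converts and navigates toward it
```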