US-12617390-B2 - Path generation based on predicted actions
Abstract
Provided are methods and systems for semantic behavior filtering for prediction improvement. A method for operating an autonomous vehicle is provided. The method includes obtaining, by at least one processor, semantic image data associated with an environment in which an autonomous vehicle is operating. The method includes determining, by the at least one processor, at least one agent in the environment. The method includes determining a predicted action for the at least one agent. The method includes determining an agent predicted path for the at least one agent. The method includes determining a vehicle path of the autonomous vehicle. The method includes determining a predicted collision of the at least one agent and the autonomous vehicle. The method includes simulating actions to avoid the predicted collision. The method includes categorizing the predicted collision as a primary predicted collision based on the simulated actions.
Inventors
- Sammy Omari
- Kevin C. Gall
- Juraj Kabzan
- Hans Andersen
- Bence Cserna
- Scott Drew Pendleton
Assignees
- MOTIONAL AD LLC
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2022-07-22
Claims (20)
- 1 . A method for operating an autonomous vehicle, the method comprising: obtaining, by at least one processor, semantic image data associated with an environment in which an autonomous vehicle is operating; determining, by the at least one processor, a plurality of agents in the environment based on the semantic image data; identifying secondary agents within the plurality of agents, wherein identifying the secondary agents comprises identifying a first secondary agent based on a location of the first secondary agent relative to an object in the environment and identifying a second secondary agent based on a location of the second secondary agent relative to the first secondary agent; identifying a set of primary agents from the plurality of agents, wherein the set of primary agents is separate from the secondary agents; determining a plurality of predicted actions for at least one agent from the set of primary agents; identifying secondary predicted actions within the plurality of predicted actions based on a determined object type of the at least one agent; filtering the secondary predicted actions from the plurality of predicted actions to determine a set of primary predicted actions, wherein a particular action of the plurality of predicted actions is identified as a secondary predicted action for a first object type and is identified as a primary predicted action for a second object type; determining an agent predicted path for the at least one agent based on the set of primary predicted actions; determining a vehicle path of the autonomous vehicle; determining a predicted collision of the at least one agent and the autonomous vehicle based on the agent predicted path for the at least one agent and based on the vehicle path of the autonomous vehicle; simulating actions to avoid the predicted collision; categorizing the predicted collision as a primary predicted collision based on the simulated actions; and transmitting operation instructions associated with taking an action based on categorizing the predicted collision as the primary predicted collision.
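The object-type-dependent filtering recited in claim 1 (the same predicted action can be secondary for one object type but primary for another) can be sketched as follows. This is a hypothetical illustration only; the action names, the lookup-table design, and the function name are not taken from the patent.

```python
# Hypothetical per-object-type action filtering (names are illustrative).
# An action listed here is treated as secondary for that object type and
# filtered out; the same action remains primary for other object types.
SECONDARY_ACTIONS_BY_TYPE = {
    "vehicle": {"traverse_sidewalk", "sudden_reverse"},
    "pedestrian": {"highway_merge"},
}

def filter_primary_actions(object_type, predicted_actions):
    """Drop predicted actions deemed secondary for this object type."""
    secondary = SECONDARY_ACTIONS_BY_TYPE.get(object_type, set())
    return [a for a in predicted_actions if a not in secondary]
```

Under this sketch, "traverse_sidewalk" would be filtered for a vehicle but kept for a pedestrian, matching the claim's requirement that one action be classified differently for two object types.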
- 2 . The method of claim 1 , wherein the primary predicted collision is a predicted collision that cannot be avoided by the autonomous vehicle at a time of simulation.
- 3 . The method of claim 1 , wherein taking the action is further based on a predicted collision mitigation policy.
- 4 . The method of claim 1 , wherein taking the action comprises causing the autonomous vehicle to accelerate.
- 5 . The method of claim 1 , further comprising transforming the agent predicted path and the vehicle path to a vehicle path progress and time map.
- 6 . The method of claim 5 , wherein determining the predicted collision is based at least in part on the vehicle path progress and time map.
- 7 . The method of claim 1 , wherein simulating actions to avoid the predicted collision further comprises at least one of speeding up, slowing down, veering left, and veering right.
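Claims 2 and 7 together suggest the categorization logic: a collision is "primary" only if none of the simulated avoidance maneuvers (speeding up, slowing down, veering left, veering right) avoids it at the time of simulation. A minimal, hypothetical sketch of that decision rule (the function signature and labels are assumptions, not from the patent):

```python
def categorize_collision(collision_check, candidate_actions):
    """Categorize a predicted collision per the simulated avoidance actions.

    collision_check(action) -> True if the collision still occurs when the
    vehicle takes `action` in simulation. The collision is "primary" only
    when no candidate action avoids it (cf. claims 2 and 7).
    """
    for action in candidate_actions:
        if not collision_check(action):
            return "secondary"  # at least one maneuver avoids the collision
    return "primary"  # unavoidable at the time of simulation

# Candidate maneuvers enumerated in claim 7:
MANEUVERS = ["speed_up", "slow_down", "veer_left", "veer_right"]
```

The `collision_check` callback stands in for a full forward simulation of the vehicle and agent paths, which the patent leaves to the planning system.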
- 8 . The method of claim 1 , wherein the identifying the secondary agents further comprises identifying based at least in part on the semantic image data and contextual data associated with a scene corresponding to the semantic image data, wherein the contextual data comprises one or more of: an indication of a position of an agent relative to the autonomous vehicle; an indication of the position of the agent relative to a traffic light; an indication of the position of the agent relative to a traffic sign; an indication of a status of the traffic light; an indication of a type of the traffic sign; an indication of a distance of the agent relative to the autonomous vehicle; an indication of a mobility characteristic of the agent; an indication of a pose of the agent relative to the autonomous vehicle; an indication of a speed of the agent; or an indication of a position of the agent relative to an obstacle.
- 9 . The method of claim 8 , wherein an agent from the plurality of agents is identified as a secondary agent based at least in part on whether the contextual data indicates that one or more characteristics of the agent satisfies a threshold.
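The threshold test in claim 9 can be illustrated with one of the contextual characteristics listed in claim 8, distance to the ego vehicle. This is a hypothetical sketch; the key name, threshold value, and function name are illustrative assumptions, not values from the patent.

```python
def is_secondary_agent(contextual_data, distance_threshold=50.0):
    """Flag an agent as secondary when a contextual characteristic
    satisfies a threshold (cf. claim 9).

    Here the characteristic is the agent's distance from the ego vehicle
    in meters; the 50 m threshold is purely illustrative.
    """
    return contextual_data.get("distance_to_ego", 0.0) > distance_threshold
```

In practice any of the claim 8 characteristics (pose, speed, position relative to a traffic light, etc.) could feed an analogous threshold or rule, and the checks could be combined.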
- 10 . A system, comprising: at least one processor, and at least one non-transitory storage media storing instructions that, when executed by the at least one processor, cause the at least one processor to: obtain, by the at least one processor, semantic image data associated with an environment in which an autonomous vehicle is operating; determine, by the at least one processor, a plurality of agents in the environment based on the semantic image data; identify secondary agents within the plurality of agents, wherein identifying secondary agents comprises identifying a first secondary agent based on a location of the first secondary agent relative to an object in the environment and identifying a second secondary agent based on a location of the second secondary agent relative to the first secondary agent; identify a set of primary agents from the plurality of agents, wherein the set of primary agents is separate from the secondary agents; determine a plurality of predicted actions for at least one agent from the set of primary agents; identify secondary predicted actions within the plurality of predicted actions based on a determined object type of the at least one agent; filter the secondary predicted actions from the plurality of predicted actions to determine a set of primary predicted actions, wherein a particular action of the plurality of predicted actions is identified as a secondary predicted action for a first object type and is identified as a primary predicted action for a second object type; determine an agent predicted path for the at least one agent based on the set of primary predicted actions; determine a vehicle path of the autonomous vehicle; determine a predicted collision of the at least one agent and the autonomous vehicle based on the agent predicted path for the at least one agent and based on the vehicle path of the autonomous vehicle; simulate actions to avoid the predicted collision; categorize the predicted collision as a primary predicted collision based on the simulated actions; and transmit operation instructions associated with taking an action based on categorizing the predicted collision as the primary predicted collision.
- 11 . The system of claim 10 , wherein the primary predicted collision is a predicted collision that cannot be avoided by the autonomous vehicle at a time of simulation.
- 12 . The system of claim 10 , wherein taking the action is further based on a predicted collision mitigation policy.
- 13 . The system of claim 10 , wherein taking the action comprises causing the autonomous vehicle to accelerate.
- 14 . The system of claim 10 , wherein the instructions further cause the at least one processor to transform the agent predicted path and the vehicle path to a vehicle path progress and time map.
- 15 . The system of claim 14 , wherein determining the predicted collision is based at least in part on the vehicle path progress and time map.
- 16 . The system of claim 10 , wherein simulating actions to avoid the predicted collision further comprises at least one of speeding up, slowing down, veering left, and veering right.
- 17 . At least one non-transitory storage media storing instructions that, when executed by at least one processor, cause the at least one processor to perform operations comprising: obtaining, by the at least one processor, semantic image data associated with an environment in which an autonomous vehicle is operating; determining, by the at least one processor, a plurality of agents in the environment based on the semantic image data; identifying secondary agents within the plurality of agents, wherein identifying the secondary agents comprises identifying a first secondary agent based on a location of the first secondary agent relative to an object in the environment and identifying a second secondary agent based on a location of the second secondary agent relative to the first secondary agent; identifying a set of primary agents from the plurality of agents, wherein the set of primary agents is separate from the secondary agents; determining a plurality of predicted actions for at least one agent from the set of primary agents; identifying secondary predicted actions within the plurality of predicted actions based on a determined object type of the at least one agent; filtering the secondary predicted actions from the plurality of predicted actions to determine a set of primary predicted actions, wherein a particular action of the plurality of predicted actions is identified as a secondary predicted action for a first object type and is identified as a primary predicted action for a second object type; determining an agent predicted path for the at least one agent based on the set of primary predicted actions; determining a vehicle path of the autonomous vehicle; determining a predicted collision of the at least one agent and the autonomous vehicle based on the agent predicted path for the at least one agent and based on the vehicle path of the autonomous vehicle; simulating actions to avoid the predicted collision; categorizing the predicted collision as a primary predicted collision based on the simulated actions; and transmitting operation instructions associated with taking an action based on categorizing the predicted collision as the primary predicted collision.
- 18 . The at least one non-transitory storage media of claim 17 , wherein taking the action is further based on a predicted collision mitigation policy, and wherein at least one predicted collision mitigation policy includes causing the autonomous vehicle to accelerate.
- 19 . The at least one non-transitory storage media of claim 17 , wherein the operations further comprise transforming the agent predicted path and the vehicle path to a vehicle path progress and time map.
- 20 . The at least one non-transitory storage media of claim 19 , wherein determining the predicted collision is based at least in part on the vehicle path progress and time map.
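Claims 5-6, 14-15, and 19-20 describe checking for a predicted collision on a "vehicle path progress and time map", i.e., a representation of progress along the vehicle path against time. A minimal sketch of such a check is below; the discretized-dictionary representation, the footprint length, and all names are illustrative assumptions, not details from the patent.

```python
def predicted_collision(ego_progress, agent_occupancy, ego_length=4.5):
    """Look for overlap on a path-progress/time map (cf. claims 5-6).

    ego_progress:    {t: s} ego progress s (meters along its path) at time t
    agent_occupancy: {t: (s_min, s_max)} interval of the ego path that the
                     agent's predicted path is expected to occupy at time t
    Returns the first timestep at which the ego footprint overlaps the
    agent's occupied interval, or None if no overlap is predicted.
    """
    for t, s in ego_progress.items():
        if t in agent_occupancy:
            s_min, s_max = agent_occupancy[t]
            # overlap test between [s, s + ego_length] and [s_min, s_max]
            if s_min <= s + ego_length and s_max >= s:
                return t
    return None
```

For example, an ego vehicle reaching progress 10 m at t=2 while an agent is predicted to occupy the 9-12 m stretch of the ego path at t=2 would yield a predicted collision at t=2; the planning system would then simulate avoidance actions as in claims 7 and 16.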
Description
BACKGROUND

Autonomous vehicles can use a number of methods and systems for determining a trajectory for the autonomous vehicle. However, these methods and systems can require high computational power, which can lead to inefficient computation. Further, the methods and systems can slow the reaction time of the autonomous vehicle, which can lead to real-world complications.

BRIEF DESCRIPTION OF THE FIGURES

- FIG. 1 is an example environment in which a vehicle including one or more components of an autonomous system can be implemented;
- FIG. 2 is a diagram of one or more systems of a vehicle including an autonomous system;
- FIG. 3 is a diagram of components of one or more devices and/or one or more systems of FIGS. 1 and 2;
- FIG. 4A is a diagram of certain components of an autonomous system;
- FIG. 4B is a diagram of an implementation of a neural network;
- FIGS. 4C and 4D are diagrams illustrating example operation of a CNN;
- FIGS. 5A-5B are operation flow diagrams illustrating example filtering operations of the planning system;
- FIG. 6 is a flow diagram illustrating an example of a routine implemented by a planning system for controlling a vehicle based on a path generated using primary agents;
- FIG. 7 is a flow diagram illustrating an example of a routine implemented by a planning system for controlling a vehicle based on a path generated using primary actions;
- FIG. 8 is a position-over-time representation of an implementation of a process for semantic behavior filtering for prediction improvement in the case of an unavoidable collision;
- FIG. 9 is a flow diagram illustrating an example of a routine implemented by a planning system for semantic behavior filtering for prediction improvement in the case of an unavoidable collision.

DETAILED DESCRIPTION

In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure for the purposes of explanation.
It will be apparent, however, that the embodiments described by the present disclosure can be practiced without these specific details. In some instances, well-known structures and devices are illustrated in block diagram form in order to avoid unnecessarily obscuring aspects of the present disclosure. Specific arrangements or orderings of schematic elements, such as those representing systems, devices, modules, instruction blocks, data elements, and/or the like are illustrated in the drawings for ease of description. However, it will be understood by those skilled in the art that the specific ordering or arrangement of the schematic elements in the drawings is not meant to imply that a particular order or sequence of processing, or separation of processes, is required unless explicitly described as such. Further, the inclusion of a schematic element in a drawing is not meant to imply that such element is required in all embodiments or that the features represented by such element may not be included in or combined with other elements in some embodiments unless explicitly described as such. Further, where connecting elements such as solid or dashed lines or arrows are used in the drawings to illustrate a connection, relationship, or association between or among two or more other schematic elements, the absence of any such connecting elements is not meant to imply that no connection, relationship, or association can exist. In other words, some connections, relationships, or associations between elements are not illustrated in the drawings so as not to obscure the disclosure. In addition, for ease of illustration, a single connecting element can be used to represent multiple connections, relationships or associations between elements. 
For example, where a connecting element represents communication of signals, data, or instructions (e.g., “software instructions”), it should be understood by those skilled in the art that such element can represent one or multiple signal paths (e.g., a bus), as may be needed, to effect the communication.

Although the terms first, second, third, and/or the like are used to describe various elements, these elements should not be limited by these terms. The terms first, second, third, and/or the like are used only to distinguish one element from another. For example, a first contact could be termed a second contact and, similarly, a second contact could be termed a first contact without departing from the scope of the described embodiments. The first contact and the second contact are both contacts, but they are not the same contact.

The terminology used in the description of the various described embodiments herein is included for the purpose of describing particular embodiments only and is not intended to be limiting. As used in the description of the various described embodiments and the appended claims, the singular forms “a,” “an” and “the” are intended to include the plural forms as well and can be used interchangeably with “one or more” or “at least one,” unless the context clearly dictates otherwise.