EP-4388509-B1 - DETECTED OBJECT PATH PREDICTION FOR VISION-BASED SYSTEMS
Inventors
- VASUDEVAN, Nanda Kishore
- CHHEDA, Dhiral
Dates
- Publication Date
- 20260506
- Application Date
- 20220819
Claims (15)
- A system for managing vision systems (200) in vehicles (102), the system comprising: one or more computing systems including processing devices and memory, that execute computer-executable instructions, for implementing a vision system (200) processing component (212) operative to: obtain first ground truth label data associated with vision data collected from one or more vision systems (200), wherein the first ground truth label data corresponds to attributes of travel surfaces including at least one of road edges ground truth labels, lane line ground truth labels, or road markings; obtain second ground truth label data associated with vision data collected from the one or more vision systems (200), wherein the second ground truth label data corresponds to attributes of one or more dynamic objects (400, 500); process the first ground truth label data and second ground truth label data to form a plurality of predicted paths of travel for the one or more dynamic objects (400, 500) by applying a kinetics-based model to the attributes of at least one of the travel surfaces or the attributes of the one or more dynamic objects (400, 500), wherein each predicted path of travel is associated with a confidence value; process the plurality of predicted paths of travel based on at least one additional ground truth label to modify at least one predicted path of travel of the plurality of predicted paths of travel, the modification based on application of an updated kinetics-based model to additional attributes of additional detected objects; and store the plurality of predicted paths of travel and associated confidence values.
- The system as recited in Claim 1, wherein the vision system (200) processing component (212) processes the obtained first and second ground truth label data to form a plurality of predicted paths of travel based on selecting potential paths of travel exceeding a minimal confidence value threshold.
- The system as recited in Claim 1, wherein the first and second ground truth label data corresponds to one or more objects detected within a horizon of the captured video data.
- The system as recited in Claim 3, wherein the first and second ground truth label data corresponds to one or more objects detected beyond a current defined location of the vehicle (102).
- The system as recited in Claim 1, wherein the attributes of one or more dynamic objects (400, 500) corresponds to at least one of yaw, velocity or acceleration of the dynamic object (400, 500).
- The system as recited in Claim 1, wherein the vision system (200) processing component (212) processes the plurality of predicted paths of travel based on at least one additional ground truth label by identifying at least one static object that may interfere with a predicted path of travel.
- The system as recited in Claim 1, wherein a sum of confidence values associated with two or more of the plurality of predicted paths of travel exceeds 100%.
- The system as recited in Claim 1, wherein a sum of confidence values associated with the plurality of predicted paths of travel does not exceed 100%.
- The system as recited in Claim 1, wherein the vision system (200) processing component (212) processes the plurality of predicted paths of travel based on a modeled feasibility cone for a dynamic object (400, 500) of the one or more dynamic objects (400, 500).
- A method for managing vision systems (200) in vehicles (102), the method comprising: obtaining first ground truth label data associated with vision data collected from one or more vision systems (200), wherein the first ground truth label data corresponds to attributes of travel surfaces; obtaining second ground truth label data associated with vision data collected from the one or more vision systems (200), wherein the second ground truth label data corresponds to attributes of one or more dynamic objects (400, 500); processing the first and second ground truth label data to form a plurality of predicted paths of travel of the one or more dynamic objects (400, 500) by applying a kinetics-based model to the attributes of at least one of the travel surfaces or dynamic object (400, 500), wherein each individual predicted path of travel is associated with a confidence value; and storing the plurality of predicted paths of travel and associated confidence values.
- The method as recited in Claim 10, wherein forming the plurality of predicted paths of travel is based on selecting potential paths of travel exceeding a minimal confidence value threshold.
- The method as recited in Claim 10, wherein the first and second ground truth label data corresponds to one or more objects detected within a horizon of the captured video data.
- The method as recited in Claim 10, wherein the first ground truth label data corresponds to attributes of travel surfaces including at least one of road edges ground truth labels, lane line ground truth labels, or road markings.
- The method as recited in Claim 10, wherein the attributes of one or more dynamic objects (400, 500) corresponds to at least one of yaw, velocity or acceleration of the dynamic object (400, 500).
- The method as recited in Claim 10 further comprising processing the plurality of predicted paths of travel based on at least one additional ground truth label.
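As a non-limiting illustration of the kinetics-based model recited in the claims above, the following sketch propagates a detected dynamic object's state (position, yaw, velocity, acceleration) under several candidate yaw rates, yielding a plurality of predicted paths of travel, each associated with a confidence value. The candidate yaw rates, their prior confidences, and the function name `predict_paths` are assumptions made for illustration only and are not taken from the claims.

```python
import math

def predict_paths(x, y, yaw, velocity, acceleration, dt=0.1, steps=20):
    """Propagate a detected object forward under several assumed yaw rates,
    returning candidate predicted paths with heuristic confidence values.

    The (yaw_rate, confidence) candidates below are illustrative values,
    not part of the claimed subject matter.
    """
    candidates = [(-0.2, 0.2), (0.0, 0.6), (0.2, 0.2)]
    paths = []
    for yaw_rate, confidence in candidates:
        px, py, heading, v = x, y, yaw, velocity
        points = []
        for _ in range(steps):
            # Simple kinematic update: advance along the current heading,
            # then update heading and speed from yaw rate and acceleration.
            px += v * math.cos(heading) * dt
            py += v * math.sin(heading) * dt
            heading += yaw_rate * dt
            v = max(0.0, v + acceleration * dt)
            points.append((px, py))
        paths.append({"points": points, "confidence": confidence})
    return paths
```

Note that, consistent with the claims, the confidence values of the plurality of paths may or may not be required to sum to 100%; in this sketch they happen to sum to 1.0.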
Description
BACKGROUND
Generally described, computing devices and communication networks can be utilized to exchange data and/or information. In a common application, a computing device can request content from another computing device via the communication network. For example, a computing device can collect various data and utilize a software application to exchange content with a server computing device via the network (e.g., the Internet).
Generally described, a variety of vehicles, such as electric vehicles, combustion engine vehicles, hybrid vehicles, etc., can be configured with various sensors and components to facilitate operation of the vehicle or management of one or more systems included in the vehicle. In certain scenarios, a vehicle owner or vehicle user may wish to utilize sensor-based systems to facilitate the operation of the vehicle. For example, vehicles can often include hardware and software functionality that facilitates location services or can access computing devices that provide location services. In another example, vehicles can also include navigation systems or access navigation components that can generate navigational or directional information provided to vehicle occupants and users. In still further examples, vehicles can include vision systems to facilitate navigational and location services, safety services, or other operational services/components.
US 2019/096256 A1 discloses vehicle systems for predicting a trajectory of an object proximate to a vehicle. WO 2020/245654 A1 discloses systems and methods for vehicle navigation. In one implementation, at least one processor may receive, from a camera, at least one captured image representative of features in an environment of the vehicle. The processor may identify an intersection and a pedestrian in a vicinity of the intersection represented in the image.
The processor may determine a navigational action for the vehicle relative to the intersection based on routing information for the vehicle; and determine a predicted path for the vehicle relative to the intersection based on the determined navigational action and a predicted path for the pedestrian based on analysis of the image. The processor may further determine whether the vehicle is projected to collide with the pedestrian based on the projected paths; and, in response, cause a system associated with the vehicle to implement a collision mitigation action. The invention is defined in the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
This disclosure is described herein with reference to drawings of certain embodiments, which are intended to illustrate, but not to limit, the present disclosure. It is to be understood that the accompanying drawings, which are incorporated in and constitute a part of this specification, are for the purpose of illustrating concepts disclosed herein and may not be to scale.
FIG. 1A illustrates an environment that corresponds to vehicles in accordance with one or more aspects of the present application;
FIG. 1B depicts an illustrative vision system for a vehicle in accordance with one or more aspects of the present application;
FIG. 2 depicts an illustrative architecture for implementing a vision information processing component in accordance with aspects of the present application;
FIG. 3 is a flow diagram illustrative of a simulated model content generation routine implemented by a simulated content service in accordance with illustrative embodiments;
FIGS. 4A and 4B illustrate representations of potential paths of travel for a detected object in accordance with aspects of the present application; and
FIGS. 5A and 5B illustrate two different embodiments of a model of a feasibility cone for a detected object in accordance with aspects of the present application.
DETAILED DESCRIPTION
Generally described, one or more aspects of the present disclosure relate to the configuration and implementation of vision systems in vehicles. By way of illustrative example, aspects of the present application relate to the configuration and training of machine learned algorithms used in vehicles relying solely on vision systems for various operational functions. Illustratively, such vision-only systems are in contrast to vehicles that may combine vision-based systems with one or more additional sensor systems, such as radar-based systems, LIDAR-based systems, SONAR-based systems, and the like. Vision-only systems can be configured with machine learned algorithms that process inputs solely from vision systems that can include a plurality of cameras mounted on the vehicle. The machine learned algorithm can generate outputs identifying objects and specifying characteristics/attributes of the identified objects, such as position, velocity, and acceleration measured relative to the vehicle. Additionally, for dynamic objects that are traveling, one or more aspects relate to providing characterizations of potential paths of travel for the detected objects relative to the vehicle.
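As a non-limiting sketch of the feasibility cone referenced in the claims and depicted in FIGS. 5A and 5B, one simple model opens a cone from the detected object's position along its current heading, with a half-angle bounded by the heading change the object could plausibly achieve over a prediction horizon. The half-angle formula (maximum yaw rate times horizon, capped at 90 degrees) and the function names are illustrative assumptions, not the embodiments disclosed in the figures.

```python
import math

def feasibility_cone_half_angle(max_yaw_rate, horizon):
    """Half-angle (radians) of an assumed feasibility cone: the largest
    heading change achievable over the horizon, capped at 90 degrees."""
    return min(max_yaw_rate * horizon, math.pi / 2)

def point_in_cone(obj_x, obj_y, heading, half_angle, px, py):
    """True if point (px, py) lies within the cone opening from the
    object's position along its current heading."""
    bearing = math.atan2(py - obj_y, px - obj_x)
    # Wrap the heading difference into [-pi, pi] before comparing.
    diff = (bearing - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= half_angle
```

Such a cone could be used, consistent with the claims, to discard candidate predicted paths whose endpoints fall outside the region kinematically reachable by the dynamic object.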