CN-122029455-A - Dynamic occupancy grid architecture
Abstract
Techniques are provided for tracking objects proximate to autonomous or semi-autonomous vehicles using a dynamic occupancy grid (DoG). An example method for generating an object trajectory list in a vehicle includes obtaining sensor information from one or more sensors on the vehicle, determining a first object dataset based at least in part on the sensor information and an object recognition process, generating, based at least in part on the sensor information, a dynamic grid representing an environment proximate to the vehicle, determining a second object dataset based at least in part on the dynamic grid, and outputting the object trajectory list based on a fusion of the first object dataset and the second object dataset.
Inventors
- M.P. Johnson Wilson
- A. Josh
- R. D. Gowaika
- V. Slobodian Yuk
- V. Apayadanabalan
Assignees
- Qualcomm Incorporated
Dates
- Publication Date
- 20260512
- Application Date
- 20240910
- Priority Date
- 20240909
Claims (20)
- 1. A method for generating a list of object trajectories in a vehicle, the method comprising: obtaining sensor information from one or more sensors on the vehicle; determining a first object data set based at least in part on the sensor information and an object recognition process; generating, based at least in part on the sensor information, a dynamic grid representing an environment proximate the vehicle; determining a second object data set based at least in part on the dynamic grid; and outputting the object trajectory list based on a fusion of the first object data set and the second object data set.
- 2. The method of claim 1, wherein the one or more sensors comprise at least a camera and a radar sensor.
- 3. The method of claim 1, wherein determining the second object data set comprises identifying clusters of dynamic grid cells in the dynamic grid.
- 4. The method of claim 3, wherein the clusters of dynamic grid cells have similar speeds.
- 5. The method of claim 3, wherein the clusters of dynamic grid cells have similar object classifications.
- 6. The method of claim 1, further comprising generating an occlusion grid comprising occluded grid cells, wherein determining the second object dataset is based at least in part on the occlusion grid.
- 7. An apparatus comprising: at least one memory; one or more sensors; and at least one processor communicatively coupled to the at least one memory and the one or more sensors and configured to: obtain sensor information from the one or more sensors; determine a first object data set based at least in part on the sensor information and an object recognition process; generate a dynamic grid based at least in part on the sensor information; determine a second object data set based at least in part on the dynamic grid; and output an object trajectory list based on a fusion of the first object data set and the second object data set.
- 8. The apparatus of claim 7, wherein the one or more sensors comprise at least a camera and a radar sensor.
- 9. The apparatus of claim 7, wherein the at least one processor is further configured to identify clusters of dynamic grid cells in the dynamic grid.
- 10. The apparatus of claim 9, wherein clusters of the dynamic grid cells have similar speeds.
- 11. The apparatus of claim 9, wherein clusters of the dynamic grid cells have similar object classifications.
- 12. The apparatus of claim 7, wherein the at least one processor is further configured to generate an occlusion grid comprising occluded grid cells and to determine the second object data set based at least in part on the occlusion grid.
- 13. The apparatus of claim 7, wherein the at least one memory includes one or more machine learning models, and the at least one processor is further configured to output an indication of an identified object based at least in part on the sensor information and the one or more machine learning models.
- 14. The apparatus of claim 13, wherein the at least one processor is further configured to output an Active Learning (AL) trigger based at least in part on a comparison of the first object data set and the second object data set, and to retrain the one or more machine learning models in response to the AL trigger.
- 15. The apparatus of claim 7, wherein the object trajectory list includes shape information representing a detected object.
- 16. The apparatus of claim 7, wherein the object trajectory list includes at least a position and a velocity of a detected object.
- 17. The apparatus of claim 7, wherein the at least one processor is further configured to receive map information and to generate the dynamic grid based at least in part on the map information.
- 18. The apparatus of claim 7, wherein the at least one processor is further configured to receive remote sensor information via cellular vehicle-to-everything (C-V2X) network communication and to generate the dynamic grid based at least in part on the remote sensor information.
- 19. The apparatus of claim 18, wherein the remote sensor information is provided by a roadside unit (RSU).
- 20. An apparatus for generating a list of object trajectories in a vehicle, the apparatus comprising: means for obtaining sensor information from one or more sensors on the vehicle; means for determining a first object data set based at least in part on the sensor information and an object recognition process; means for generating, based at least in part on the sensor information, a dynamic grid representing an environment proximate the vehicle; means for determining a second object data set based at least in part on the dynamic grid; and means for outputting the object trajectory list based on a fusion of the first object data set and the second object data set.
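Claims 3 through 5 describe identifying clusters of dynamic grid cells having similar speeds or object classifications. A minimal illustrative sketch of one way such clustering could be done, assuming a 2-D grid keyed by (row, column) cell indices and simple 4-connected region growing; the function name, data layout, and speed tolerance are assumptions for illustration and are not taken from the claims:

```python
def cluster_dynamic_cells(cells, speed_tol=1.0):
    """Group dynamic grid cells that are grid-adjacent (4-connectivity)
    and whose speeds differ by at most `speed_tol`.

    `cells` maps (row, col) -> speed in m/s; returns a list of clusters,
    each cluster being a list of cell coordinates.
    """
    clusters, seen = [], set()
    for start in cells:
        if start in seen:
            continue
        seen.add(start)
        cluster, stack = [start], [start]
        while stack:
            r, c = stack.pop()
            # Grow the cluster into neighbors with a similar speed.
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (nb in cells and nb not in seen
                        and abs(cells[nb] - cells[(r, c)]) <= speed_tol):
                    seen.add(nb)
                    cluster.append(nb)
                    stack.append(nb)
        clusters.append(cluster)
    return clusters
```

A cluster produced this way can then stand in for one moving object when the grid-derived object data set is built; clustering by object classification (claim 5) would follow the same pattern with a class-equality test in place of the speed tolerance.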
Description
Dynamic occupancy grid architecture

Cross-Reference to Related Applications

The present application claims the benefit of U.S. patent application Ser. No. 18/827,951, entitled "DYNAMIC OCCUPANCY GRID ARCHITECTURE," filed September 9, 2024, which claims the benefit of U.S. provisional application Ser. No. 63/592,596, entitled "DYNAMIC OCCUPANCY GRID ARCHITECTURE," filed October 24, 2023, which is assigned to the assignee of the present application, both of which are hereby incorporated by reference in their entirety for all purposes.

Background

Vehicles are becoming more intelligent as the industry deploys increasingly sophisticated self-driving technologies, which are capable of operating a vehicle with little or no human input and are therefore semi-autonomous or autonomous. Autonomous and semi-autonomous vehicles may be able to detect information about their location and surrounding environment (e.g., using ultrasound, radar, lidar, SPS (satellite positioning system), odometry, and/or one or more sensors such as accelerometers, cameras, etc.). Autonomous and semi-autonomous vehicles typically include a control system to interpret information about the environment in which the vehicle is located, identify hazards, and determine a navigation path to follow. A driver assistance system may mitigate driving risk for a driver of a self-vehicle (i.e., a vehicle configured to perceive its environment) and/or other road users. The driver assistance system may include one or more active devices and/or one or more passive devices that may be used to determine the environment of the self-vehicle and, for semi-autonomous vehicles, may inform the driver of situations that the driver may be able to address. The driver assistance system may be configured to control various aspects of driving safety and/or driver monitoring.
For example, the driver assistance system may control the speed of the self-vehicle to maintain at least a desired separation (in distance or time) between the self-vehicle and another vehicle (e.g., as part of an active cruise control system). The driver assistance system may monitor the surroundings of the self-vehicle, for example, to maintain situational awareness. This situational awareness may be used to inform the driver of a problem, e.g., that another vehicle is in the driver's blind spot or on a collision path with the self-vehicle. The situational awareness may include information about the self-vehicle (e.g., speed, location, heading) and/or other vehicles or objects (e.g., location, speed, heading, size, object type, etc.).

Disclosure of Invention

An example method for generating an object trajectory list in a vehicle according to the present disclosure includes obtaining sensor information from one or more sensors on the vehicle, determining a first object data set based at least in part on the sensor information and an object recognition process, generating, based at least in part on the sensor information, a dynamic grid representing an environment proximate the vehicle, determining a second object data set based at least in part on the dynamic grid, and outputting the object trajectory list based on a fusion of the first object data set and the second object data set.
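The example method above is a linear pipeline: recognition-based detection and grid-based detection each produce an object data set, and the two sets are fused into the trajectory list. A minimal sketch of that flow, with the four stages passed in as callables since the disclosure does not fix their implementations (all names here are illustrative):

```python
def generate_object_trajectory_list(sensor_info,
                                    recognize_objects,
                                    build_dynamic_grid,
                                    extract_grid_objects,
                                    fuse):
    """Illustrative pipeline for the example method. The callables stand in
    for the object-recognition process, dynamic-grid generation, grid-based
    object extraction, and fusion stages described in the disclosure."""
    first_data_set = recognize_objects(sensor_info)   # e.g. camera/radar detections
    dynamic_grid = build_dynamic_grid(sensor_info)    # grid over the nearby environment
    second_data_set = extract_grid_objects(dynamic_grid)
    return fuse(first_data_set, second_data_set)      # the object trajectory list
```

Keeping the stages as separate callables mirrors the claim structure, where each step ("determining", "generating", "outputting") is an independently recited element.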
An example apparatus according to the present disclosure includes at least one memory, one or more sensors, and at least one processor communicatively coupled to the at least one memory and the one or more sensors and configured to obtain sensor information from the one or more sensors, determine a first object data set based at least in part on the sensor information and an object recognition process, generate a dynamic grid based at least in part on the sensor information, determine a second object data set based at least in part on the dynamic grid, and output an object trajectory list based on a fusion of the first object data set and the second object data set.

Items and/or techniques described herein may provide one or more of the following capabilities, as well as other capabilities not mentioned. Autonomous or semi-autonomous vehicles may include one or more sensors, such as cameras, radars, and lidars. Low-level perception operations may be performed on information obtained by the sensors. A dynamic occupancy grid may be generated based on input received from the sensors. Clusters of dynamic cells in the dynamic occupancy grid may be identified. The results of the low-level perception operations and the identified dynamic cell clusters may be fused to generate an object trajectory list. The dynamic occupancy grid may also be configured to generate a static object list. The object trajectory list and the static object list may be provided to a perception planning module in the vehicle.

Fusion of low-level perception detection results with dynamic grid detection techniques
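The fusion step combines the recognition-based and grid-based object data sets into one trajectory list. The disclosure does not specify a fusion algorithm; the following is a minimal sketch assuming each data set is a list of dicts with a 2-D position and velocity, with association by nearest position inside a distance gate. The function name, dict keys, gate value, and averaging rule are all illustrative assumptions:

```python
import math

def fuse_object_datasets(first, second, gate=2.0):
    """Associate objects from the two data sets by nearest 2-D position
    (within `gate` metres) and emit one fused entry per object.
    Unmatched entries from either data set are passed through unchanged."""
    fused, used = [], set()
    for obj in first:
        best, best_d = None, gate
        for idx, cand in enumerate(second):
            if idx in used:
                continue
            d = math.dist(obj["pos"], cand["pos"])
            if d < best_d:
                best, best_d = idx, d
        if best is not None:
            # Matched: average positions, prefer the grid-derived velocity.
            used.add(best)
            cand = second[best]
            fused.append({
                "pos": tuple((a + b) / 2 for a, b in zip(obj["pos"], cand["pos"])),
                "vel": cand.get("vel", obj.get("vel")),
            })
        else:
            fused.append(dict(obj))
    fused.extend(dict(c) for i, c in enumerate(second) if i not in used)
    return fused
```

Passing unmatched entries through from both inputs reflects the complementary roles described above: the recognition path may detect objects the grid misses, and grid clusters may capture objects the recognizer has not classified.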