CN-122003620-A - Automated driving SOTIF via signal representation
Abstract
Techniques are provided for detecting an object proximate to a vehicle using a plurality of signal paths. An example method for generating an object representation having a plurality of signal paths includes obtaining image information from at least one camera module disposed on a vehicle, obtaining target information from at least one radar module disposed on the vehicle, generating a first detection representation having a first signal path based on the image information and the target information, generating a second detection representation having a second signal path based on the image information and the target information, wherein the second signal path is different from the first signal path, and outputting the first detection representation and the second detection representation.
Inventors
- M.P. Johnson Wilson
- A.K. Sadek
- A. Josh
- J. Poplowski
- V. Apayadanabalan
Assignees
- Qualcomm Incorporated
Dates
- Publication Date: 2026-05-08
- Application Date: 2024-09-06
- Priority Date: 2024-09-05
Claims (20)
- 1. An apparatus, the apparatus comprising: at least one memory; at least one camera module; at least one radar module; and at least one processor communicatively coupled to the at least one memory, the at least one camera module, and the at least one radar module and configured to: obtain image information from the at least one camera module disposed on a vehicle; obtain target information from the at least one radar module disposed on the vehicle; generate a first detection representation having a first signal path based on the image information and the target information; generate a second detection representation having a second signal path based on the image information and the target information, wherein the second signal path is different from the first signal path; and output the first detection representation and the second detection representation.
- 2. The apparatus of claim 1, wherein the first detection representation comprises a parametric representation of a target object and the second detection representation comprises a non-parametric representation of the target object.
- 3. The apparatus of claim 2, wherein the parametric representation of the target object comprises coordinate information of the target object and size information of the target object.
- 4. The apparatus of claim 2, wherein the non-parametric representation of the target object is an occupancy map.
- 5. The apparatus of claim 2, wherein the first signal path comprises at least a first machine learning model and the at least one processor is further configured to generate the parametric representation based at least in part on the image information and the target information, and the second signal path comprises at least a second machine learning model and the at least one processor is further configured to generate the non-parametric representation based at least in part on the image information and the target information.
- 6. The apparatus of claim 5, wherein the first machine learning model and the second machine learning model utilize a common backbone network.
- 7. The apparatus of claim 5, wherein the first machine learning model utilizes at least a first backbone network and the second machine learning model utilizes at least a second backbone network.
- 8. The apparatus of claim 1, wherein the at least one processor is further configured to: receive the first detection representation via the first signal path and the second detection representation via the second signal path; generate one or more object lists based at least in part on the first detection representation and the second detection representation; and output the one or more object lists.
- 9. The apparatus of claim 8, wherein the one or more object lists comprise an object trajectory list indicating a position and a velocity of an object.
- 10. The apparatus of claim 9, wherein the object trajectory list indicates a shape of the object.
- 11. The apparatus of claim 9, wherein the one or more object lists comprise static object information.
- 12. The apparatus of claim 8, wherein the at least one processor is further configured to output the one or more object lists to an environment model.
- 13. The apparatus of claim 8, further comprising at least one lidar module disposed on the vehicle, wherein the at least one processor is further configured to: receive further target information from the at least one lidar module via a secondary path separate from the first signal path and the second signal path; generate object detection information based on the further target information; and output the object detection information.
- 14. The apparatus of claim 13, wherein the at least one processor is further configured to: generate the object detection information based on the target information and the image information; and output the object detection information.
- 15. A method for generating an object representation having a plurality of signal paths, the method comprising: obtaining image information from at least one camera module disposed on a vehicle; obtaining target information from at least one radar module disposed on the vehicle; generating a first detection representation having a first signal path based on the image information and the target information; generating a second detection representation having a second signal path based on the image information and the target information, wherein the second signal path is different from the first signal path; and outputting the first detection representation and the second detection representation.
- 16. The method of claim 15, wherein the first detection representation comprises a parametric representation of a target object and the second detection representation comprises a non-parametric representation of the target object.
- 17. The method of claim 16, wherein the parametric representation of the target object includes coordinate information of the target object and size information of the target object.
- 18. The method of claim 16, wherein the non-parametric representation of the target object is an occupancy map.
- 19. The method of claim 16, wherein the first signal path comprises at least a first machine learning model configured to generate the parametric representation based at least in part on the image information and the target information, and the second signal path comprises at least a second machine learning model configured to generate the non-parametric representation based at least in part on the image information and the target information.
- 20. An apparatus for generating an object representation having a plurality of signal paths, the apparatus comprising: means for obtaining image information from at least one camera module disposed on a vehicle; means for obtaining target information from at least one radar module disposed on the vehicle; means for generating a first detection representation having a first signal path based on the image information and the target information; means for generating a second detection representation having a second signal path based on the image information and the target information, wherein the second signal path is different from the first signal path; and means for outputting the first detection representation and the second detection representation.
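As an illustrative sketch (not part of the claims), the two detection representations recited in claims 2 through 4 — a parametric representation carrying coordinate and size information, and a non-parametric occupancy map — might be modeled as follows. All names, the grid resolution, and the ego-centered coordinate frame are assumptions for illustration only:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class ParametricDetection:
    """Parametric representation of a target object (cf. claims 2-3):
    coordinate information plus size information."""
    x: float       # longitudinal position (m), hypothetical ego frame
    y: float       # lateral position (m)
    length: float  # object extent along x (m)
    width: float   # object extent along y (m)


def rasterize_to_occupancy(detections, grid_size=100, cell_m=0.5):
    """Non-parametric representation as an occupancy map (cf. claim 4):
    mark the grid cells covered by each detection's footprint as occupied.
    The grid is centered on the ego vehicle; resolution is an assumption."""
    grid = np.zeros((grid_size, grid_size), dtype=np.uint8)
    half = grid_size // 2
    for d in detections:
        # Convert the metric footprint corners to grid indices.
        i0 = int(half + (d.x - d.length / 2) / cell_m)
        i1 = int(half + (d.x + d.length / 2) / cell_m)
        j0 = int(half + (d.y - d.width / 2) / cell_m)
        j1 = int(half + (d.y + d.width / 2) / cell_m)
        # Clamp to the grid bounds and mark the cells occupied.
        grid[max(i0, 0):min(i1 + 1, grid_size),
             max(j0, 0):min(j1 + 1, grid_size)] = 1
    return grid
```

The two forms are complementary: the parametric form is compact and object-centric, while the occupancy map makes no assumption about object count or shape, which is one motivation for carrying both along separate signal paths.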
Description
Automated driving SOTIF via signal representation
Cross Reference to Related Applications
The present application claims the benefit of U.S. patent application Ser. No. 18/825,645, entitled "AUTOMATED DRIVING SOTIF VIA SIGNAL REPRESENTATION", filed on September 5, 2024, which claims the benefit of U.S. provisional application Ser. No. 63/590,899, entitled "AUTOMATED DRIVING SOTIF VIA SIGNAL REPRESENTATION", filed on October 17, 2023, both of which are assigned to the assignee of the present application and are hereby incorporated by reference in their entirety for all purposes.
Background
Vehicles are becoming more intelligent as the industry moves to deploy increasingly sophisticated self-driving technologies that can operate a vehicle with little or no human input and are therefore semi-autonomous or autonomous. Autonomous and semi-autonomous vehicles may be able to detect information about their location and surrounding environment (e.g., using ultrasound, radar, lidar, a satellite positioning system (SPS), and/or odometry, and/or one or more sensors such as accelerometers, cameras, etc.). Autonomous and semi-autonomous vehicles typically include a control system that interprets information about the environment in which the vehicle is located to identify hazards and determine a navigation path to follow. The design of an autonomous vehicle may follow industry standards that guide the verification and validation measures required to achieve safety of the intended functionality (SOTIF). SOTIF is generally defined as the absence of unreasonable risk due to hazards caused by insufficiencies of the intended functionality or by reasonably foreseeable misuse by persons. Industry standards such as the International Organization for Standardization (ISO) 21448 standard may provide additional requirements for implementing SOTIF in autonomous and semi-autonomous vehicles.
Disclosure of Invention
An example method for generating an object representation having a plurality of signal paths according to the present disclosure includes obtaining image information from at least one camera module disposed on a vehicle, obtaining target information from at least one radar module disposed on the vehicle, generating a first detection representation having a first signal path based on the image information and the target information, generating a second detection representation having a second signal path based on the image information and the target information, wherein the second signal path is different from the first signal path, and outputting the first detection representation and the second detection representation.
An example apparatus according to the present disclosure includes at least one memory, at least one camera module, at least one radar module, and at least one processor communicatively coupled to the at least one memory, the at least one camera module, and the at least one radar module and configured to obtain image information from the at least one camera module disposed on a vehicle, obtain target information from the at least one radar module disposed on the vehicle, generate a first detection representation having a first signal path based on the image information and the target information, generate a second detection representation having a second signal path based on the image information and the target information, wherein the second signal path is different from the first signal path, and output the first detection representation and the second detection representation.
Items and/or techniques described herein may provide one or more of the following capabilities, as well as other capabilities not mentioned. Multiple sensors such as cameras, radar, and lidar may obtain target information for objects near an autonomous or semi-autonomous vehicle. Sensor inputs may be evaluated via different signal paths.
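The dual-path generation described above can be sketched minimally as follows, assuming a common backbone shared by both paths (as in claim 6) and using random placeholder weights to stand in for trained networks; all function names, shapes, and seeds are hypothetical:

```python
import numpy as np


def shared_backbone(camera_frame, radar_targets):
    """Common backbone (cf. claim 6): fuse image information and radar
    target information into one feature vector. Fixed-seed random
    projections stand in for trained layers in this sketch."""
    rng = np.random.default_rng(0)  # placeholder weights, not learned
    img_feat = rng.standard_normal((camera_frame.size, 16)).T @ camera_frame.ravel()
    tgt_feat = rng.standard_normal((radar_targets.size, 16)).T @ radar_targets.ravel()
    return np.concatenate([img_feat, tgt_feat])  # fused features, shape (32,)


def parametric_head(features):
    """First signal path: a parametric representation, here a single
    (x, y, length, width) tuple (cf. claims 2-3)."""
    rng = np.random.default_rng(1)
    return features @ rng.standard_normal((features.shape[0], 4))


def occupancy_head(features, grid=32):
    """Second signal path: a non-parametric representation, here a
    binary occupancy map (cf. claim 4)."""
    rng = np.random.default_rng(2)
    logits = rng.standard_normal((grid * grid, features.shape[0])) @ features
    return (logits.reshape(grid, grid) > 0).astype(np.uint8)


def detect(camera_frame, radar_targets):
    """Generate and output both detection representations (cf. claim 1):
    the same fused input is routed through two distinct signal paths."""
    features = shared_backbone(camera_frame, radar_targets)
    return parametric_head(features), occupancy_head(features)
```

Because both heads consume the same fused features but compute independently, a failure or degradation along one path leaves the other representation available, which is the redundancy the multiple-signal-path design is meant to provide.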
Machine learning models may be implemented along different signal paths. Parametric and non-parametric representations of object data may be generated. Fusion of signals from different sensors may improve the sensitivity of object detection. Multiple signal paths may improve the robustness of object detection and of corresponding environment models. The SOTIF standard may be implemented. Other capabilities may be provided, and not every implementation according to the present disclosure must provide any of the capabilities discussed, let alone all of them.
Drawings
FIG. 1 is a top view of an example ego vehicle. FIG. 2 is a block diagram of components of an example device, of which the ego vehicle shown in FIG. 1 may be an example. FIG. 3 is a block diagram illustrating components of a transmission/reception point. FIG. 4 is a block diagram of components of a server. FIG. 5 is a block diagram of an example device. FIG. 6 is a diagram of an example geographic environment. FIG. 7 is a diagram of the geographic environment shown in FIG. 6 divided i