EP-4736134-A1 - SYSTEM AND METHODS FOR PROVIDING DRIVER ASSISTANCE ALERTS USING AN END-TO-END ARTIFICIALLY INTELLIGENT COLLISION AVOIDANCE SYSTEM AND ADVANCED DRIVER ASSISTANCE SYSTEMS
Abstract
The technology disclosed teaches a system and methods for providing driver assistance alerts to a driver using an end-to-end artificially intelligent advanced driver assistance system. The technology disclosed further includes: receiving environmental data for a sequence of driving states, including at least video from a camera, returns from an optical sensor, and location data from a GNSS receiver, wherein the camera, the optical sensor, and the GNSS receiver are coupled to a processor carried by a vehicle; processing the environmental data as input to an end-to-end neural network, wherein the end-to-end neural network is trained to generate prescriptive steering and speed control actions in response to a present driving state; analyzing hidden layer data and output data from the end-to-end neural network to estimate collision avoidance data; and presenting, to the driver, a user interface including driver assistance alerts based on the collision avoidance data.
Inventors
- KENTLEY-KLAY, TIM
- DUVAUD, Werner
- HAINAUT, Aurèle
- DELOCHE, Maxime
- CARRÉ, Ludovic
Assignees
- HYPRLABS, INC.
Dates
- Publication Date: 2026-05-06
- Application Date: 2024-06-29
Claims (15)
- 1. A computer-implemented method of providing driver assistance alerts to a driver, the method including: receiving environmental data for a sequence of driving states including at least video from a camera, returns from an optical sensor, and location data from a GNSS receiver, wherein the camera, the optical sensor, and the GNSS receiver are coupled to a processor carried by a vehicle; processing the environmental data as input to an end-to-end neural network, wherein the end-to-end neural network is trained to generate prescriptive steering and speed control actions in response to a present driving state; analyzing hidden layer data and output data from the end-to-end neural network to estimate collision avoidance data, wherein the collision avoidance data includes, at least: one or more detected objects within the video from the camera, a directional cue, wherein the directional cue is a projection overlay based on the prescriptive steering control actions onto a heads up display, and a risk metric that quantifies a dissimilarity between the generated prescriptive steering and speed control actions and received driver steering and speed control actions; and presenting, to the driver, a user interface including driver assistance alerts based on the collision avoidance data.
- 2. The computer-implemented method of claim 1, wherein the directional cue projected onto the heads up display within the user interface is a dynamic whisker arrow indicating a prescriptive vehicle orientation relative to a current vehicle orientation based on the generated prescriptive steering control actions.
- 3. The computer-implemented method of claim 1 or claim 2, wherein obtaining the risk metric further includes: calculating a cross entropy between the generated prescriptive steering and speed control actions and current driver steering and speed control actions, and standardizing the cross entropy calculation to generate a risk metric output, wherein the risk metric output is a proxy for an imminent collision risk and increases proportionally as the current driver steering and speed control actions deviate further from the generated prescriptive steering and speed control actions (see the first sketch following the claims).
- 4. The computer-implemented method of any of claims 1-3, further including, in response to the risk metric output: maintaining a manual driving mode while the risk metric output is less than a pre-determined threshold value, wherein the manual driving mode includes permitting the vehicle to apply the current driver steering and speed control input actions, and engaging in an autonomous driving mode when the risk metric output is equal to or greater than the pre-determined threshold value, wherein the autonomous driving mode includes causing the vehicle to apply the prescriptive steering and speed control input actions.
- 5. The computer-implemented method of any of claims 1-4, further including categorizing the risk metric into risk levels by defining a particular risk level as including risk metric outputs within a pre-determined range between a lower boundary value and an upper boundary value.
- 6. The computer-implemented method of any of claims 1-5, wherein the user interface presents, to the driver, a quantitative risk including the risk metric output or a categorical risk including the risk level based on the risk metric output.
- 7. The computer-implemented method of any of claims 1-6, wherein the driver assistance alerts include one or more of a visual display, an audio signal, or a haptic signal.
- 8. The computer-implemented method of any of claims 1-7, wherein the user interface further presents the generated prescriptive steering and speed control actions to the driver.
- 9. The computer-implemented method of any of claims 1-8, wherein the environmental data is pre-processed prior to being provided to the end-to-end neural network, the pre-processing further including: tokenizing the environmental data for the present driving state to generate environmental data tokens, mapping the environmental data tokens to a reduced dimensional vector space to produce environmental data embeddings, and adding the environmental data embeddings to positional embeddings to generate input embeddings for the end-to-end neural network, wherein the positional embeddings preserve spatial information for the environmental data tokens (see the second sketch following the claims).
- 10. The computer-implemented method of any of claims 1-9, wherein the end-to-end neural network is a transformer model trained for end-to-end autonomous driving, and the transformer model processing the environmental data further includes: processing the generated input embeddings combined with compressed embeddings from nine or more earlier driving states over at least three seconds, and generating, as output, a compressed embedding for the present driving state and prescriptive steering and speed control actions in response to the present driving state (see the third sketch following the claims).
- 11. The computer-implemented method of any of claims 1-10, further including extracting a set of attention weights from the transformer model, and generating, using the positional embeddings, an attention map including a projection of the extracted attention weights, wherein: a magnitude of a particular attention weight increases proportionally to an importance of the particular attention weight in generating the prescriptive steering and speed control actions, and an object is implicitly detected within a region of an area of real space surrounding the vehicle based on (i) a comparison of an average attention weight value within the region and another average attention weight value within one or more adjacent regions and (ii) the positional embeddings (see the fourth sketch following the claims).
- 12. The computer-implemented method of any of claims 1-11, wherein presenting, via the user interface, the one or more detected objects within the heads up display further includes color-coding attention weights within the attention map enabling visual identification of implicitly detected objects and projecting an overlay of the color-coded attention map onto a heads up display.
- 13. The computer-implemented method of any of claims 1-12, further including storing a history of the video from the camera and the driver assistance alerts presented to the driver within a driving database, wherein the driving database is available for additional data analysis and data auditing after a driving activity is completed.
- 14. A computer-implemented method of training a neural network to generate driver assistance alert data, the method including: receiving environmental data for a sequence of driving states resulting from human driving, including at least video from a camera, returns from an optical sensor, and location data from a GNSS receiver, wherein the camera, the optical sensor, and the GNSS receiver are coupled to a processor carried by a vehicle; processing the environmental data as input to imitation training of an end-to-end neural network, including training the end-to-end neural network to generate prescriptive steering and speed control actions in response to a present driving state; wherein the training includes analyzing hidden layer data and output data from the end-to-end neural network to estimate collision avoidance data, wherein the collision avoidance data includes, at least: a directional cue, whereby the directional cue can be projected as a prescriptive steering control action onto a heads up display, and a speed control cue, whereby the speed control cue can be projected as a prescriptive speed control action onto a heads up display; whereby attention weights of the end-to-end neural network in the hidden layer data indicate areas of the video from the camera that contribute most significantly to the generated prescriptive steering and speed control actions.
- 15. The method of any of claims 1-14, further including configuring a system including the end-to-end neural network and further including a risk metric generator, including: training parameters of the risk metric generator to generate a normalized risk metric that quantifies a dissimilarity between the generated prescriptive steering and speed control actions and received driver steering and speed control actions that vary from the generated prescriptive steering and speed control actions, whereby the normalized risk metric can be projected onto a heads up display.
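
Illustrative sketches (not part of the claims). The first minimal Python sketch illustrates the risk metric of claims 3-5 under stated assumptions: the prescriptive and driver actions are represented here as probability distributions over discretized steering/speed bins, and the function names, the exponential standardization, and the threshold and boundary values are hypothetical choices rather than details fixed by the specification.

```python
import numpy as np

def cross_entropy(p: np.ndarray, q: np.ndarray, eps: float = 1e-9) -> float:
    """Cross entropy H(p, q) between two discrete action distributions."""
    q = np.clip(q, eps, 1.0)
    return float(-np.sum(p * np.log(q)))

def risk_metric(prescriptive: np.ndarray, driver: np.ndarray) -> float:
    """Claim 3: standardize the cross entropy into [0, 1) as a proxy for
    imminent collision risk. Subtracting the entropy of the prescriptive
    distribution yields the KL divergence, which is zero when the driver
    matches the model and grows as the driver deviates."""
    divergence = (cross_entropy(prescriptive, driver)
                  - cross_entropy(prescriptive, prescriptive))
    return float(1.0 - np.exp(-divergence))

def select_mode(risk: float, threshold: float = 0.7) -> str:
    """Claim 4: manual below the threshold, autonomous at or above it."""
    return "manual" if risk < threshold else "autonomous"

def risk_level(risk: float, boundaries=(0.25, 0.5, 0.75)) -> str:
    """Claim 5: bucket the continuous metric into categorical risk levels."""
    labels = ("low", "moderate", "elevated", "critical")
    return labels[int(np.digitize(risk, boundaries))]

# Example: the driver steers away from the prescriptive distribution.
prescriptive = np.array([0.05, 0.85, 0.10])   # e.g. [left, straight, right]
driver = np.array([0.70, 0.20, 0.10])
r = risk_metric(prescriptive, driver)
print(f"risk={r:.2f}", select_mode(r), risk_level(r))
```

Standardizing against the entropy of the prescriptive distribution makes the metric zero when the driver matches the model exactly, consistent with the claim that risk increases with deviation.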
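The second sketch walks through the pre-processing pipeline of claim 9 (tokenize, embed, add positional embeddings). Patch-averaging as a tokenizer, the vocabulary and model sizes, and the sinusoidal positional encoding are all assumptions made for illustration; the specification does not fix them.

```python
import numpy as np

rng = np.random.default_rng(0)
PATCH, D_MODEL, VOCAB = 16, 256, 1024          # illustrative sizes only

def tokenize_frame(frame: np.ndarray) -> np.ndarray:
    """Claim 9, step 1: tokenize the environmental data. Each 16x16 image
    patch is quantized to a discrete token id; a stand-in for whatever
    tokenizer the deployed system actually uses."""
    hp, wp = frame.shape[0] // PATCH, frame.shape[1] // PATCH
    patches = frame[: hp * PATCH, : wp * PATCH].reshape(
        hp, PATCH, wp, PATCH, -1).mean(axis=(1, 3, 4))
    return (patches * (VOCAB - 1)).astype(int).ravel()

# Claim 9, step 2: map token ids to a reduced-dimensional vector space.
# In a trained system this table is learned; random values stand in here.
token_embedding = rng.normal(0.0, 0.02, (VOCAB, D_MODEL))

def positional_embeddings(n: int) -> np.ndarray:
    """Claim 9, step 3: fixed sinusoidal embeddings that preserve where in
    the frame each token came from, as the claim requires."""
    pos = np.arange(n)[:, None]
    i = np.arange(D_MODEL)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / D_MODEL)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

frame = rng.random((224, 224, 3))                       # dummy camera frame
tokens = tokenize_frame(frame)                          # (196,) token ids
embeds = token_embedding[tokens]                        # (196, D_MODEL)
inputs = embeds + positional_embeddings(len(tokens))    # input embeddings
print(inputs.shape)
```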
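The third sketch shows only the data flow of claim 10: input embeddings for the present driving state combined with compressed embeddings from nine or more earlier states spanning at least three seconds. The mean-pooling stand-in below is not a real transformer, and the 3 Hz state rate is an assumption.

```python
from collections import deque

import numpy as np

rng = np.random.default_rng(1)
D_MODEL, N_HISTORY = 256, 9        # claim 10: nine or more earlier states

# Rolling buffer of compressed embeddings; at an assumed 3 states/second
# this covers the "at least three seconds" of history in claim 10.
history = deque(maxlen=N_HISTORY)

def transformer_step(input_embeddings, past):
    """One driving-state step of the model in claim 10. A real system runs
    a trained transformer here; this stub only shows the data flow: input
    embeddings plus compressed history in, a new compressed embedding plus
    steering/speed actions out."""
    context = (np.concatenate([np.stack(past), input_embeddings])
               if past else input_embeddings)
    pooled = context.mean(axis=0)      # mean pooling stands in for attention
    compressed = pooled                # (D_MODEL,) summary of present state
    actions = np.tanh(pooled[:2])      # stand-in action head: [steer, speed]
    return compressed, actions

for _ in range(12):                    # simulate a 4-second drive at 3 Hz
    inputs = rng.normal(size=(196, D_MODEL))   # from the preprocessing sketch
    compressed, actions = transformer_step(inputs, history)
    history.append(compressed)
print(actions)
```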
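The fourth sketch covers claims 11 and 12: extracting attention weights into a map, implicitly detecting objects by comparing a region's average attention against adjacent regions, and color-coding the map for projection onto the heads up display. The 14x14 token grid, the neighbor-ratio test, and the red-blue RGBA coding are hypothetical specifics, not taken from the claims.

```python
import numpy as np

rng = np.random.default_rng(2)
GRID = 14                      # token grid from the preprocessing sketch

# Claim 11: attention weights extracted from the transformer and projected
# back onto the token grid via the positional embeddings. Random values
# stand in for a real extraction.
attention_map = rng.random((GRID, GRID))

def implicitly_detected(att: np.ndarray, ratio: float = 1.5) -> np.ndarray:
    """Claim 11: flag a region as containing an object when its attention
    clearly exceeds the average attention of its adjacent regions."""
    detected = np.zeros_like(att, dtype=bool)
    rows, cols = att.shape
    for r in range(rows):
        for c in range(cols):
            window = att[max(r - 1, 0): r + 2, max(c - 1, 0): c + 2].copy()
            window[min(r, 1), min(c, 1)] = np.nan  # exclude the region itself
            detected[r, c] = att[r, c] > ratio * np.nanmean(window)
    return detected

def color_code(att: np.ndarray) -> np.ndarray:
    """Claim 12: map attention to an RGBA overlay (red = high attention)
    that a renderer can project onto the heads up display."""
    norm = (att - att.min()) / (att.max() - att.min() + 1e-9)
    overlay = np.zeros((*att.shape, 4))
    overlay[..., 0] = norm             # red channel tracks attention
    overlay[..., 2] = 1.0 - norm       # blue channel for low attention
    overlay[..., 3] = 0.5              # constant alpha for the HUD blend
    return overlay

objects = implicitly_detected(attention_map)
hud_overlay = color_code(attention_map)
print(objects.sum(), hud_overlay.shape)
```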
Description
SYSTEM AND METHODS FOR PROVIDING DRIVER ASSISTANCE ALERTS USING AN END-TO-END ARTIFICIALLY INTELLIGENT COLLISION AVOIDANCE SYSTEM AND ADVANCED DRIVER ASSISTANCE SYSTEMS

PRIORITY APPLICATION

[0001] This application claims priority to and the benefit of U.S. Patent Application No. 18/731,115, filed 31 May 2024, titled "System and Methods for Providing Driver Assistance Alerts Using an End-To-End Artificially Intelligent Collision Avoidance System and Advanced Driver Assistance Systems" (Atty. Docket No. HYPR 1002-1), which claims priority to U.S. Provisional Application 63/524,213, filed 29 June 2023, titled "Scalable Training and Validation for an End-To-End Autonomous Driving Model" (Atty. Docket No. HYPR 1001-1).

RELATED CASES

[0002] This application is related to contemporaneously filed U.S. CIP Application No. ______, filed ______, titled "System and Methods For Providing Driver Assistance Alerts Using an End-To-End Artificially Intelligent Collision Avoidance System and Advanced Driver Assistance Systems" (Atty. Docket No. HYPR 1002-3), which is incorporated by reference for all purposes.

[0003] This application is also related to the following commonly owned applications, all of which are incorporated by reference for all purposes:

[0004] U.S. Patent Application No. 18/431,827, filed 2 February 2024, titled "Multi-Functional Inventory Storage and Delivery System" (Atty. Docket No. HYPR 1000-2); and

[0005] U.S. Provisional Application 63/443,342, filed 3 February 2023, titled "Multi-Functional Inventory Storage and Delivery System" (Atty. Docket No. HYPR 1000-1).

FIELD OF THE TECHNOLOGY DISCLOSED

[0006] The technology disclosed relates to end-to-end neural networks configured for autonomous and semi-autonomous driving. In particular, the technology disclosed relates to a scalable method and apparatus for training and validating an end-to-end network configured for autonomous and semi-autonomous driving.

BACKGROUND

[0007] The subject matter discussed in this section should not be assumed to be prior art merely as a result of its mention in this section. Similarly, a problem mentioned in this section or associated with the subject matter provided as background should not be assumed to have been previously recognized in the prior art. The subject matter in this section merely represents different approaches, which in and of themselves can also correspond to implementations of the technology disclosed.

[0008] Autonomous driving technology, appealing for its benefits in driver satisfaction and safety, is already evident in semi-automated advanced driver assistance systems (ADAS) for tasks like lane changing, speed control, and parking. These advancements not only enhance driver convenience and comfort but also hold promise for public safety, infrastructure, and vehicle durability by reducing accidents. Additionally, autonomous driving technology extends to various robotic applications such as space probes, industrial robots, military drones, and delivery robots, addressing concerns in efficiency, cost, quality, and environmental impact. For example, the e-commerce industry can benefit from autonomous delivery robots that improve upon the efficiency, cost, quality, and environmental impact of traditional delivery methods.

[0009] Despite decades of research on autonomous vehicle development, fully autonomous vehicles are not yet available for individual use on the market. Waymo has made progress with its autonomous fleet, but so far only for taxi service. Although progress is substantial, safety and reliability are still lacking. Traditional autonomous driving systems, characterized by an aggregation of independent submodules, are challenging to optimize due to the enormous volume of data necessary to train these models. Furthermore, the manual labeling of data required by the artificial intelligence systems configured for traditional autonomous driving is expensive. Many data formats required by traditional autonomous driving systems, such as pre-built maps, are not only expensive to construct and label but also pose risks to safety and generalizability because of the limited capacity to react when the real-world environment does not match the map as expected.

[0010] The drawbacks associated with traditional methods have created an opportunity for the development of an end-to-end (E2E) learning approach for autonomous driving. E2E autonomous driving typically consists of a single, self-contained deep learning model that maps sensory input, such as image frames from a camera or maps generated by light detection and ranging (LiDAR), to steering wheel and accelerator actuation for vehicle control. E2E autonomous driving systems and methods can be configured to learn via approaches such as imitation learning, rather than depending on an aggregation of manually designed tasks. Successful training of an E2E autonomous driving approach using imitation learning must be capable o