US-12617435-B2 - Method for behavior planning of an ego vehicle as part of a traffic scene

US 12617435 B2

Abstract

A computer-implemented method is provided for planning the behavior of an ego vehicle as part of a traffic scene. The ego vehicle is equipped with an in-vehicle sensor system whose visual range is influenced by a respective current traffic scene. The method includes generating a scene representation of the current traffic scene using scene-specific sensor data captured by the in-vehicle sensor system, and predicting a future development of the traffic scene based on the generated scene representation. The method further includes planning driving maneuvers taking into account the prediction of the future development of the traffic scene. The method predicts the effects of the future development of the traffic scene on the visual range of the in-vehicle sensor system.

Inventors

  • Juergen Mathes
  • Markus Mazzola
  • Martin Stoll
  • Maxim Dolgov

Assignees

  • ROBERT BOSCH GMBH

Dates

Publication Date
2026-05-05
Application Date
2024-02-14
Priority Date
2023-02-22

Claims (9)

  1. A computer-implemented method for planning a behavior of an ego vehicle as part of a traffic scene, the ego vehicle equipped with an in-vehicle sensor system whose visual range is influenced by a respective current traffic scene, the method comprising: generating a scene representation of a current traffic scene using scene-specific sensor data captured by the in-vehicle sensor system, the generation of the scene representation based on attention information that weights individual sub-areas of a plurality of sub-areas of the current traffic scene, the attention information used to focus computing operations on relevant sub-areas of the plurality of sub-areas of the current traffic scene; predicting a future development of the current traffic scene based on the generated scene representation including predicting effects of the future development of the current traffic scene on the visual range of the in-vehicle sensor system; planning driving maneuvers of the ego vehicle based on the prediction of the future development of the traffic scene; and moving the ego vehicle through the traffic scene according to the planned driving maneuvers, wherein the attention information is compared with information about a current visual range and/or a future visual range of the in-vehicle sensor system, and wherein the driving maneuvers are planned further based on deviations between (i) the relevant sub-areas of the current traffic scene according to the attention information, and (ii) the current visual range and/or the future visual range of the in-vehicle sensor system.
  2. The method according to claim 1, further comprising: determining a probability distribution of a location, orientation, and/or dimensioning within the current traffic scene as part of the generated scene representation using the captured scene-specific sensor data for at least one road user and/or at least one object of the current traffic scene.
  3. The method according to claim 1, further comprising: generating at least one occupancy model as a part of the generated scene representation using the captured sensor data, the occupancy model representing a mapping of the current traffic scene to a contiguous arrangement of cells and indicating for each of the cells a measure of whether a respective cell is occupied by a road user and/or an object of the current traffic scene, or whether occupancy of the respective cell is unknown.
  4. The method according to claim 3, further comprising: generating at least one visual range model as a part of the generated scene representation using the captured sensor data, the at least one visual range model representing a mapping of the current traffic scene to a contiguous arrangement of areas and indicating for each of the areas a probability of whether a respective area is seen by the in-vehicle sensor system.
  5. The method according to claim 4, wherein the generation of the at least one occupancy model is performed recurrently and the recurrent generation of the at least one occupancy model is based on the at least one visual range model of at least one preceding time point.
  6. The method according to claim 4, wherein the generation of the at least one visual range model is recurrent and the recurrent generation of the at least one visual range model is based on the at least one occupancy model of at least one preceding instant.
  7. The method according to claim 1, wherein, when deviations occur between planning-relevant sub-areas of the traffic scene and the current visual range and/or the future visual range of the in-vehicle sensor system, at least one safety mechanism is activated and/or at least one driving maneuver is planned such that the visual range of the in-vehicle sensor system is aligned with at least one planning-relevant sub-area.
  8. The method according to claim 1, wherein the driving maneuvers are planned, such that in response to the deviations, the ego vehicle is moved through the traffic scene in a corresponding direction to increase the current visual range and/or the future visual range of the in-vehicle sensor system with respect to a particular sub-area of the current traffic scene, the particular sub-area having been identified as relevant for planning the driving maneuvers based on the attention information.
  9. A computer-implemented system for behavior planning of an ego vehicle as part of a traffic scene, the ego vehicle equipped with an in-vehicle sensor system whose visual range is influenced by a respective current traffic scene, the system comprising: a controller configured (i) to generate a scene representation of the current traffic scene using scene-specific sensor data acquired by an in-vehicle sensor system, the generation of the scene representation based on attention information that weights individual sub-areas of a plurality of sub-areas of the current traffic scene, the attention information used to focus computing operations on relevant sub-areas of the plurality of sub-areas of the current traffic scene, (ii) to predict a future development of the traffic scene based on the scene representation, (iii) to plan driving maneuvers based on the prediction of the future development of the traffic scene, and (iv) to move the ego vehicle through the traffic scene according to the planned driving maneuvers, wherein the attention information is compared with information about a current visual range and/or a future visual range of the in-vehicle sensor system, and wherein the controller is further configured to plan the driving maneuvers further based on deviations between (i) relevant sub-areas of the current traffic scene according to the attention information, and (ii) the current visual range and/or the future visual range of the in-vehicle sensor system.
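
The deviation check that runs through claims 1, 7 and 8 can be illustrated with a small numeric sketch. All names, weights and the visibility threshold below are illustrative assumptions; the claims do not prescribe any particular discretization, values or API:

```python
import numpy as np

# Hypothetical sketch: the scene is discretized into sub-areas, each carrying
# an attention weight (how planning-relevant it is) and a visibility value
# (how well the in-vehicle sensor system can currently see it).

def planning_deviation(attention, visibility, vis_threshold=0.5):
    """Return per-sub-area deviation: attention mass that falls on
    sub-areas the sensor system cannot sufficiently see."""
    unseen = visibility < vis_threshold   # boolean mask of blind sub-areas
    return attention * unseen             # attention "stranded" in blind spots

attention = np.array([0.05, 0.60, 0.10, 0.25])   # weights over 4 sub-areas
visibility = np.array([0.90, 0.20, 0.80, 0.70])  # current visual range per sub-area

dev = planning_deviation(attention, visibility)
# Sub-area 1 is highly relevant (0.60) but poorly visible (0.20): a nonzero
# deviation there would trigger a safety mechanism or a maneuver that moves
# the sensors' visual range toward that sub-area (cf. claims 7 and 8).
```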

Description

This application claims priority under 35 U.S.C. § 119 to patent application no. DE 10 2023 201 580.3, filed on Feb. 22, 2023 in Germany, the disclosure of which is incorporated herein by reference in its entirety.

The disclosure relates to a computer-implemented method for behavior planning of an ego vehicle as part of a traffic scene. Furthermore, the disclosure relates to a computer-implemented system for this purpose.

BACKGROUND

Deep learning (DL)-based prediction and planning approaches often use occupancy maps as intermediate representations within a model to compute planning trajectories or predicted trajectories. However, such approaches ignore the fact that the areas that can be viewed by the sensors of the automated driving (AD) vehicle are limited. This manifests itself in an increasing uncertainty of the predicted occupancy over time. It remains unresolved, however, whether the uncertainty increases because the expected movement of a vehicle is uncertain, or because the area in front of a vehicle is not visible and therefore uncertain.

SUMMARY

Features and details that are described in connection with the method according to the disclosure naturally also apply in connection with the system according to the disclosure, and vice versa, so that reference can always be made mutually between the individual aspects of the disclosure. The object of the disclosure is, in particular, a computer-implemented method for planning the behavior of an ego vehicle as part of a traffic scene. The ego vehicle can be equipped with an in-vehicle sensor system whose visual range is influenced by the current traffic scene.
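
The distinction drawn in the background, between uncertainty caused by uncertain object motion and uncertainty caused by a limited visual range, can be kept explicit by giving each occupancy cell three belief masses. This is a minimal sketch under an assumed representation; the patent does not prescribe it:

```python
# Each cell carries (p_occupied, p_free, p_unknown). Motion uncertainty
# redistributes mass between "occupied" and "free"; a limited visual range
# moves mass into "unknown" instead. The function names and rates are
# illustrative assumptions.

def occlude(cell, rate):
    """Leak belief of an unobserved cell into its explicit 'unknown' mass."""
    p_occ, p_free, p_unk = cell
    leak = (p_occ + p_free) * rate
    return (p_occ * (1 - rate), p_free * (1 - rate), p_unk + leak)

def motion_blur(cell, rate):
    """Model uncertain object motion: mix the 'occupied' and 'free' masses."""
    p_occ, p_free, p_unk = cell
    mixed = (p_occ + p_free) / 2
    return (p_occ + (mixed - p_occ) * rate,
            p_free + (mixed - p_free) * rate,
            p_unk)

cell = (0.0, 1.0, 0.0)                 # a confidently free cell
occluded = occlude(cell, rate=0.5)     # uncertain because the cell is unseen
blurred = motion_blur(cell, rate=0.5)  # uncertain because of object motion
```

With this bookkeeping, a downstream planner can tell whether predicted occupancy became uncertain because a vehicle might move there or because the sensors simply cannot see there.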
The method may comprise at least the following method steps, which are preferably carried out successively and/or repeatedly:

  • generating a scene representation of the current traffic scene using scene-specific sensor data, which has been recorded in particular with the help of the in-vehicle sensor system,
  • predicting a future development of the traffic scene on the basis of the scene representation, hereinafter also referred to as prediction, and
  • planning driving maneuvers taking into account the prediction of the future development of the traffic scene, hereinafter also referred to for short as planning.

According to the disclosure, it is envisaged that the effects of the future development of the traffic scene on the visual range of the in-vehicle sensor system are predicted as part of the prediction. This makes it possible to plan the driving maneuvers based on the prediction while also taking into account a change in the future visual range of the in-vehicle sensor system. In this way, the effect of the future development of the traffic scene on the visual range can act as an additional source of information for the prediction, which significantly improves the quality of the prediction.

Advantageously, other scene-specific information is also aggregated and used to generate the scene representation, e.g., map information, GPS data, weather data and the like. This makes it possible to reliably plan driving maneuvers for the current traffic scene based on the scene representation. It is also conceivable that the prediction is made using a machine learning model. The machine learning model may comprise at least or exactly one artificial neural network trained for this purpose, preferably a convolutional neural network (CNN).
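
The method steps above can be sketched as a loop skeleton. All class and function names, the placeholder dynamics, and the visibility threshold are assumptions for illustration; the disclosure does not prescribe an implementation:

```python
from dataclasses import dataclass

@dataclass
class SceneRepresentation:
    occupancy: list   # e.g. cell-wise occupancy estimates
    visibility: list  # e.g. cell-wise visual-range estimates

def generate_scene_representation(sensor_data):
    # Step 1: fuse scene-specific sensor data (camera, lidar, radar, ...).
    return SceneRepresentation(occupancy=sensor_data["occupancy"],
                               visibility=sensor_data["visibility"])

def predict(scene):
    # Step 2: predict the scene's future development, *including* its
    # effect on the future visual range of the sensor system.
    future_occupancy = scene.occupancy                       # placeholder dynamics
    future_visibility = [v * 0.9 for v in scene.visibility]  # e.g. an occluder approaches
    return future_occupancy, future_visibility

def plan(scene, future_occupancy, future_visibility):
    # Step 3: plan maneuvers using both the predicted occupancy and the
    # predicted visual range (the additional information source named above).
    if min(future_visibility) < 0.3:
        return "reposition_for_view"
    return "keep_lane"

sensor_data = {"occupancy": [0.1, 0.0, 0.8], "visibility": [0.9, 0.4, 0.2]}
scene = generate_scene_representation(sensor_data)
maneuver = plan(scene, *predict(scene))
```

In an actual system these three steps would run repeatedly, with the predicted visual range of one cycle informing the maneuver chosen in the next.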
The scene-specific sensor data can be recorded, for example, by at least one sensor of the in-vehicle sensor system detecting the surroundings of the ego vehicle, such as a camera and/or a LIDAR sensor and/or a radar sensor and/or an ultrasonic sensor. It is also conceivable that information about the current and/or future visual range of the in-vehicle sensor system is taken into account during planning. It is possible that the visual range of the in-vehicle sensor system is temporarily reduced because an area of the surroundings is obscured by at least one object in the surroundings and/or by other influencing factors. Other objects may then not be detectable immediately due to the reduced visual range. This is where the method according to the disclosure creates an improvement, as such uncertainty resulting from a changing visual range can be taken into account when planning driving maneuvers.

In addition, it may be provided that a probability distribution of the location, orientation and/or dimensioning within the traffic scene is determined as part of the scene representation using the recorded sensor data for at least one road user and/or at least one object in the traffic scene. This makes it possible to base the planning of driving maneuvers on a wide range of information. It is also conceivable that at least one occupancy model is generated as part of the scene representation using the recorded sensor data, wherein the occupancy model represents a mapping of the traffic scene to a contiguous arrangement