US-12625257-B2 - Modifying behavior of autonomous vehicles based on sensor blind spots and limitations
Abstract
Models can be generated of a vehicle's view of its environment and used to maneuver the vehicle. This view need not include what objects or features the vehicle is actually seeing, but rather the areas that the vehicle would be able to observe using its sensors if the sensors were completely unoccluded. For example, for each of a plurality of sensors of the object detection component, a computer may generate an individual 3D model of that sensor's field of view. Weather information is received and used to adjust one or more of the models. After this adjusting, the models may be aggregated into a comprehensive 3D model. The comprehensive model may be combined with detailed map information indicating the probability of detecting objects at different locations. The model of the vehicle's environment may be computed based on the combined comprehensive 3D model and detailed map information.
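The pipeline in the abstract (per-sensor field-of-view models, a weather adjustment, then aggregation) can be sketched in miniature. This is an illustrative sketch only: the one-dimensional range/probability representation, the sensor figures, and the visibility-clipping rule are assumptions made for the example, not details taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SensorModel:
    """Simplified 1D stand-in for a sensor's 3D field-of-view model."""
    name: str
    max_range: float    # meters of unoccluded range
    detect_prob: float  # confidence of detecting an object within range

def adjust_for_weather(model: SensorModel, visibility_m: float) -> SensorModel:
    """Clip the sensor's effective range to the reported visibility."""
    return SensorModel(model.name,
                       min(model.max_range, visibility_m),
                       model.detect_prob)

def aggregate(models, distance_m: float) -> float:
    """Probability that at least one sensor detects an object at a distance."""
    p_miss = 1.0
    for m in models:
        if distance_m <= m.max_range:
            p_miss *= 1.0 - m.detect_prob
    return 1.0 - p_miss

sensors = [SensorModel("laser", 150.0, 0.9),
           SensorModel("radar", 200.0, 0.7),
           SensorModel("camera", 100.0, 0.8)]

# Dense fog: visibility drops to 60 m, shrinking every field of view.
foggy = [adjust_for_weather(s, 60.0) for s in sensors]

print(aggregate(sensors, 120.0))  # laser and radar still cover 120 m (~0.97)
print(aggregate(foggy, 120.0))    # no sensor reaches 120 m in fog (0.0)
```

Aggregating by "probability that at least one sensor sees the object" is one plausible reading of combining per-sensor confidence data; the patent leaves the aggregation rule open.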
Inventors
- Dmitri A. Dolgov
- Christopher Paul Urmson
Assignees
- WAYMO LLC
Dates
- Publication Date
- 20260512
- Application Date
- 20230703
Claims (20)
- 1. A method comprising: receiving, by a computing system in a vehicle, information concerning a weather condition in an environment of the vehicle, wherein the computing system is configured to operate the vehicle based on data from at least one sensor coupled to the vehicle, wherein the at least one sensor is configured to detect objects in the environment of the vehicle; determining, by the computing system in the vehicle, an impact of the weather condition on the at least one sensor, wherein determining the impact of the weather condition on the at least one sensor comprises: determining a model for the at least one sensor, wherein the model relates to an ability of the at least one sensor to observe the environment of the vehicle; and adjusting one or more characteristics of the model based on the weather condition; and modifying, by the computing system in the vehicle, a driving behavior of the vehicle based on the determined impact of the weather condition on the at least one sensor.
- 2. The method of claim 1, wherein the weather condition is an actual weather condition.
- 3. The method of claim 1, wherein the weather condition is an expected weather condition.
- 4. The method of claim 1, wherein the weather condition comprises at least one of a fog density, a rain intensity, a ground wetness, a sun intensity, or a sun direction.
- 5. The method of claim 1, wherein the at least one sensor comprises at least one of a camera, a radar unit, or a laser.
- 6. The method of claim 1, wherein modifying, by the computing system in the vehicle, the driving behavior of the vehicle based on the determined impact of the weather condition on the at least one sensor comprises: causing, by the computing system in the vehicle, the vehicle to slow down.
- 7. The method of claim 1, wherein modifying, by the computing system in the vehicle, the driving behavior of the vehicle based on the determined impact of the weather condition on the at least one sensor comprises: causing, by the computing system in the vehicle, the vehicle to move to a position that improves a view of the at least one sensor.
- 8. The method of claim 1, wherein modifying, by the computing system in the vehicle, the driving behavior of the vehicle based on the determined impact of the weather condition on the at least one sensor comprises: operating, by the computing system in the vehicle, the vehicle to avoid a certain type of maneuver.
- 9. The method of claim 1, wherein the model includes dimensions of a three-dimensional field of view of the at least one sensor.
- 10. The method of claim 9, wherein the model includes information indicating a confidence of detecting objects within the three-dimensional field of view of the at least one sensor.
- 11. A system comprising: at least one sensor coupled to a vehicle, wherein the at least one sensor is configured to detect objects in an environment of the vehicle; at least one processor; at least one memory; instructions stored in the at least one memory and executable by the at least one processor to perform operations comprising: receiving information concerning a weather condition in the environment of the vehicle; determining an impact of the weather condition on the at least one sensor, wherein determining the impact of the weather condition on the at least one sensor comprises: determining a model for the at least one sensor, wherein the model relates to an ability of the at least one sensor to observe the environment of the vehicle; and adjusting one or more characteristics of the model based on the weather condition; and modifying a driving behavior of the vehicle based on the determined impact of the weather condition on the at least one sensor.
- 12. The system of claim 11, wherein the weather condition is an actual weather condition.
- 13. The system of claim 11, wherein the weather condition is an expected weather condition.
- 14. The system of claim 11, wherein the weather condition comprises at least one of a fog density, a rain intensity, a ground wetness, a sun intensity, or a sun direction.
- 15. The system of claim 11, wherein the at least one sensor comprises at least one of a camera, a radar unit, or a laser.
- 16. The system of claim 11, wherein modifying the driving behavior of the vehicle based on the determined impact of the weather condition on the at least one sensor comprises causing the vehicle to slow down.
- 17. The system of claim 11, wherein modifying the driving behavior of the vehicle based on the determined impact of the weather condition on the at least one sensor comprises causing the vehicle to move to a position that improves a view of the at least one sensor.
- 18. The system of claim 11, wherein modifying the driving behavior of the vehicle based on the determined impact of the weather condition on the at least one sensor comprises operating the vehicle to avoid a certain type of maneuver.
- 19. The system of claim 11, wherein the model includes dimensions of a three-dimensional field of view of the at least one sensor.
- 20. The system of claim 19, wherein the model includes information indicating a confidence of detecting objects within the three-dimensional field of view of the at least one sensor.
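The control loop recited in claims 1 and 11 (receive a weather condition, adjust a characteristic of the sensor model, modify driving behavior) might be sketched as below. The rain-scaling rule, the 3 m/s² deceleration figure, and the stopping-distance speed policy are invented for illustration; the claims do not specify them.

```python
def adjust_model(effective_range_m: float, rain_intensity: float) -> float:
    """Reduce the sensor's effective range as rain intensity (0..1) rises.

    The linear 50%-at-full-intensity penalty is an assumed characteristic
    adjustment, standing in for 'adjusting one or more characteristics of
    the model based on the weather condition'.
    """
    return effective_range_m * (1.0 - 0.5 * rain_intensity)

def choose_speed(effective_range_m: float, base_speed_mps: float) -> float:
    """Slow the vehicle so it can always stop within the observed range.

    Assumes a constant comfortable deceleration of a = 3 m/s^2 (an
    assumption, not a figure from the patent): the stopping distance
    v^2 / (2a) must fit inside the sensor's effective range.
    """
    a = 3.0
    v_max = (2.0 * a * effective_range_m) ** 0.5
    return min(base_speed_mps, v_max)

clear_range = 150.0
rainy_range = adjust_model(clear_range, rain_intensity=0.8)  # ~90 m

print(choose_speed(clear_range, 30.0))  # clear weather: full 30.0 m/s
print(choose_speed(rainy_range, 30.0))  # heavy rain: ~23.2 m/s, slowed down
```

The same structure accommodates the other claimed behavior changes (repositioning for a better view, avoiding certain maneuvers): each is a different policy consuming the same adjusted model.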
Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 17/512,770, filed Oct. 28, 2021, which is a continuation of U.S. application Ser. No. 16/773,606, filed Jan. 27, 2020, which is a continuation of U.S. application Ser. No. 15/718,794, filed Sep. 28, 2017, which is a continuation of U.S. application Ser. No. 15/137,120, filed Apr. 25, 2016, which is a continuation of U.S. application Ser. No. 13/749,793, filed Jan. 25, 2013. The foregoing applications are incorporated herein by reference.

BACKGROUND

Autonomous vehicles use various computing systems to aid in the transport of passengers from one location to another. Some autonomous vehicles may require some initial or continuous input from an operator, such as a pilot, driver, or passenger. Other systems, such as autopilot systems, may be used only once engaged, permitting the operator to switch from a manual mode (where the operator exercises a high degree of control over the movement of the vehicle) to an autonomous mode (where the vehicle essentially drives itself), or to modes that lie somewhere in between.

Such vehicles are equipped with various types of sensors in order to detect objects in their surroundings. For example, autonomous vehicles may include lasers, sonar, radar, cameras, and other devices that scan and record data from the vehicle's surroundings. These devices in combination (and in some cases alone) may be used to build 3D models of the objects detected in the vehicle's surroundings. In addition to modeling and detecting objects in the vehicle's surroundings, autonomous vehicles must reason about the parts of the world that their sensors cannot see (e.g., due to occlusions) in order to drive safely. Failing to account for the limitations of these sensors can lead to dangerous maneuvers such as passing around blind corners, moving into spaces that are partially occluded by other objects, and the like.
SUMMARY

One aspect of the disclosure provides a method. The method includes generating, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view; receiving weather information including one or more of reports, radar information, forecasts and real-time measurements concerning actual or expected weather conditions in the vehicle's environment; adjusting one or more characteristics of the plurality of 3D models based on the received weather information to account for an impact of the actual or expected weather conditions on one or more of the plurality of sensors; after the adjusting, aggregating, by a processor, the plurality of 3D models to generate a comprehensive 3D model; combining the comprehensive 3D model with detailed map information; and using the combined comprehensive 3D model with detailed map information to maneuver the vehicle.

In one example, the 3D model of each given sensor's field of view is based on a pre-determined model of the given sensor's unobstructed field of view. In another example, the 3D model for each given sensor's field of view is based on the given sensor's location and orientation relative to the vehicle. In another example, the weather information is received from a remote computer via a network. In another example, the weather information is received from one of the plurality of sensors. In another example, at least one model of the plurality of 3D models includes probability data indicating a probability of detecting an object at a given location of the at least one model, and this probability data is used when aggregating the plurality of 3D models to generate the comprehensive 3D model.
In another example, the detailed map information includes probability data indicating a probability of detecting an object at a given location of the map, and this probability data is used when combining the comprehensive 3D model with detailed map information. In another example, combining the comprehensive 3D model with detailed map information results in a model of the vehicle's environment annotated with information describing whether various portions of the environment are occupied, unoccupied, or unobserved.

Another aspect of the disclosure provides a system. The system includes a processor configured to generate, for each given sensor of a plurality of sensors for detecting objects in a vehicle's environment, a 3D model of the given sensor's field of view; receive weather information including one or more of reports, radar information, forecasts and real-time measurements concerning actual or expected weather conditions in the vehicle's environment; adjust one or more characteristics of the plurality of 3D models based on the received weather information to account for an impact of the actual or expected weather conditions on one or more of the plurality of sensors; and, after the adjusting, aggregate the plurality of 3D models to generate a comprehensive 3D model.
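The combining step described in the summary (comprehensive 3D model plus map probability data, yielding occupied/unoccupied/unobserved annotations) could look roughly like the following, evaluated per map cell. The cell representation, the 0.5 confidence threshold, and the `annotate` helper are all hypothetical; the patent does not specify a data format or threshold.

```python
from typing import Optional

def annotate(cell_detect_prob: float, map_prior: float,
             reading: Optional[bool]) -> str:
    """Label one map cell as occupied, unoccupied, or unobserved.

    cell_detect_prob: probability, from the aggregated (weather-adjusted)
        sensor model, that an object at this cell would be detected.
    map_prior: probability from the detailed map information of detecting
        objects at this location.
    reading: the sensor result for the cell, or None when no data returned.
    """
    confidence = cell_detect_prob * map_prior
    if reading is None or confidence < 0.5:
        # Either nothing came back, or the combined model says the
        # sensors were too unlikely to have seen this cell at all.
        return "unobserved"
    return "occupied" if reading else "unoccupied"

print(annotate(0.9, 0.9, True))   # well observed, object detected
print(annotate(0.9, 0.9, False))  # well observed, confirmed empty
print(annotate(0.3, 0.9, None))   # weather-degraded view of the cell
```

Distinguishing "confirmed empty" from "never observed" is the point of the annotation: a planner can treat unobserved regions conservatively (e.g., slow down or reposition) rather than assuming they are clear.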