US-20260125054-A1 - SENSOR VISIBILITY ESTIMATION


Abstract

In various examples, systems and methods are disclosed that use one or more machine learning models (MLMs), such as deep neural networks (DNNs), to compute outputs indicative of an estimated visibility distance corresponding to sensor data generated using one or more sensors of an autonomous or semi-autonomous machine. Once the visibility distance is computed using the one or more MLMs, the usability of the sensor data for one or more downstream tasks of the machine may be evaluated. As such, where an estimated visibility distance is low, the corresponding sensor data may be relied upon for fewer tasks than when the visibility distance is high.
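The task-gating idea in the abstract can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the task names and distance thresholds are assumptions chosen for demonstration.

```python
# Illustrative sketch (not from the patent): gate downstream tasks by an
# estimated visibility distance. Task names and thresholds are assumptions.

def usable_tasks(visibility_m: float) -> set:
    """Return the set of downstream tasks the sensor data may support."""
    tasks = set()
    if visibility_m >= 20.0:
        tasks |= {"automatic_emergency_braking"}
    if visibility_m >= 60.0:
        tasks |= {"lane_keeping", "object_detection"}
    if visibility_m >= 120.0:
        tasks |= {"path_planning", "object_tracking"}
    return tasks
```

A higher estimated visibility distance unlocks a larger set of tasks; a low estimate leaves the sensor data usable for few or no tasks, mirroring the "fewer tasks" behavior the abstract describes.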

Inventors

  • Abhishek Bajpayee
  • Arjun Gupta
  • George Tang
  • Hae-Jong Seo

Assignees

  • NVIDIA CORPORATION

Dates

Publication Date
2026-05-07
Application Date
2026-01-05

Claims (20)

  1. A method comprising: determining, using one or more machine learning models and based at least on sensor data obtained using one or more sensors of a machine, a visibility distance corresponding to the sensor data; identifying, based at least on the visibility distance, a usability of the sensor data to perform one or more planning, navigation, or control operations of the machine, the usability identified based at least on comparing the visibility distance to a plurality of different usability options individually associated with one or more predefined visibility distances; and causing the machine to perform the one or more planning, navigation, or control operations in accordance with the identified usability of the sensor data.
  2. The method of claim 1, wherein the one or more predefined visibility distances correspond to one or more maximum distance values from one or more predefined ranges of values.
  3. The method of claim 1, wherein the identifying the usability of the sensor data to perform the one or more planning, navigation, or control operations of the machine comprises: determining, based at least on the comparing the visibility distance to the plurality of different usability options, a usability option associated with a predefined visibility distance; and determining the usability of the sensor data to perform the one or more planning, navigation, or control operations based at least on the predefined visibility distance.
  4. The method of claim 1, wherein the identifying the usability of the sensor data to perform the one or more planning, navigation, or control operations of the machine comprises: determining, based at least on the comparing the visibility distance to the plurality of different usability options, a usability option associated with a predefined range that includes a predefined visibility distance; and determining the usability of the sensor data to perform the one or more planning, navigation, or control operations based at least on the predefined range.
  5. The method of claim 1, further comprising: determining, based at least on the sensor data, an output that is associated with a distance value within an environment, wherein the identifying the usability of the sensor data to perform the one or more planning, navigation, or control operations is further based at least on the distance value associated with the output.
  6. The method of claim 1, further comprising: determining, based at least on the sensor data, an output that is associated with a distance value within an environment, wherein the identifying the usability of the sensor data to perform the one or more planning, navigation, or control operations of the machine comprises: determining, based at least on the comparing the visibility distance to the plurality of different usability options, a usability option associated with a predefined visibility distance; and determining the usability of the sensor data to perform the one or more planning, navigation, or control operations based at least on the predefined visibility distance and the distance value.
  7. The method of claim 6, wherein the determining the usability of the sensor data to perform the one or more planning, navigation, or control operations comprises determining that the sensor data is usable to perform the one or more planning, navigation, or control operations based at least on the distance value being less than the predefined visibility distance.
  8. A system comprising: one or more processors to: determine, using one or more machine learning models and based at least on sensor data obtained using a sensor of a machine, a predefined visibility distance that is associated with the sensor data; determine, based at least on the predefined visibility distance, a usability of the sensor data to perform one or more operations of the machine; and cause the machine to perform at least one operation of the one or more operations in view of the usability of the sensor data.
  9. The system of claim 8, wherein the predefined visibility distance corresponds to a maximum distance value from a predefined range of values that is associated with the one or more operations.
  10. The system of claim 8, wherein the determination of the predefined visibility distance that is associated with the one or more operations comprises: determining, using the one or more machine learning models and based at least on the sensor data, a visibility distance value associated with the sensor data; and determining, based at least on the visibility distance value, the predefined visibility distance that is associated with the sensor data.
  11. The system of claim 8, wherein the one or more processors are further to: determine a plurality of predefined visibility distances associated with the sensor, the plurality of predefined visibility distances including at least the predefined visibility distance associated with the one or more operations, wherein the predefined visibility distance associated with the sensor data is further determined based at least on the plurality of predefined visibility distances.
  12. The system of claim 11, wherein the plurality of predefined visibility distances further includes a second predefined visibility distance that is associated with one or more second operations of the machine, the one or more second operations being different than the one or more operations.
  13. The system of claim 8, wherein the one or more processors are further to: determine, based at least on the sensor data, an output that is associated with a distance value within an environment, wherein the usability of the sensor data to perform the one or more operations of the machine is further determined based at least on the distance value associated with the output.
  14. The system of claim 13, wherein the determination of the usability of the sensor data to perform the one or more operations of the machine comprises determining to use the sensor data to perform the one or more operations of the machine based at least on the distance value being less than the predefined visibility distance.
  15. The system of claim 8, wherein the one or more operations include at least one of object tracking, object detection, path planning, obstacle avoidance, or an advanced driver assistance system operation.
  16. The system of claim 8, wherein the one or more machine learning models are trained, at least, by: determining, using the one or more machine learning models and based at least on training sensor data, one or more distance values associated with the training sensor data; comparing the one or more distance values to one or more ground truth distance values corresponding to one or more visibility distances associated with the training sensor data; and updating the one or more machine learning models based at least on the comparing the one or more distance values to the one or more ground truth distance values.
  17. The system of claim 10, wherein the system is included in at least one of: a control system for an autonomous or semi-autonomous machine; a perception system for an autonomous or semi-autonomous machine; a system for performing simulation operations; a system for performing deep learning operations; a system implemented using an edge device; a system implemented using a robot; a system incorporating one or more virtual machines (VMs); a system implemented at least partially in a data center; or a system implemented at least partially using cloud computing resources.
  18. An autonomous or semi-autonomous machine comprising: one or more central processing units (CPUs); one or more graphics processing units (GPUs); one or more hardware accelerators; and one or more external sensors having one or more fields of view or one or more sensory fields external to the autonomous or semi-autonomous machine, wherein the autonomous or semi-autonomous machine is to perform one or more planning, navigation, or control operations in accordance with a usability of sensor data as obtained using the one or more external sensors, the usability of the sensor data determined based at least on a visibility distance that is computed using one or more outputs of one or more machine learning models that process the sensor data.
  19. The autonomous or semi-autonomous machine of claim 18, wherein the visibility distance is computed, at least, by: determining, using the one or more machine learning models that process the sensor data, the one or more outputs indicating at least a visibility distance value associated with the sensor data; and determining, based at least on the visibility distance value, the visibility distance associated with the sensor data.
  20. The autonomous or semi-autonomous machine of claim 18, wherein the autonomous or semi-autonomous machine is further to: determine, based at least on the sensor data, an output that is associated with a distance value within an environment, wherein the usability of the sensor data to perform the one or more planning, navigation, or control operations is further determined based at least on the distance value associated with the output.
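The comparison in claim 1, in which an estimated visibility distance is matched against usability options individually associated with predefined visibility distances, can be sketched as follows. The option names and distance values are illustrative assumptions, not values from the patent.

```python
# Sketch of the claim-1 comparison (illustrative): match a model-estimated
# visibility distance against usability options, each associated with a
# predefined visibility distance. Option names/values are assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class UsabilityOption:
    name: str
    predefined_visibility_m: float  # minimum visibility the option requires

# Ordered from most to least demanding.
OPTIONS = [
    UsabilityOption("full_autonomy", 150.0),
    UsabilityOption("reduced_speed_autonomy", 60.0),
    UsabilityOption("driver_assistance_only", 20.0),
    UsabilityOption("sensor_data_unusable", 0.0),
]

def select_usability(estimated_visibility_m: float) -> UsabilityOption:
    """Return the first usability option whose predefined visibility
    distance the estimated distance meets or exceeds."""
    for option in OPTIONS:
        if estimated_visibility_m >= option.predefined_visibility_m:
            return option
    return OPTIONS[-1]
```

The machine would then perform its planning, navigation, or control operations in accordance with the selected option, as the final step of claim 1 recites.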

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 17/449,306, filed Sep. 29, 2021, which is related to U.S. Non-Provisional application Ser. No. 16/570,187, filed on Sep. 13, 2019, each of which is hereby incorporated by reference in its entirety.

BACKGROUND

Autonomous driving systems and semi-autonomous driving systems (e.g., advanced driver assistance systems (ADAS)) may leverage sensors (e.g., cameras, LiDAR sensors, RADAR sensors, etc.) to perform various tasks, such as blind spot monitoring, automatic emergency braking, lane keeping, object detection, obstacle avoidance, and localization. For example, for autonomous and ADAS systems to operate independently and efficiently, an understanding of the surrounding environment of the vehicle may be generated in real-time or near real-time. To accurately and efficiently understand the surrounding environment of the vehicle, the sensors must generate usable, unobscured sensor data (e.g., representative of images, depth maps, point clouds, etc.). However, a sensor's ability to perceive the surrounding environment may be compromised by a variety of sources, such as weather (e.g., rain, fog, snow, hail, smoke, etc.), traffic conditions, sensor blockage (e.g., from debris, moisture, etc.), or blur. As a result, the resulting sensor data may not clearly depict vehicles, obstacles, and/or other objects in the environment. Conventional systems for addressing compromised visibility distances have used feature-level approaches to detect individual pieces of visual evidence, and subsequently pieced these features together to determine that a compromised visibility condition exists.
These conventional methods primarily rely on computer vision techniques, such as analyzing the absence of sharp edge features (e.g., sharp changes in gradient, color, or intensity) in regions of the image, using color-based pixel analysis or other low-level feature analysis to detect potential visibility issues, and/or binary support vector machine classification with a blind versus not-blind output. However, such feature-based computer vision techniques require separate analysis of each feature (e.g., whether each feature is relevant to visibility or not) as well as an analysis of how to combine the different features for a specific reduced-visibility condition of the sensor, thereby limiting the scalability of such approaches due to the complexity inherent to the large variety and diversity of conditions and occurrences that can compromise data observed using sensors in real-world situations. For example, due to the computational expense of executing these conventional approaches, they are rendered ineffective for real-time or near real-time deployment. Further, conventional systems may rely on classifying causes of reduced sensor visibility, such as rain, snow, fog, or glare, but may not provide an accurate indication of the usability of the sensor data. For example, identifying rain in an image may not be actionable by the system for determining whether the corresponding image, or a portion thereof, is usable for various autonomous or semi-autonomous tasks. In such an example, where rain is present, the image may be deemed unusable by conventional systems, even though the image may clearly depict the environment within 100 meters of the vehicle. As such, instead of relying on the image for one or more tasks within the visible range, the image may be mistakenly discarded and the one or more tasks may be disabled.
In this way, by treating each type of compromised sensor visibility equally, less egregious or detrimental types of compromised visibility may cause an instance of sensor data to be deemed unusable even where this determination may not be entirely accurate (e.g., an image of an environment where a light drizzle is present may be usable for one or more operations, while an image of an environment with dense fog may not).

SUMMARY

Embodiments of the present disclosure relate to deep neural network processing for visibility distance estimation (e.g., a furthest distance from a sensor at which objects or elements may be discerned) in autonomous machine applications. Systems and methods are disclosed that use one or more machine learning models, such as deep neural networks (DNNs), to compute outputs indicative of an estimated visibility distance (e.g., in the form of a computed distance or a distance bin including a range of distances) corresponding to one or more sensors of an autonomous or semi-autonomous machine. For example, by predicting an estimated visibility distance, the reliance of the machine on associated sensor data for one or more downstream tasks, such as object detection, object tracking, obstacle avoidance, path planning, control decisions, and/or the like, may be adjusted. As such, where an estimated visibility distance is low (e.g., 20 meters or less), the corresponding sensor data may only be relied upon for Level 0 (no automation) or Level 1 (driver assistance)
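The training procedure recited in claim 16, predicting distance values, comparing them to ground-truth visibility distances, and updating the model, can be sketched with a toy model. This is a minimal illustration, not the patent's DNN: the scalar "feature" input and the linear model are assumptions chosen to keep the gradient-descent loop self-contained.

```python
# Minimal sketch of the claim-16 training loop (illustrative): a toy linear
# model maps a scalar feature to a predicted visibility distance, predictions
# are compared against ground-truth distances, and the model is updated via
# gradient descent on the mean squared error. The model form is an assumption.

def train_visibility_model(features, gt_distances, lr=0.05, epochs=2000):
    w, b = 0.0, 0.0  # toy model: predicted_distance = w * feature + b
    n = len(features)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(features, gt_distances):
            err = (w * x + b) - y          # compare prediction to ground truth
            grad_w += 2.0 * err * x / n
            grad_b += 2.0 * err / n
        w -= lr * grad_w                   # update the model parameters
        b -= lr * grad_b
    return w, b
```

In the patent's setting, the same compare-and-update cycle would be applied to a DNN processing training sensor data, with ground-truth visibility distances serving as the regression targets.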