US-12623691-B2 - Fail-safe corrective actions based on vision information for autonomous vehicles

US12623691B2

Abstract

Systems and methods for fail-safe corrective actions based on vision information for autonomous driving. An example method is implemented by a processor system included in a vehicle, with the method comprising obtaining images from image sensors positioned about the vehicle. Visibility information is determined for at least a portion of the images. Adjustment of operation of an autonomous vehicle is caused based on the visibility information.

Inventors

  • Uma Balakrishnan
  • Daniel Hunter
  • Akash Chaurasia
  • Yun-Ta Tsai
  • Akshay Vijay Phatak

Assignees

  • TESLA, INC.

Dates

Publication Date
2026-05-12
Application Date
2023-05-19

Claims (19)

  1. A method implemented by a processor system included in a vehicle, the method comprising: obtaining first images from first image sensors positioned at fixed locations about the vehicle, the first image sensors corresponding to machine learning models that are trained using second images obtained from second image sensors at the fixed locations about one or more second vehicles; determining visibility information comprising a first visibility value for at least a first portion of a first image and a second visibility value for at least a second portion of the first image, wherein the first image is input into a machine learning model corresponding to an image sensor that generated the first image and a forward pass through the machine learning model is computed, and wherein the machine learning model is trained to assign individual visibility values indicative of a degree of visibility loss associated with individual portions of the first images; and providing a control signal to cause adjustment of operation of at least one subsystem of the vehicle to adjust a speed of the vehicle based on the visibility information comprising the first visibility value and the second visibility value.
  2. The method of claim 1, wherein the machine learning model is a convolutional neural network and/or a transformer network.
  3. The method of claim 1, wherein the visibility information reflects one or more scene tags indicative of labels associated with loss of visibility.
  4. The method of claim 3, further comprising: updating a user interface presented via a display of the vehicle to indicate a particular scene tag and a textual description of the adjustment.
  5. The method of claim 3, wherein the scene tags comprise haze, rain, smoke, or fog.
  6. The method of claim 1, wherein causing adjustment of operation comprises reducing the speed associated with an autonomous driving mode.
  7. The method of claim 1, wherein the first visibility value indicates a first severity associated with a reduction in visibility for first portions of the first images and a second visibility value indicates a second severity associated with a reduction in visibility for second portions of the first images, and wherein the first visibility value and the second visibility value are selected, by the machine learning model, from a range of values.
  8. The method of claim 7, wherein each image is separated into a plurality of portions each representing a rectangular pixel area.
  9. A system comprising one or more processors and non-transitory computer storage media including instructions that when executed by the processors cause the processors to perform operations, wherein the system is included in a vehicle, and wherein the operations comprise: obtaining first images from first image sensors positioned at fixed locations about the vehicle, the first image sensors corresponding to machine learning models that are trained using second images obtained from second image sensors at the fixed locations about one or more second vehicles; determining visibility information comprising a first visibility value for at least a first portion of a first image and a second visibility value for at least a second portion of the first image, wherein the first image is input into a machine learning model corresponding to an image sensor that generated the first image and a forward pass through the machine learning model is computed, and wherein the machine learning model is trained to assign individual visibility values indicative of a degree of visibility loss associated with individual portions of first images; and providing a control signal to cause adjustment of operation of at least one subsystem of the vehicle to adjust a speed of the vehicle based on the visibility information comprising the first visibility value and the second visibility value.
  10. The system of claim 9, wherein the machine learning model is a convolutional neural network and/or a transformer network.
  11. The system of claim 9, wherein the visibility information reflects one or more scene tags indicative of labels associated with loss of visibility.
  12. The system of claim 11, wherein the operations further comprise: updating a user interface presented via a display of the vehicle to indicate a particular scene tag and a textual description of the adjustment.
  13. The system of claim 11, wherein the scene tags comprise haze, rain, smoke, or fog.
  14. The system of claim 9, wherein causing adjustment of operation comprises reducing the speed associated with an autonomous driving mode.
  15. The system of claim 9, wherein the first visibility value indicates a first severity associated with a reduction in visibility for the first portions of the first images and a second visibility value indicates a second severity associated with a reduction in visibility for second portions of the first images, and wherein the first visibility value and the second visibility value are selected, by the machine learning model, from a range of values.
  16. The system of claim 15, wherein each image is separated into a plurality of portions each representing a rectangular pixel area.
  17. A non-transitory computer storage media storing instructions that when executed by a system of one or more processors, cause the one or more processors to perform operations, wherein the system is included in a vehicle, and wherein the operations comprise: obtaining first images from first image sensors positioned at fixed locations about the vehicle, the first image sensors corresponding to machine learning models that are trained using second images obtained from second image sensors at the fixed locations about one or more second vehicles; determining visibility information comprising a first visibility value for at least a first portion of a first image and a second visibility value for at least a second portion of the first image, wherein the first image is input into a machine learning model and a forward pass through the machine learning model corresponding to an image sensor that generated the first image is computed, and wherein the machine learning model is trained to assign individual visibility values indicative of a degree of visibility loss associated with individual portions of first images; and providing a control signal to cause adjustment of operation of at least one component of the vehicle to adjust a speed of the vehicle based on the visibility information comprising the first visibility value and the second visibility value.
  18. The computer storage media of claim 17, wherein the visibility information reflects one or more scene tags indicative of labels associated with loss of visibility, wherein causing adjustment of operation comprises updating a user interface presented via a display of the vehicle, and wherein the updated user interface indicates a particular scene tag and a textual description of the adjustment.
  19. The computer storage media of claim 17, wherein causing adjustment of operation comprises reducing a speed associated with an autonomous driving mode.
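
For illustration, the following is a minimal sketch of the per-portion visibility model that claims 1, 7, and 8 describe: a convolutional network that takes a camera image, computes a forward pass, and assigns a visibility-loss value to each rectangular portion of the image. This is an assumed PyTorch implementation, not the disclosed one; the layer structure, grid size, and [0, 1] value range are illustrative choices.

```python
import torch
import torch.nn as nn

class VisibilityGridNet(nn.Module):
    """Maps one camera image to a grid of per-portion visibility-loss values."""

    def __init__(self, grid_h: int = 8, grid_w: int = 8):
        super().__init__()
        # Small convolutional backbone (a stand-in for the claimed model).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
        )
        # One pooled cell per rectangular pixel area of the input image.
        self.pool = nn.AdaptiveAvgPool2d((grid_h, grid_w))
        # 1x1 convolution maps each cell to a single visibility value.
        self.head = nn.Conv2d(64, 1, kernel_size=1)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W) -> (batch, grid_h, grid_w), values in [0, 1],
        # with higher values indicating a greater degree of visibility loss.
        x = self.features(image)
        x = self.pool(x)
        return torch.sigmoid(self.head(x)).squeeze(1)

# One forward pass over a frame from one image sensor, as in claim 1.
model = VisibilityGridNet()
frame = torch.rand(1, 3, 480, 640)  # placeholder for a camera image
visibility_grid = model(frame)      # shape (1, 8, 8)
```

In this sketch, the adaptive pooling stage fixes the number of rectangular portions regardless of input resolution, mirroring the claims' assignment of individual visibility values to individual image portions.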

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Prov. App. No. 63/365,050, titled "FAIL-SAFE CORRECTIVE ACTIONS BASED ON VISION INFORMATION FOR AUTONOMOUS VEHICLES" and filed on May 20, 2022, the disclosure of which is hereby incorporated herein by reference in its entirety. This application also claims priority to U.S. Prov. App. No. 63/365,078, titled "VISION-BASED MACHINE LEARNING MODEL FOR AUTONOMOUS DRIVING WITH ADJUSTABLE VIRTUAL CAMERA" and filed on May 20, 2022, the disclosure of which is hereby incorporated herein by reference in its entirety.

BACKGROUND

Technical Field

The present disclosure relates to machine learning models, and more particularly, to machine learning models using vision information.

Description of Related Art

Neural networks are relied upon for disparate uses and are increasingly forming the underpinnings of technology. For example, a neural network may be leveraged to perform object classification on an image obtained via a user device (e.g., a smartphone). In this example, the neural network may represent a convolutional neural network which applies convolutional layers, pooling layers, and one or more fully-connected layers to classify objects depicted in the image. As another example, a neural network may be leveraged for translation of text between languages. For this example, the neural network may represent a recurrent neural network.

Complex neural networks are additionally being used to enable autonomous or semi-autonomous driving functionality for vehicles. For example, an unmanned aerial vehicle may leverage a neural network to, in part, enable autonomous navigation about a real-world area. In this example, the unmanned aerial vehicle may leverage sensors to detect upcoming objects and navigate around the objects. As another example, a car or truck may execute neural network(s) to autonomously or semi-autonomously navigate about a real-world area. At present, such neural networks may rely upon costly, or error-prone, sensors. Additionally, such neural networks may lack accuracy with respect to detecting and classifying objects, causing deficient autonomous or semi-autonomous driving performance.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram illustrating an example autonomous or semi-autonomous vehicle which includes a multitude of image sensors and an example processor system.
FIG. 2A is a block diagram illustrating the processor system determining visibility information based on received images.
FIG. 2B is a block diagram illustrating examples of visibility information determined based on a received image.
FIGS. 2C-2E illustrate example images labeled with grids of visibility values.
FIG. 3A is a block diagram illustrating example signals/corrective actions to be used by an autonomous vehicle.
FIG. 3B is a block diagram illustrating an example user interface identifying an example signal/corrective action.
FIG. 4 is a flowchart of an example process for determining visibility information to be used in autonomous driving.
FIG. 5 is a block diagram illustrating the processor system determining visibility information using a virtual camera network.
FIG. 6 is a block diagram illustrating an example vehicle which includes the vehicle processor system.

Embodiments of the present disclosure and their advantages are best understood by referring to the detailed description that follows.
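
For illustration, a minimal sketch of the convolutional classifier pattern described in the related art above: convolutional layers, pooling layers, and a fully-connected layer. The layer sizes, input resolution, and ten-class output are assumptions, not taken from the disclosure.

```python
import torch
import torch.nn as nn

# Convolutional layers, pooling layers, and a fully-connected layer,
# as in the object-classification example above; 10 classes are assumed.
classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                     # pooling layer
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                     # pooling layer
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 10),         # fully-connected layer
)

logits = classifier(torch.rand(1, 3, 224, 224))  # (1, 10) class scores
```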
It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein showings therein are for purposes of illustrating embodiments of the present disclosure and not for purposes of limiting the same.

DETAILED DESCRIPTION

Introduction

This application describes techniques to monitor for, and take fail-safe actions in response to, reduced visibility of image sensors during autonomous or semi-autonomous driving of an autonomous vehicle (collectively referred to herein as autonomous driving). During operation of an autonomous vehicle, sensor information may be received and processed to effectuate autonomous driving. As may be appreciated, the sensors used to obtain the sensor information may have reduced visibility based on current weather (e.g., fog, snow, rain), objects blocking the sensors, and so on. Thus, to ensure safe and accurate autonomous driving, this application describes techniques to reliably identify visibility issues. For example, a machine learning model (e.g., a convolutional neural network) may be used to characterize or model visibility associated with the sensor information. Based on the visibility issues, certain corrective actions may be taken. For example, braking may be applied, or autonomous operation may be temporarily turned off, to enable a person to take over driving.

The autonomous driving described herein may use image sensors, such as cameras, which are positioned about an autonomous vehicle. The image sensors may obtain images at a particular frame rate, or an
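
As a hedged sketch of the fail-safe logic this introduction describes, the following shows one way per-portion visibility values could be mapped to a corrective action such as reducing speed or returning control to the driver. The thresholds, action names, and worst-cell aggregation rule are illustrative assumptions, not the disclosed control system.

```python
from enum import Enum

class Action(Enum):
    NONE = "continue"
    REDUCE_SPEED = "reduce speed"
    HAND_OVER = "hand control to driver"

def select_action(visibility_grid, reduce_at=0.5, hand_over_at=0.8):
    """Map per-portion visibility-loss values in [0, 1] to a corrective action.

    Thresholds are illustrative; taking the worst cell is one possible
    aggregation of the per-portion values.
    """
    worst = max(max(row) for row in visibility_grid)
    if worst >= hand_over_at:
        return Action.HAND_OVER
    if worst >= reduce_at:
        return Action.REDUCE_SPEED
    return Action.NONE

# Example: severe visibility loss in one image portion reduces speed.
print(select_action([[0.1, 0.2], [0.6, 0.3]]))  # Action.REDUCE_SPEED
```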