
US-20260127955-A1 - DROWSINESS DETECTION FOR VEHICLE CONTROL


Abstract

Systems, methods and apparatus of drowsiness detection for vehicle control. For example, a vehicle includes: a camera configured to face a driver of the vehicle and generate a sequence of images of the driver driving the vehicle; an artificial neural network configured to analyze the sequence of images and classify, based on the sequence of images, whether the driver is in a drowsy state; and an infotainment system configured to provide instructions to the driver in response to a classification by the artificial neural network that the driver is in the drowsy state.
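The pipeline the abstract describes (driver-facing camera, a neural-network classifier over a sequence of images, and an infotainment system that issues instructions on a drowsy classification) can be sketched as follows. All names here (`DrowsinessMonitor`, `toy_classifier`, the "eye openness" feature) are hypothetical illustrations, not identifiers from the patent, and the toy classifier simply stands in for the artificial neural network:

```python
from dataclasses import dataclass, field
from typing import Callable, List

DROWSY, ALERT = "drowsy", "alert"

@dataclass
class DrowsinessMonitor:
    # classifier stands in for the artificial neural network: it maps a
    # sequence of driver images (here, arbitrary per-image feature values)
    # to a classification label
    classifier: Callable[[List[float]], str]
    alerts: List[str] = field(default_factory=list)

    def process(self, image_sequence: List[float]) -> str:
        label = self.classifier(image_sequence)
        if label == DROWSY:
            # the infotainment system provides instructions to the driver
            self.alerts.append("Please pull over and rest.")
        return label

# Toy stand-in classifier: call the driver drowsy when the average
# "eye openness" value across the image sequence is low.
def toy_classifier(images: List[float]) -> str:
    return DROWSY if sum(images) / len(images) < 0.5 else ALERT

monitor = DrowsinessMonitor(toy_classifier)
print(monitor.process([0.2, 0.3, 0.1]))  # drowsy
print(monitor.process([0.9, 0.8, 0.7]))  # alert
```

In a real system the classifier would be a trained network consuming camera frames; the sketch only shows how the camera, classifier, and instruction-issuing components fit together.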

Inventors

  • Poorna Kale
  • Robert Richard Noel Bielby

Assignees

  • LODESTAR LICENSING GROUP LLC

Dates

Publication Date
20260507
Application Date
20251231

Claims (20)

  1. A vehicle, comprising: one or more computer systems configured to: output one or more instructions to a driver in accordance with a first classification from a set of classifications, wherein the first classification is based at least in part on output data from one or more networks configured to analyze a set of images of the driver, and wherein the set of images is from one or more cameras; monitor, via the one or more cameras, for at least one driver reaction from a set of reactions in response to the one or more instructions; and determine whether the first classification is confirmed in accordance with additional output data from the one or more networks that is based at least in part on one or more subsequent images of the driver, the one or more subsequent images from the one or more cameras in accordance with the monitoring.
  2. The vehicle of claim 1, wherein the one or more computer systems are further configured to: determine that the first classification is confirmed based at least in part on the additional output data from the one or more networks, wherein the one or more subsequent images indicate an absence of the at least one driver reaction from the set of reactions.
  3. The vehicle of claim 2, wherein the one or more computer systems are further configured to: transfer control of the vehicle to a driver assistance system based at least in part on determining that the first classification is confirmed, wherein the driver assistance system is configured to modify a speed of the vehicle, limit the speed of the vehicle, activate emergency lighting of the vehicle, or any combination thereof.
  4. The vehicle of claim 1, wherein the one or more computer systems are further configured to: reject the first classification based at least in part on the additional output data from the one or more networks, wherein the one or more subsequent images indicate the at least one driver reaction from the set of reactions.
  5. The vehicle of claim 1, wherein the first classification comprises a drowsy classification.
  6. The vehicle of claim 1, wherein the at least one driver reaction comprises movement of a viewing direction of the driver's eye, movement of a hand of the driver, movement of the driver's head, or movement of one or more fingers of the driver.
  7. The vehicle of claim 1, wherein the set of reactions comprises predetermined reactions, reactions customized to the driver, or any combination thereof.
  8. The vehicle of claim 1, wherein, to output the one or more instructions, the one or more computer systems are further configured to: output the one or more instructions via an interface of the vehicle.
  9. The vehicle of claim 1, wherein, to output the one or more instructions, the one or more computer systems are further configured to: output the one or more instructions via audible instructions to the driver, wherein the audible instructions are output at varying volume levels.
  10. The vehicle of claim 1, wherein the one or more networks comprise one or more neural networks that are trained using a set of training images of persons.
  11. The vehicle of claim 1, further comprising: one or more storage devices configured to: store model data associated with the one or more networks; and store the set of images, the one or more subsequent images, or any combination thereof.
  12. The vehicle of claim 1, wherein the one or more cameras are configured to convert sensor data into an input to the one or more networks.
  13. A system, comprising: one or more sensors configured to generate a first set of images; one or more data storage devices configured to store the first set of images; and one or more computer systems associated with a neural network, the one or more computer systems configured to: obtain, from the neural network, output data indicative of a classification that is based at least in part on the first set of images analyzed by the neural network; output instructions to a driver of a vehicle in accordance with the classification indicated by the output data; and determine whether the classification is confirmed based at least in part on additional output data obtained from the neural network, the additional output data comprising an indication of whether at least one driver reaction was captured via the one or more sensors after the instructions were output.
  14. The system of claim 13, wherein the one or more computer systems are further configured to: confirm the classification based at least in part on the additional output data indicating an absence of at least one driver reaction after the instructions were output, wherein the first set of images is stored in the one or more data storage devices based at least in part on the classification being confirmed.
  15. The system of claim 14, further comprising: a driver assistance system that is configured for autonomous driving, wherein the one or more computer systems are further configured to: transfer control of the vehicle to the driver assistance system in accordance with the classification being confirmed, the classification comprising a drowsy classification, wherein, in response to the control of the vehicle being transferred to the driver assistance system, the driver assistance system is configured to: modify a speed of the vehicle, limit the speed of the vehicle, activate emergency lighting of the vehicle, place the vehicle in a location, place the vehicle in a state, validate control signals from the driver, or any combination thereof.
  16. The system of claim 13, wherein the one or more computer systems are further configured to: reject the classification based at least in part on the additional output data indicating that the at least one driver reaction was detected via the one or more sensors.
  17. A method, comprising: outputting one or more instructions to a driver of a vehicle in accordance with a first classification from a set of classifications, wherein the first classification is based at least in part on output data associated with a set of images of the driver, and wherein the set of images is from one or more cameras of the vehicle; monitoring, via the one or more cameras, for at least one driver reaction in response to the one or more instructions; and determining whether the first classification is confirmed in accordance with additional output data that is based at least in part on one or more subsequent images of the driver, the one or more subsequent images from the one or more cameras in accordance with the monitoring.
  18. The method of claim 17, further comprising: determining that the first classification is confirmed based at least in part on the additional output data, wherein the one or more subsequent images indicate an absence of the at least one driver reaction.
  19. The method of claim 18, further comprising: transferring control of the vehicle to a driver assistance system based at least in part on determining that the first classification is confirmed, wherein the driver assistance system is configured to modify a speed of the vehicle, limit the speed of the vehicle, activate emergency lighting of the vehicle, or any combination thereof.
  20. The method of claim 17, further comprising: rejecting the first classification based at least in part on the additional output data, wherein the one or more subsequent images indicate that the at least one driver reaction was captured via the one or more cameras.
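The confirm/reject logic that runs through the independent claims above (output an instruction, monitor subsequent images for a driver reaction, reject the classification if a reaction appears, confirm it otherwise) can be sketched as follows. The function name, the reaction labels, and the frame representation are hypothetical illustrations chosen for this sketch, not terms from the patent:

```python
from typing import Iterable, Set

# Predetermined reaction set, loosely mirroring the reaction types listed
# in claim 6 (eye, hand, head, and finger movements).
REACTIONS = {"eye_movement", "hand_movement", "head_movement", "finger_movement"}

def evaluate_classification(subsequent_frames: Iterable[Set[str]]) -> str:
    """Confirm or reject a drowsy classification from monitored frames.

    Each frame is the set of driver reactions detected in one subsequent
    image. Any reaction from the predetermined set rejects the
    classification; no reaction across all frames confirms it, after which
    control may be transferred to a driver assistance system.
    """
    for frame in subsequent_frames:
        if frame & REACTIONS:
            return "rejected"
    return "confirmed"

print(evaluate_classification([set(), {"head_movement"}]))  # rejected
print(evaluate_classification([set(), set()]))              # confirmed
```

In the claims the reaction detection itself comes from the neural network's additional output data; the sketch only shows the decision step that follows it.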

Description

RELATED APPLICATIONS

The present application is a continuation application of U.S. patent application Ser. No. 17/231,836, filed Apr. 15, 2021, entitled “DROWSINESS DETECTION FOR VEHICLE CONTROL,” which is a continuation application of U.S. patent application Ser. No. 16/547,136, filed Aug. 21, 2019, issued as U.S. Pat. No. 10,993,647 on May 4, 2021, and entitled “DROWSINESS DETECTION FOR VEHICLE CONTROL,” each of which is expressly incorporated by reference in its entirety herein.

FIELD OF THE TECHNOLOGY

At least some embodiments disclosed herein relate to vehicles in general and more particularly, but not limited to, detection of drowsiness in vehicle drivers.

BACKGROUND

Recent developments in the technological area of autonomous driving allow a computing system to operate, at least under some conditions, control elements of a motor vehicle without assistance from a human operator of the vehicle. For example, sensors (e.g., cameras and radars) can be installed on a motor vehicle to detect the conditions of the surroundings of the vehicle traveling on a roadway. A computing system installed on the vehicle analyzes the sensor inputs to identify the conditions and generate control signals or commands for autonomous adjustments of the direction and/or speed of the vehicle, with or without input from a human operator of the vehicle. In some arrangements, when a computing system recognizes a situation in which it may not be able to continue operating the vehicle in a safe manner, the computing system alerts the human operator of the vehicle and requests that the human operator take over control of the vehicle and drive manually, instead of allowing the computing system to drive the vehicle autonomously. Autonomous driving and/or an advanced driver assistance system (ADAS) typically involves an artificial neural network (ANN) for the identification of events and/or objects that are captured in sensor inputs.
In general, an artificial neural network (ANN) uses a network of neurons to process inputs to the network and to generate outputs from the network. For example, each neuron in the network receives a set of inputs. Some of the inputs to a neuron may be the outputs of certain neurons in the network; and some of the inputs to a neuron may be the inputs provided to the neural network. The input/output relations among the neurons in the network represent the neuron connectivity in the network.

For example, each neuron can have a bias, an activation function, and a set of synaptic weights for its respective inputs. The activation function may be in the form of a step function, a linear function, a log-sigmoid function, etc. Different neurons in the network may have different activation functions. For example, each neuron can generate a weighted sum of its inputs and its bias and then produce an output that is a function of the weighted sum, computed using the activation function of the neuron.

The relations between the input(s) and the output(s) of an ANN in general are defined by an ANN model that includes the data representing the connectivity of the neurons in the network, as well as the bias, activation function, and synaptic weights of each neuron. Using a given ANN model, a computing device computes the output(s) of the network from a given set of inputs to the network. For example, the inputs to an ANN may be generated based on camera inputs; and the outputs from the ANN may be the identification of an item, such as an event or an object.

A spiking neural network (SNN) is a type of ANN that closely mimics natural neural networks. An SNN neuron produces a spike as output when the activation level of the neuron is sufficiently high. The activation level of an SNN neuron mimics the membrane potential of a natural neuron. The outputs/spikes of the SNN neurons can change the activation levels of other neurons that receive the outputs.
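The per-neuron computation described earlier in this section (a weighted sum of the inputs plus a bias, passed through an activation function such as a log-sigmoid) can be sketched as a minimal example; the function name and the chosen activation are illustrative, not taken from the patent:

```python
import math
from typing import Sequence

def neuron_output(inputs: Sequence[float], weights: Sequence[float],
                  bias: float) -> float:
    """Single artificial neuron: weighted sum of inputs plus bias,
    passed through a log-sigmoid activation function."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # log-sigmoid activation

# With zero weights and zero bias, the sigmoid of 0 is exactly 0.5.
print(neuron_output([1.0, -2.0], [0.0, 0.0], 0.0))  # 0.5
```

An ANN model as described above would add the connectivity data: which neurons' outputs feed which neurons' inputs, plus the per-neuron weights, bias, and activation function.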
The current activation level of an SNN neuron as a function of time is typically modeled using a differential equation and is considered the state of the SNN neuron. Incoming spikes from other neurons can push the activation level of the neuron higher to reach a threshold for spiking. Once the neuron spikes, its activation level is reset. Before the spiking, the activation level of the SNN neuron can decay over time, as controlled by the differential equation. The element of time in the behavior of SNN neurons makes an SNN suitable for processing spatiotemporal data. The connectivity of an SNN is often sparse, which is advantageous in reducing computational workload.

In general, an ANN may be trained using a supervised method where the parameters in the ANN are adjusted to minimize or reduce the error between known outputs resulting from respective inputs and computed outputs generated from applying the inputs to the ANN. Examples of supervised learning/training methods include reinforcement learning and learning with error correction. Alternat