JP-7857404-B2 - Device and method for assisting vehicle operation based on Situational Assessment with Indexed Risk (SAFER).
Inventors
- Heck, Stefan
- Alpert, Benjamin
- Mahmud, Tahmida
- Billah, Mohammad
- Hornstein, Ilan
Assignees
- Nauto, Inc.
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2022-05-14
- Priority Date
- 2022-04-21
Claims (17)
- The device comprises a first sensor configured to provide a first input relating to the environment external to a vehicle, a second sensor configured to provide a second input relating to the operation of the vehicle, and a processing unit configured to receive the first input from the first sensor and the second input from the second sensor, wherein the processing unit includes a first-stage processing system and a second-stage processing system, the first-stage processing system configured to receive the first input from the first sensor, receive the second input from the second sensor, process the first input to obtain first time-series information, and process the second input to obtain second time-series information, and the second-stage processing system comprises a neural network model configured to receive the first time-series information and the second time-series information in parallel, the neural network model configured to process the first time series and the second time series to determine a probability of a predicted event relating to the operation of the vehicle. The predicted event is for a future time at least one second after the current time. The first input contains first raw data, and the first time-series information is of lower dimensionality or lower complexity than the first input.
- The apparatus according to claim 1, wherein the first time-series information indicates a first risk factor, and the second time-series information indicates a second risk factor.
- The apparatus according to claim 1, wherein the processing unit is configured to package the first time series and the second time series into a data structure for supplying to a neural network model.
- The apparatus according to claim 1, wherein the first time series indicates a state of the environment outside the vehicle at different points in time, and the second time series indicates a state of the driver and/or a state of the vehicle at different points in time.
- The apparatus according to claim 1, wherein the probability of the predicted event is the first probability of the first predicted event, the processing unit is configured to determine the second probability of the second predicted event, the first and second predicted events relate to the operation of a vehicle, and the processing unit is configured to calculate a risk score based on the first probability of the first predicted event and the second probability of the second predicted event.
- The apparatus according to claim 5, wherein the first predicted event is a collision event, the second predicted event is a non-hazardous event, and the processing unit is configured to calculate the risk score based on a first probability of a collision event and a second probability of a non-hazardous event.
- The apparatus according to claim 5, wherein the processing unit is configured to calculate the risk score by applying a first weighting to the first probability to obtain a first weighted probability, applying a second weighting to the second probability to obtain a second weighted probability, and adding the first weighted probability and the second weighted probability.
- The apparatus according to claim 5, wherein the processing unit is configured to determine a third probability of a third predicted event, and the processing unit is configured to calculate the risk score based on a first probability of the first predicted event, a second probability of the second predicted event, and a third probability of the third predicted event.
- The apparatus according to claim 8, wherein the first predicted event is a collision event, the second predicted event is a near-collision event, and the third predicted event is a non-hazardous event, and the processing unit is configured to calculate the risk score based on the first probability of the collision event, the second probability of the near-collision event, and the third probability of the non-hazardous event.
- The apparatus according to claim 8, wherein the processing unit is configured to calculate the risk score by applying a first weight to the first probability to obtain a first weighted probability, applying a second weight to the second probability to obtain a second weighted probability, applying a third weight to the third probability to obtain a third weighted probability, and adding the first weighted probability, the second weighted probability, and the third weighted probability.
- The apparatus according to claim 1, wherein the first input and the second input are acquired over the past T seconds, and the processing unit is configured to process the first input and the second input acquired over the past T seconds to determine the probability of the predicted event, where T is at least 3 seconds.
- The apparatus according to claim 1, wherein the processing unit is configured to calculate a first risk score for a first time point based on the probability, and the processing unit is also configured to calculate a second risk score for a second time point and to identify a difference between the first risk score and the second risk score, the difference indicating whether a dangerous situation is escalating or fading.
- The apparatus according to claim 1, wherein the processing unit is configured to determine a risk score based on the probability of a predicted event.
- The apparatus according to claim 13, wherein the processing unit is configured to generate a control signal based on the risk score, the control signal being for operating a device, and the device comprises a speaker that emits a warning, a display or light-emitting device that provides a visual signal, a haptic feedback device, a collision avoidance system, or a vehicle control device for the vehicle.
- The apparatus according to claim 1, wherein the first time series and the second time series each include any two or more of the following: distance to the preceding vehicle, distance to the intersection stop line, vehicle speed, time to collision, time to intersection violation, estimated braking distance, information on road conditions, information on special zones, information on the environment, information on traffic conditions, time, information on visibility conditions, information on identified objects, object position, direction of movement of objects, object speed, boundary box, vehicle operating parameters, information on the driver's state, information on the driver's history, continuous driving time, proximity to meal times, information on accident history, and voice information.
- The apparatus according to claim 1, wherein the first sensor includes a camera, a lidar, a radar, or any combination thereof configured to sense the environment outside the vehicle, and/or the second sensor includes a camera configured to view the driver of the vehicle.
- The apparatus according to claim 1, wherein the first input includes a first image, the second input includes a second image, and the first stage processing system is configured to receive the first image and the second image, process the first image to obtain first time-series information, and process the second image to obtain second time-series information.
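Claims 7 and 10 define the risk score as a weighted sum of the predicted-event probabilities. The following is a minimal illustrative sketch of that calculation; the function name, weights, and probability values are hypothetical, not taken from the patent:

```python
def risk_score(probs, weights):
    """Weighted sum of predicted-event probabilities (per claims 7 and 10).

    probs   -- probabilities of (collision, near-collision, non-hazardous)
    weights -- per-event weights; more hazardous events carry larger weights
    """
    return sum(w * p for w, p in zip(weights, probs))

# Hypothetical values: collision 0.1, near-collision 0.3, non-hazardous 0.6,
# with the non-hazardous event weighted to zero.
score = risk_score([0.1, 0.3, 0.6], [1.0, 0.5, 0.0])
print(score)  # 0.25
```

Weighting the non-hazardous event at zero makes the score depend only on the hazardous outcomes, consistent with claim 6's use of both a collision probability and a non-hazardous probability.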
Description
Related Application Data
[0001] This application claims priority to, and the benefit of, U.S. Provisional Patent Application No. 63/285,073 filed on 1 December 2021, U.S. Patent Application No. 17/726,236 filed on 21 April 2022, and U.S. Patent Application No. 17/726,269 filed on 21 April 2022.
[0002] This field relates to devices and methods for assisting vehicle operation and for scoring and underwriting driving behavior, and more particularly to devices and methods for identifying driving and situational risks.
[0003] Sensors (cameras, radar, lidar, etc.) are used in vehicles to capture images of road conditions outside the vehicle. For example, cameras can be installed on a vehicle to monitor the vehicle's travel path or to monitor surrounding vehicles.
[0004] It is desirable to use camera images to provide collision prediction and/or intersection violation prediction. It is also desirable to warn the driver or automatically operate the vehicle in response to predicted risks such as collisions and intersection violations. Furthermore, it is desirable to provide output indicating the quality of driving, which can be used for driver training and vehicle management. Alternatively or additionally, this output can be used to compare the actions of actual drivers with those of good drivers in similar situations.
[0005] Novel techniques for determining and tracking the risk of collision and/or the risk of intersection violations are described herein. Novel techniques for providing control signals that activate warning or feedback generators to warn drivers of the risk of collision and/or intersection violations, and/or to mitigate such risks, are also described.
[0006] The device comprises a first sensor configured to provide a first input relating to the environment external to the vehicle, a second sensor configured to provide a second input relating to the operation of the vehicle, and a processing unit configured to receive the first input from the first sensor and the second input from the second sensor, wherein the processing unit includes a first-stage processing system and a second-stage processing system, the first-stage processing system configured to receive the first input from the first sensor, receive the second input from the second sensor, process the first input to obtain first time-series information, and process the second input to obtain second time-series information, and the second-stage processing system comprises a neural network model configured to receive the first time-series information and the second time-series information in parallel, the neural network model configured to process the first time series and the second time series to determine a probability of a predicted event relating to the operation of the vehicle.
[0007] As a non-limiting example, the first sensor may be a camera, a lidar, a radar, or any combination thereof, configured to sense characteristics of the environment outside the vehicle. The first input from the first sensor may include one or more time series.
[0008] As a non-limiting example, the second sensor may be a camera, depth sensor, radar, or any combination thereof, configured to capture images of and/or detect the driver's state. Alternatively or additionally, the second sensor may include one or more sensing units configured to sense one or more states of the vehicle (e.g., speed, acceleration, deceleration, braking, direction, steering angle, brake operation, wheel traction, engine state, brake pedal position, accelerator pedal position, turn signal state, etc.). The second input from the second sensor may include one or more time series.
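The two-stage pipeline of [0006] can be sketched as follows. This is a hedged illustration only: the feature reductions, dimensions, and single-layer stand-in for the neural network model are assumptions for demonstration, not the patent's actual implementation.

```python
import numpy as np

def first_stage(raw_external, raw_vehicle):
    # Reduce raw sensor data to lower-dimensional time series (per claim 1,
    # the time series is less complex than the raw input); a per-time-step
    # mean is used here purely as a placeholder reduction.
    ext_series = np.mean(raw_external, axis=1)  # one value per time step
    veh_series = np.mean(raw_vehicle, axis=1)
    return ext_series, veh_series

def second_stage(ext_series, veh_series, W, b):
    # Package both time series into one data structure (claim 3) and apply
    # a single linear layer plus softmax as a stand-in for the neural
    # network model, yielding probabilities over the predicted events
    # (collision, near-collision, non-hazardous).
    x = np.concatenate([ext_series, veh_series])  # parallel inputs combined
    logits = W @ x + b
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
ext, veh = first_stage(rng.random((10, 8)), rng.random((10, 4)))
probs = second_stage(ext, veh, rng.random((3, 20)), np.zeros(3))
print(probs)  # three event probabilities summing to 1
```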
[0009] Optionally, the processing unit is configured to combine (e.g., fuse) multiple metadata streams representing different situational aspects in order to determine whether the risks combine, or combine in a manner (e.g., nonlinearly, exponentially, etc.), that results in a highly dangerous situation. For example, a driver smoking a cigarette may not, in itself, be very dangerous. However, if the same driver is also using a cell phone and is following a preceding vehicle too closely, the combined risk (e.g., smoking risk + cell phone use risk + excessive proximity risk) may represent a highly dangerous situation. A risky situation may be represented by a risk value that increases nonlinearly (e.g., exponentially) with the combination of risk factors. In some embodiments, each metadata stream may be time-series data obtained by processing raw data from one or more sensors.
[0010] In some embodiments, the processing unit is configured to identify peaks (or spikes) in a combination of risks, where the peaks (or spikes) represent escalating risk situations. In some embodiments, the escalating risk situation may be represented by risk values that increase nonlinearly (e.g., exponentially) with the combination of risk factors.
[0011] In some embodiments, the processing unit is configured to identif
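The nonlinear combination of risk factors described in [0009] and [0010], and the escalation check of claim 12, might be sketched as follows. The exponential combination rule and the factor values are illustrative assumptions; the patent does not specify a particular combining function.

```python
import math

def combined_risk(factors):
    # Individually small risks (smoking, phone use, tailgating) compound
    # nonlinearly: here the combined value grows exponentially with the
    # sum of the factors, as one possible realization of [0009].
    return math.expm1(sum(factors))

def escalation(score_t1, score_t2):
    # Claim 12: the difference between risk scores at two time points
    # indicates whether the dangerous situation is escalating or fading.
    return score_t2 - score_t1

low = combined_risk([0.1])             # smoking alone: small risk
high = combined_risk([0.1, 0.4, 0.6])  # plus phone use and tailgating
print(high > 3 * low)             # True: whole exceeds the sum of parts
print(escalation(low, high) > 0)  # True: the risk is escalating
```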