
US-12620241-B2 - Device for detecting driver behavior using deep learning-based object classification

US12620241B2

Abstract

A driver behavior detection system using deep learning-based object classification includes: a frame inputting unit for receiving image frames; a downsampling unit for downsampling resolutions of a previous frame and a current frame of the image frames; an active image producing unit for utilizing brightness values by color of the downsampled previous and current frames to produce an active image; an active region extracting unit for applying a sliding window algorithm to the produced active image to extract an active region having the biggest window value among window values; and a behavior detecting unit for applying an object classification algorithm to the extracted active region to classify and detect a driver's behavior.

Inventors

  • Dong Seog HAN
  • Jaeho SEONG
  • Youngjin YOON
  • Minwoo YOO

Assignees

  • KYUNGPOOK NATIONAL UNIVERSITY INDUSTRY-ACADEMIC COOPERATION FOUNDATION

Dates

Publication Date
2026-05-05
Application Date
2022-10-07
Priority Date
2021-10-12

Claims (5)

  1. A driver behavior detection system using deep learning-based object classification, the driver behavior detection system comprising: a frame inputting processor configured to receive image frames including a previous frame and a current frame; a downsampling processor configured to downsample a resolution of each of the previous frame and the current frame of the image frames by using a smoothing filter to remove motions of a predetermined size or under; an active image producing processor configured to utilize a brightness value for each color of the previous frame and the current frame that are downsampled to produce an active image; an active region extracting processor configured to: apply a sliding window algorithm to the produced active image and slide a window on the active image along a predetermined direction, extract window values for all window regions of the active image, and extract, as an active region, a window region having a biggest window value among the window values; and a behavior detecting processor configured to apply an object classification algorithm to the extracted active region to classify and detect a behavior of a driver, wherein the active image producing processor is configured to produce the active image through the following mathematical expression: Activation Map = (Rp − Rc)² + (Gp − Gc)² + (Bp − Bc)², wherein Rp, Gp, and Bp denote the R, G, and B values of the previous frame, respectively, and Rc, Gc, and Bc denote the R, G, and B values of the current frame, respectively.
  2. The driver behavior detection system according to claim 1, wherein the frame inputting processor is configured to receive the image frames of an image capturing the driver from at least one imaging means selected from a camera, a vision sensor, or a motion sensor mounted in a vehicle.
  3. The driver behavior detection system according to claim 1, wherein the behavior detecting processor is configured to input the extracted active region to a pre-learned deep learning-based object classification algorithm, to classify the behavior of the driver, and to detect the classified behavior of the driver.
  4. The driver behavior detection system according to claim 1, further comprising a warning output processor configured to output at least one selected from a warning speech, a warning sound, or a warning light corresponding to the detected behavior of the driver when the behavior of the driver detected through the behavior detecting processor is one of pre-classified risky behaviors.
  5. The driver behavior detection system according to claim 4, wherein the warning output processor is configured to change the volume of the warning speech or the warning sound and the number of flickers of the warning light according to predetermined risk levels, thereby outputting different levels of warning for different risk levels.
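The active-image and active-region steps recited in claim 1 can be sketched in a few lines: the active image is the per-pixel squared RGB difference between the previous and current frames, and the active region is the sliding window with the largest summed value. The sketch below is a minimal pure-Python illustration; the 4x4 frame size, window size, and pixel values are toy assumptions, not taken from the patent.

```python
# Sketch of the active-region extraction in claim 1 (pure Python).
# Frames are H x W lists of (R, G, B) tuples.

def activation_map(prev, curr):
    """Per-pixel activation: (Rp - Rc)^2 + (Gp - Gc)^2 + (Bp - Bc)^2."""
    h, w = len(prev), len(prev[0])
    return [
        [sum((prev[y][x][c] - curr[y][x][c]) ** 2 for c in range(3))
         for x in range(w)]
        for y in range(h)
    ]

def extract_active_region(amap, win=2):
    """Slide a win x win window over the map and return the top-left
    coordinate and value of the window with the largest summed activation."""
    h, w = len(amap), len(amap[0])
    best, best_xy = -1, (0, 0)
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            value = sum(amap[y + dy][x + dx]
                        for dy in range(win) for dx in range(win))
            if value > best:
                best, best_xy = value, (y, x)
    return best_xy, best

# Toy 4x4 frames: only the bottom-right pixel changes between frames.
prev = [[(0, 0, 0)] * 4 for _ in range(4)]
curr = [[(0, 0, 0)] * 4 for _ in range(4)]
curr[3][3] = (10, 0, 0)

amap = activation_map(prev, curr)
(y, x), value = extract_active_region(amap, win=2)
print((y, x), value)  # the winning window covers the changed pixel
```

Only this single winning window is then handed to the deep learning classifier, which is what keeps the per-frame computation small compared with running object detection on the whole image.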

Description

BACKGROUND OF THE DISCLOSURE

Field of the Disclosure

The present disclosure relates to a driver behavior detection system using deep learning-based object classification, and more specifically to a driver behavior detection system using deep learning-based object classification that is capable of detecting a driver's risky behaviors at high speed while a vehicle is being driven.

Background of the Related Art

Driver behaviors such as smartphone use, smoking, eating, and the like break the driver's concentration on the road while driving and thus cause car accidents. Many drivers assume that smartphone use while driving is less risky than drunk driving or drowsy driving, but according to a survey, the accident risk of smartphone use while driving is similar to that of drunk driving. Further, the forward attention rate when a driver uses a smartphone while driving is only 50.3%, which, according to study results, is 23 times riskier than driving with a blood alcohol content of 0.1%; drivers should therefore recognize the risk of smartphone use while driving. Moreover, if the driver engages in unusual behaviors while driving, it is hard for him or her to react quickly to unexpected risky situations, which causes car accidents. To detect a driver's risky behaviors, deep learning object detection methods that detect the type and position of an object in an image have been suggested. However, a deep learning object detection algorithm consumes substantial computational resources, and such a method is difficult to apply in an embedded vehicle environment where computational resources are limited. There is therefore a need to develop a technology capable of detecting an object with a small amount of computation. A background technology of the present disclosure is disclosed in Korean Patent No. 10-2282730 (issued on Jul. 29, 2021).
SUMMARY OF THE DISCLOSURE

Technical Problem

Accordingly, the present disclosure has been made in view of the above-mentioned problems occurring in the related art, and it is an object of the present disclosure to provide a driver behavior detection system using deep learning-based object classification that exploits the fact that a driver moves little while driving: it senses the driver's behavior change from the brightness change between frames captured in real time, and applies a deep learning-based object classification algorithm only to the region where the behavior change is sensed.

Technical Solution

To accomplish the above-mentioned objects, according to the present disclosure, there is provided a driver behavior detection system using deep learning-based object classification, including: a frame inputting unit for receiving image frames; a downsampling unit for downsampling the resolutions of a previous frame and a current frame of the image frames; an active image producing unit for utilizing the per-color brightness values of the downsampled previous and current frames to produce an active image; an active region extracting unit for applying a sliding window algorithm to the produced active image to extract the active region having the biggest window value among the window values; and a behavior detecting unit for applying an object classification algorithm to the extracted active region to classify and detect the driver's behavior. According to the present disclosure, desirably, the frame inputting unit may receive, in units of image frames, an image capturing the driver through at least one imaging means selected from a camera, a vision sensor, and a motion sensor mounted in a vehicle.
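The downsampling unit mentioned above can be sketched as a smoothing (averaging) filter followed by decimation, so that motions smaller than the filter support are averaged away before the active image is computed. In the sketch below, the 2x2 box filter and the single-channel (grayscale) input are illustrative assumptions for brevity; the patent's system would smooth each of R, G, and B.

```python
# Minimal sketch of smoothing-filter downsampling: average
# non-overlapping 2x2 blocks, halving each frame dimension.

def downsample(frame):
    """Return a half-resolution frame of 2x2 block averages."""
    h, w = len(frame), len(frame[0])
    return [
        [(frame[y][x] + frame[y][x + 1] +
          frame[y + 1][x] + frame[y + 1][x + 1]) / 4.0
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

# A 4x4 frame with a single 1-pixel "motion": after smoothing and
# decimation it survives only at 1/4 strength in one low-res pixel.
frame = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 40, 0],
         [0, 0, 0, 0]]
small = downsample(frame)
print(small)  # [[0.0, 0.0], [0.0, 10.0]]
```

Because small brightness fluctuations are attenuated this way, only sustained, larger movements of the driver produce strong values in the subsequent active image.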
According to the present disclosure, desirably, the downsampling unit may perform the downsampling by means of a smoothing filter that decreases the resolutions of the previous frame and the current frame so as to remove motions of a given size or under. According to the present disclosure, desirably, the active image producing unit may produce the active image through the following mathematical expression: Activation Map = (Rp − Rc)² + (Gp − Gc)² + (Bp − Bc)² (wherein Rp, Gp, and Bp represent the R, G, and B values of the previous frame, respectively, and Rc, Gc, and Bc represent the R, G, and B values of the current frame, respectively). According to the present disclosure, desirably, in extracting a determination value for the central region of a window having a predetermined size, the active region extracting unit may first extract determination values for all regions of the active image while sliding the window over the active image along a set direction, and may then extract, as the active region, the window region having the biggest determination value among the determination values. According to the present disclosure, desirably, the behavior detecting unit may input the active region extracted through the active region extracting unit to the pre-learned deep learning-based object classification algorithm to classify the driver's behavior.
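The behavior-detecting step can be sketched as cropping the extracted active region from the frame and passing it to a pre-learned classifier. In the sketch below, `classify` is only a stand-in for the patent's trained deep network (it thresholds mean brightness), and the behavior labels, frame contents, and threshold are all hypothetical placeholders.

```python
# Hypothetical sketch of the behavior-detecting step: crop the active
# region, then classify it. `classify` stands in for a pre-learned deep
# learning-based object classification network.

BEHAVIORS = ["normal driving", "smartphone use"]  # placeholder labels

def crop(frame, y, x, win):
    """Cut a win x win region whose top-left corner is (y, x)."""
    return [row[x:x + win] for row in frame[y:y + win]]

def classify(region):
    """Placeholder classifier: threshold the region's mean brightness."""
    pixels = [v for row in region for v in row]
    mean = sum(pixels) / len(pixels)
    return BEHAVIORS[1] if mean > 50 else BEHAVIORS[0]

frame = [[0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 200, 200],
         [0, 0, 200, 200]]
region = crop(frame, 2, 2, 2)
print(classify(region))  # "smartphone use" under this toy threshold
```

In a real deployment, the cropped region would be resized to the classifier's input resolution and inference would run only on that crop, which is the source of the computational savings the disclosure claims over whole-frame object detection.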