
EP-3889825-B1 - VEHICLE LANE LINE DETECTION METHOD, VEHICLE, AND COMPUTING DEVICE


Inventors

  • MOU, Xiaozheng

Dates

Publication Date
2026-05-06
Application Date
2018-12-10

Claims (11)

  1. A method (300) for detecting a lane line to be executed by a computing device, comprising:
     generating (S310) an optical flow image using a series of event data from a dynamic vision sensor coupled to a vehicle, each event being triggered by movement of an object in a scenario relative to the dynamic vision sensor, wherein the generating the optical flow image in accordance with the series of event data from the dynamic vision sensor coupled to the vehicle comprises: dividing the event data within a predetermined interval into a predetermined quantity of event segments in a chronological order of the timestamps; assigning different pixel values to events in different event segments, the pixel values varying in a chronological order of the events; and generating the optical flow image in accordance with a coordinate position and a pixel value of each event;
     determining (S320) an initial search region comprising a start point of the lane line in accordance with the optical flow image, wherein the initial search region includes an initial first search region including the start point of the left lane line and an initial second search region including the start point of the right lane line, and wherein the positions of the start point of the left lane line and the start point of the right lane line are set in advance in accordance with a position of the dynamic vision sensor;
     determining (S330) a center of gravity of the initial search region, comprising a center of gravity of the initial first search region and a center of gravity of the initial second search region, wherein the determining the center of gravity of the initial search region comprises: selecting pixels that meet a first predetermined condition in the initial first search region and the initial second search region; and calculating average coordinates of the selected pixels in each of the initial first search region and the initial second search region to acquire the center of gravity of each of the initial first search region and the initial second search region;
     determining (S340) a new search region through an offsetting operation on the center of gravity, comprising offsetting the center of gravity of the initial first search region and the center of gravity of the initial second search region respectively, and determining a new first search region and a new second search region correspondingly;
     determining (S350) a center of gravity of the new search region;
     repeating (S360) the steps of determining a new search region (S340) and determining a center of gravity (S350) of the new search region iteratively to acquire centers of gravity of a plurality of search regions; and
     determining (S370) the lane line in accordance with the centers of gravity of the plurality of search regions, comprising acquiring, through fitting using a least square method, the left lane line according to the centers of gravity of the first search regions and the right lane line according to the centers of gravity of the second search regions.
  2. The method (300) according to claim 1, wherein the dynamic vision sensor is arranged at a front end of the vehicle, wherein the method further comprises marking a position of a start point of a left lane line and a position of a start point of a right lane line in advance in accordance with a position of the dynamic vision sensor.
  3. The method (300) according to claim 2, wherein the determining (S320) the initial search region comprising the start point of the lane line in accordance with the optical flow image comprises: determining an initial first search region comprising the start point of the left lane line in accordance with the optical flow image; determining an initial second search region comprising the start point of the right lane line in accordance with the optical flow image; and determining a noise region comprising noise pixels in accordance with the optical flow image, wherein the start point of the left lane line is located at the bottom left of the optical flow image, and the start point of the right lane line is located at the bottom right of the optical flow image.
  4. The method (300) according to claim 3, wherein prior to determining the center of gravity of the initial search region, the method further comprises: calculating a proportion of the quantity of noise pixels in the noise region to the total quantity of pixels in the noise region; when the proportion is greater than a threshold, taking a lane line in a previous image frame as a lane line in a current image frame; and when the proportion is smaller than the threshold, determining the center of gravity of the initial search region.
  5. The method (300) according to claim 3 or 4, wherein the determining (S340) the new search region through the offsetting operation on the center of gravity comprises: offsetting the center of gravity of the initial first search region and the center of gravity of the initial second search region through a predetermined rule to acquire a center of the new search region respectively; and determining a new first search region and a new second search region in accordance with the center of the new search region, wherein the predetermined rule includes offsetting the center of gravity of the initial first search region horizontally to the right and vertically upward by a certain offset value, and offsetting the center of gravity of the initial second search region horizontally to the left and vertically upward by a certain offset value.
  6. The method (300) according to claim 5, wherein the determining (S350) the center of gravity of the new search region comprises: selecting pixels that meet the first predetermined condition in the new first search region and the new second search region; and calculating average coordinates of the selected pixels in each of the new first search region and the new second search region to acquire a center of gravity of each of the new first search region and the new second search region.
  7. The method (300) according to claim 5 or 6, wherein the repeating (S360) the steps of determining the new search region and determining the center of gravity of the new search region iteratively to acquire the centers of gravity of the plurality of search regions comprises, when a new search region meets a second predetermined condition, terminating the iteration, wherein the second predetermined condition comprises that each of an upper boundary of the new first search region and an upper boundary of the new second search region is at a level not higher than a corresponding predetermined position, and the predetermined position is marked in advance in accordance with the position of the dynamic vision sensor.
  8. The method (300) according to claim 1, wherein the assigning different pixel values to the events in different event segments comprises: when a timestamp corresponding to an event in an event segment is larger, assigning a larger pixel value to the event in the event segment; and when a timestamp corresponding to an event in an event segment is smaller, assigning a smaller pixel value to the event in the event segment.
  9. A computing device, comprising one or more processors, a memory, and one or more programs stored in the memory and executed by the one or more processors, wherein the one or more programs comprise instructions for implementing the method according to any one of claims 1 to 8.
  10. A computer-readable storage medium storing therein one or more programs, wherein the one or more programs comprise instructions, and the instructions are executed by a computing device so as to implement the method according to any one of claims 1 to 8.
  11. A vehicle, comprising the computing device according to claim 9, and a dynamic vision sensor coupled to the computing device and configured to record movement of an object in a scenario relative to the dynamic vision sensor and generate event data in accordance with an event triggered by the movement.
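To make the pipeline of claim 1 easier to follow, here is a minimal Python sketch of steps S310 to S370. It is an illustration, not the patented implementation: the event format (x, y, timestamp), the image size, the pixel-value scheme, the region half-width, the offset values, the "first predetermined condition" (a pixel-value threshold), and every function name below are assumptions made for this sketch.

```python
import numpy as np

def make_optical_flow_image(events, width, height, n_segments=8):
    """S310: split events in a time window into n_segments chronological
    segments and write a per-segment pixel value at each event position.
    Each event is assumed to be an integer tuple (x, y, timestamp)."""
    events = sorted(events, key=lambda e: e[2])
    image = np.zeros((height, width), dtype=np.uint8)
    seg_len = max(1, len(events) // n_segments)
    for i, (x, y, _t) in enumerate(events):
        seg = min(i // seg_len, n_segments - 1)
        # Later events get larger pixel values (cf. claim 8).
        image[y, x] = int((seg + 1) * 255 / n_segments)
    return image

def region_centroid(image, cx, cy, half=20, min_value=100):
    """S330/S350: average coordinates of the pixels in a square search
    region that meet the (assumed) first predetermined condition,
    here a simple value threshold."""
    y0, x0 = max(cy - half, 0), max(cx - half, 0)
    window = image[y0:cy + half, x0:cx + half]
    ys, xs = np.nonzero(window >= min_value)
    if xs.size == 0:
        return None
    return x0 + xs.mean(), y0 + ys.mean()

def track_lane(image, start, dx, dy, top, half=20, max_iter=50):
    """S340-S360: alternate centroid and offset steps until the region's
    upper boundary reaches the preset row `top` (cf. claim 7)."""
    centroids, (cx, cy) = [], start
    for _ in range(max_iter):
        if cy - half <= top:
            break
        c = region_centroid(image, int(cx), int(cy), half)
        if c is None:
            break
        centroids.append(c)
        cx, cy = c[0] + dx, c[1] - dy  # shift toward image center and upward
    return centroids

def detect_lane_lines(image, left_start, right_start, top=100, offset=15):
    """S370: least-squares fit (here a 1st-order polyfit of x over y)
    through the centers of gravity of the left and right regions."""
    left = track_lane(image, left_start, +offset, offset, top)
    right = track_lane(image, right_start, -offset, offset, top)
    def fit(points):
        if len(points) < 2:
            return None
        return np.polyfit([p[1] for p in points], [p[0] for p in points], 1)
    return fit(left), fit(right)
```

A call might look like detect_lane_lines(img, left_start=(40, 230), right_start=(280, 230)) on a 320x240 frame; the start points, like everything else above, stand in for the positions that the claims say are marked in advance from the mounting position of the dynamic vision sensor.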

Description

TECHNICAL FIELD

The present disclosure relates to the field of driver assistance technology, and in particular to a scheme for detecting a lane line.

BACKGROUND

Along with the rapid development of the automobile industry, the quantity of vehicles increases year by year, and traffic accidents cause significant damage to life and property. As pointed out in the Global Status Report on Road Safety 2013 issued by the WHO, about 1.24 million people die every year due to traffic accidents all over the world, and road traffic injury is one of the top eight causes of death. In order to improve road traffic safety, many institutions and automobile enterprises have put effort into the research and development of automobile safeguard systems. Taking the detection of a lane line as an example, the lane line on a road is detected while a vehicle is running, so as to ensure that the vehicle stays in its lane, thereby preventing it from colliding with another vehicle when it crosses the lane line. This is of great significance for driving safety.

In an existing lane line detection technology, an original image is usually pre-processed first (e.g., through edge detection) to acquire edge information about the image. Next, edge points of the lane line are extracted in accordance with the acquired edge information, and then a curve of the lane line is fitted in accordance with those edge points. However, extracting the edge points in this manner leads to a relatively large computation burden, so a large quantity of computation resources needs to be consumed and the detection speed may be adversely affected. In addition, in an application scenario where the lane line is detected, the lane line usually needs to be detected rapidly to assist the driver and prevent traffic accidents.

EVERDING, LUKAS, ET AL.: "Low-latency Line Tracking Using Event-Based Dynamic Vision Sensors", FRONTIERS IN NEUROROBOTICS, vol. 12, 19 February 2018, page 13, XP055946872, DOI:10.3389/fnbot.2018.00004, discloses an algorithm for the fast detection and persistent tracking of translating lines for a biologically inspired class of optical sensors, DVS. The nature of DVS data allows both tasks, detection and tracking, to be solved in a combined approach in which events are first clustered and checked for linearity, and detected lines are then continuously grown by adding events.

Hence, there is an urgent need for a scheme for detecting the lane line rapidly and accurately.

SUMMARY

An object of the present disclosure is to provide a lane line detection scheme, so as to solve, or at least relieve, at least one of the above-mentioned problems. The invention provides a method for detecting a lane line, a vehicle and a computing device as defined in the appended claims. According to the embodiments of the present disclosure, the dynamic vision sensor is arranged in the vehicle, the optical flow image is generated in accordance with a series of event data from the dynamic vision sensor, and the optical flow image may carry optical flow information generated while the vehicle is running. Then, the search regions including the left and right lane lines are determined in the optical flow image, and the left and right lane lines are acquired, through fitting, in accordance with the search regions.
In this way, it is possible to determine the search regions, search for key points of the lane lines and fit the curve of the lane line without any pre-processing such as edge detection. As a result, the scheme for detecting the lane line in the embodiments of the present disclosure is able to remarkably reduce the computation burden of lane line detection and improve the robustness of the algorithm.

BRIEF DESCRIPTION OF THE DRAWINGS

In order to achieve the above and related objects, some descriptive aspects will be described in conjunction with the following description and drawings, and these aspects indicate various ways capable of practicing a principle of the present disclosure. All aspects and equivalent aspects thereof shall fall within the scope of the present disclosure. The above and other objects, features and advantages will become more apparent on the basis of the drawings in conjunction with the following description. Same reference signs represent a same component or element.

Fig. 1 is a schematic view showing a vehicle 100 according to one embodiment of the present disclosure;
Fig. 2 is a schematic view showing a computing device 200 according to one embodiment of the present disclosure;
Fig. 3 is a flow chart of a method 300 for detecting a lane line according to one embodiment of the present disclosure;
Figs. 4A to 4D are schematic views showing an optical flow image and initial search regions according to one embodiment of the present disclosure; and
Fig. 5 is a schematic view showing search regions determined in the optical flow image according to one embodiment of the present disclosure.
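Claim 4 adds a robustness gate before the centroid search: when a designated noise region of the optical flow image is too cluttered, the lane line from the previous frame is reused instead of running detection on the current frame. The following is a minimal sketch of that gate, assuming a rectangular noise region, a pixel-value criterion for counting noise pixels, and an arbitrary threshold of 0.3; none of these values or names come from the patent itself.

```python
import numpy as np

def noise_ratio(image, region, min_value=100):
    """Proportion of active pixels (value >= min_value) among all pixels
    of a rectangular noise region given as (x0, y0, x1, y1); the region
    is assumed to be non-empty and inside the image."""
    x0, y0, x1, y1 = region
    patch = image[y0:y1, x0:x1]
    return np.count_nonzero(patch >= min_value) / patch.size

def lane_for_frame(image, noise_region, previous_lanes, detect, threshold=0.3):
    """Fallback of claim 4: reuse the previous frame's lane lines when
    the noise region is too busy, otherwise run the detector (e.g. the
    detect_lane_lines sketch above) on the current frame."""
    if noise_ratio(image, noise_region) > threshold and previous_lanes is not None:
        return previous_lanes
    return detect(image)
```

In this arrangement the noise check costs one threshold and one count per frame, which is consistent with the stated goal of avoiding heavyweight per-frame pre-processing.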