EP-3769507-B1 - TRAFFIC BOUNDARY MAPPING

Inventors

  • COX, Jonathan, Albert
  • TARANALLI, Veeresh
  • JULIAN, David, Jonathan
  • CHAKRAVARTHY, Badugu, Naveen
  • CAMPOS, Michael
  • KAHN, Adam, David
  • VENKATACHALAM JAYARAMAN, Venkata, Ramanan
  • YEDLA, Arvind

Dates

Publication Date
2026-05-06
Application Date
2019-03-22

Claims (8)

  1. A system, comprising a memory; and a processor coupled to the memory, wherein the processor is configured to: receive a first visual data at a first time from a camera coupled to a vehicle; identify a lane line within the first visual data, wherein the lane line is at least one of a visible lane line, a road boundary or an inferred lane line associated with a road; determine a location of the vehicle within a map at the first time; determine a location of the lane line within the map, based at least in part on the location of the lane line within the first visual data and the location of the vehicle at the first time; select a first one or more cells of an occupancy grid based at least in part on the determined location of the lane line, wherein a plane of the occupancy grid corresponds to a plane of the road; select a second one or more cells of the occupancy grid, wherein the second one or more cells substantially surround the first one or more cells; increment a value of the first one or more cells; and decrement a value of the second one or more cells; whereby the lane line map determination comprises: receiving 3D information of the road from an independent source; and projecting the distance of the lane line into an occupancy grid based on an estimate of the camera's pose, wherein projecting considers the 3D information of the road and that the lane line has substantially the same height as the road. (A sketch of this grid update appears after the claims.)
  2. The system of claim 1, wherein the processor is further configured to: identify an object within the first visual data; receive a second visual data at a second time from the camera; identify the same object within the second visual data; receive a movement data at the second time; and determine a location of the object in the map based at least in part on a first location of the object in the first visual data, a second location of the object in the second visual data, and the movement data. (A sketch of this two-view localization appears after the claims.)
  3. The system of claim 2, wherein the movement data comprises a signal from a wheel odometer.
  4. The system of claim 1, wherein the processor is further configured to: process the first visual data with a neural network to produce a localization data and a lane line type data; wherein the localization data corresponds to the location of the lane line within the first visual data; and wherein the lane line type data corresponds to a class of lane line.
  5. The system of claim 4, wherein the class of lane line is one of a visible lane boundary or an intersection marking.
  6. The system of claim 4, wherein the processor is further configured to: process the first visual data with a neural network to produce an object localization data and an object type data; wherein the object localization data corresponds to the location of the object within the first visual data; and wherein the object type data corresponds to a class of object.
  7. A computer-implemented method performed by a vehicle, the method comprising the steps of: receiving a first visual data at a first time from a camera coupled to the vehicle; identifying a lane line within the first visual data, wherein the lane line is at least one of a visible lane line, a road boundary or an inferred lane line associated with a road; determining a location of the vehicle within a map at the first time; determining a location of the lane line within the map, based at least in part on the location of the lane line within the first visual data, and the location of the vehicle at the first time; selecting a first one or more cells of an occupancy grid based at least in part on the determined location of the lane line, wherein a plane of the occupancy grid corresponds to a plane of the road; selecting a second one or more cells of the occupancy grid, wherein the second one or more cells substantially surround the first one or more cells; incrementing a value of the first one or more cells; and decrementing a value of the second one or more cells; whereby the lane line map determination comprises: receiving 3D information of the road from an independent source; and projecting the distance of the lane line into an occupancy grid based on an estimate of the camera's pose, wherein projecting considers the 3D information of the road and that the lane line has substantially the same height as the road.
  8. A computer program comprising instructions which, when executed by a computing system, cause the computing system to perform a method according to claim 7.
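A minimal sketch of the grid update recited in claims 1 and 7 follows, for illustration only. The grid size, the 0.1 m cell resolution, the clipping of vote counts, and the use of the 8-neighbourhood as the "surrounding" cells are assumptions, not details taken from the disclosure.

    import numpy as np

    class LaneLineGrid:
        """Road-plane occupancy grid accumulating signed lane-line votes."""

        def __init__(self, size=(500, 500), resolution_m=0.1, clip=100):
            self.grid = np.zeros(size, dtype=np.int32)
            self.resolution_m = resolution_m  # metres per cell (assumed)
            self.clip = clip                  # bound on any cell's count (assumed)

        def _to_cell(self, xy_m):
            """Map a road-plane point in metres to integer grid indices."""
            return tuple(int(round(c / self.resolution_m)) for c in xy_m)

        def _bump(self, cell, delta):
            """Add delta to one cell, clipped and bounds-checked."""
            r, c = cell
            if 0 <= r < self.grid.shape[0] and 0 <= c < self.grid.shape[1]:
                self.grid[r, c] = int(np.clip(self.grid[r, c] + delta,
                                              -self.clip, self.clip))

        def update(self, lane_points_m):
            """Claim-1 style update for one frame's projected detections."""
            # First one or more cells: those containing a projected lane point.
            hit = {self._to_cell(p) for p in lane_points_m}
            # Second one or more cells: the ring around them (8-neighbourhood
            # here, an assumption), excluding the hit cells themselves.
            ring = {(r + dr, c + dc) for (r, c) in hit
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)} - hit
            for cell in hit:
                self._bump(cell, +1)   # increment the first cells
            for cell in ring:
                self._bump(cell, -1)   # decrement the second cells

Over repeated passes, cells that truly contain a lane line are voted up again and again, while the decremented surround suppresses isolated spurious detections; that contrast is one plausible reading of why each increment is paired with a decrement of neighbouring cells.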
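Claims 2 and 3 recover an object's location from two sightings of the same object combined with movement data such as a wheel-odometer signal: the vehicle's own motion supplies the baseline for triangulation. The sketch below solves the planar two-ray case; straight-line motion between the frames, bearings measured relative to the heading, and the function name are illustrative assumptions.

    import numpy as np

    def triangulate_planar(bearing1_rad, bearing2_rad, forward_m):
        """Locate an object from two bearings and odometry-derived motion.

        bearing1_rad, bearing2_rad: object bearing relative to the vehicle
        heading in each frame; forward_m: distance travelled between the
        frames (e.g. from a wheel odometer).  Returns the object position
        (x, y) in the first frame's coordinates, assuming straight planar
        motion.  Raises LinAlgError if the two bearings are parallel.
        """
        d1 = np.array([np.cos(bearing1_rad), np.sin(bearing1_rad)])
        d2 = np.array([np.cos(bearing2_rad), np.sin(bearing2_rad)])
        p2 = np.array([forward_m, 0.0])
        # Solve p1 + s*d1 = p2 + t*d2 for the ray parameters (p1 is the origin).
        s, _t = np.linalg.solve(np.column_stack([d1, -d2]), p2)
        return s * d1

    # Example: a roadside object bears 10 degrees off-heading, then 20
    # degrees after 5 m of travel; it triangulates to roughly (9.7, 1.7) m.
    print(triangulate_planar(np.radians(10), np.radians(20), 5.0))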

Description

CROSS-REFERENCE TO RELATED APPLICATION

The present application claims the benefit of U.S. Provisional Patent Application No. 62/647,526, filed on March 23, 2018, and titled "JOINT MAPPING OF VISUAL OBJECTS AND TRAFFIC BOUNDARIES".

BACKGROUND

Field

Certain aspects of the present disclosure generally relate to visual perceptual systems, intelligent driving monitoring systems (IDMS), advanced driver assistance systems (ADAS), and autonomous driving systems, and more particularly to systems and methods for mapping traffic boundaries such as lane lines and road boundaries.

Background

A reliable map of the traffic boundaries that may be seen from a camera mounted to a vehicle may benefit a number of driving-related systems and devices, including IDMS, ADAS, and autonomous driving systems. For example, a mapping system may be used to determine a precise location of an autonomous vehicle, or may refine a localization estimate provided by GNSS. As vehicular mapping and localization systems and methods become more accurate and reliable, IDMS, ADAS, autonomous driving systems, and the like will also become more accurate and reliable.

Current methods of vehicular mapping may perform acceptably well in some driving scenarios and weather conditions, but poorly in others. For example, vision-based simultaneous localization and mapping (SLAM) techniques may enable vehicular localization in urban environments having a dense array of visual landmarks, yet current SLAM methods may suffer in those same environments if visual objects are too cluttered or otherwise obscured. In addition, current visual SLAM methods may perform inaccurately and unreliably in several commonly encountered real-world driving situations having a paucity of visual landmarks, such as open highways.

LiDAR may be employed in some mapping systems. LiDAR hardware, however, may be prohibitively expensive in comparison to stereo or monocular camera-based systems, and LiDAR may perform poorly in adverse weather conditions, such as rain or extreme temperatures. A LiDAR-based mapping system may also require significant computational resources to store, process, and transmit the acquired data, which may not be well suited to a crowd-sourced deployment at scale.

Accordingly, aspects of the present disclosure are directed to improved systems and methods for mapping that may overcome some of the challenges associated with current SLAM systems and methods, including visual SLAM systems, LiDAR SLAM systems, and the like. In particular, certain aspects of the present disclosure may reduce the cost and improve the reliability of generating high-precision maps by enabling such maps to be generated from monocular vision. In turn, aspects of the present disclosure may improve many driving-related applications, such as IDMS, driver monitoring, ADAS, and autonomous driving systems, among others.
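A standard way to obtain metric lane-line positions from a single camera, consistent with the claim language that projection considers the camera's pose and that the lane line has substantially the same height as the road, is to intersect each pixel's viewing ray with the road plane. The sketch below does this for a pinhole camera described only by its height and pitch; the intrinsic parameters and the flat-road model are illustrative assumptions, not the patented calibration.

    import numpy as np

    def pixel_to_road(u, v, fx, fy, cx, cy, cam_height_m, pitch_rad=0.0):
        """Intersect a pixel's viewing ray with a flat road plane.

        Camera frame: x right, y down, z forward; pitch_rad > 0 tilts the
        camera down toward the road.  Returns (forward_m, lateral_m) on the
        road, or None when the ray does not meet the road.
        """
        # Back-project the pixel into a ray in camera coordinates.
        ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        # Undo the camera pitch so the ray is expressed in a road-aligned frame.
        c, s = np.cos(pitch_rad), np.sin(pitch_rad)
        rot_x = np.array([[1.0, 0.0, 0.0],
                          [0.0,   c,   s],
                          [0.0,  -s,   c]])
        ray = rot_x @ ray_cam
        if ray[1] <= 1e-9:
            return None  # at or above the horizon: no road intersection
        # Scale the ray until its downward component spans the camera height;
        # the hit point then lies at road height, matching the "same height
        # as the road" assumption of the claims.
        t = cam_height_m / ray[1]
        x, _, z = t * ray
        return z, x  # forward and lateral-right distance in metres

    # Example with assumed intrinsics: a pixel well below the image centre
    # maps to a road point roughly 5.8 m ahead and 0.35 m to the right.
    print(pixel_to_road(u=700, v=600, fx=1000.0, fy=1000.0,
                        cx=640.0, cy=360.0, cam_height_m=1.4))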
Patent document CA2999816 A1 discloses a travel control method using a detector which detects, from the actual environment around a host vehicle, information about a lane boundary line of a lane around the host vehicle as actual boundary line information. The control method includes generating integrated boundary line information by integrating the actual boundary line information with map boundary line information, i.e. information about the lane boundary line of the lane included in the map information, and outputting the generated integrated boundary line information.

MARCUS KONRAD ET AL: "Localization in digital maps for road course estimation using grid maps", Intelligent Vehicles Symposium (IV), 2012 IEEE, 3 June 2012, pages 87-92, ISBN 978-1-4673-2119-8, presents a generic grid map definition with three formulations: a laser-scanner-based occupancy grid, a video grid based on Inverse Perspective Mapping, and a feature grid in which lane marking features are used.

SUMMARY OF THE INVENTION

The present disclosure provides systems and methods for mapping traffic boundaries. Certain mapping systems and methods improve upon the prior art by using detected lane lines, road boundaries, and the like to update map data. The solution is provided by the features of the independent claims; variations are defined by the dependent claims.

Certain aspects of the present disclosure provide a system. The system comprises a memory and a processor coupled to the memory, wherein the processor is configured to: receive a first visual data at a first time from a camera coupled to a vehicle; identify a traffic boundary within the first visual data; determine a location of the vehicle within a map at the first time; and determine a location of the traffic boundary within the map, based at least in part on the location of the traffic boundary within the first visual data and the location of the vehicle at the first time.

Certain aspects of the present disclosure provide a method. The method generally comprises: determining
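Read end to end, the summary's processor steps chain naturally: receive visual data, identify the boundary, localize the vehicle, project the detection into the map, and update the grid. The sketch below wires those steps together, reusing the LaneLineGrid and pixel_to_road sketches above; the detector stub, pose values, and camera parameters are hypothetical placeholders, not interfaces from the disclosure.

    import numpy as np

    def detect_lane_lines(frame):
        """Placeholder detector returning (u, v) pixel samples on lane
        lines; a real system would run the network of claims 4-6 here."""
        return [(700, 600), (705, 620), (710, 640)]

    def localize_vehicle(t):
        """Placeholder localization: map position (m) and heading (rad)."""
        return np.array([12.0, 3.0]), 0.05

    def map_one_frame(frame, t, grid):
        """One pass of the pipeline for a single camera frame."""
        pos, heading = localize_vehicle(t)                  # vehicle in map
        fwd = np.array([np.cos(heading), np.sin(heading)])  # heading direction
        left = np.array([-np.sin(heading), np.cos(heading)])
        points_m = []
        for (u, v) in detect_lane_lines(frame):             # identify lane line
            hit = pixel_to_road(u, v, fx=1000.0, fy=1000.0, cx=640.0,
                                cy=360.0, cam_height_m=1.4)  # project to road
            if hit is not None:
                forward, lateral = hit
                # pixel_to_road's lateral axis points right, hence the minus.
                points_m.append(pos + forward * fwd - lateral * left)
        grid.update(points_m)                               # claim-1 update

    grid = LaneLineGrid()
    map_one_frame(frame=None, t=0.0, grid=grid)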