CN-115014383-B - Navigation system for a vehicle and method for navigating a vehicle

CN115014383B

Abstract

The present disclosure provides a navigation system for a vehicle and a method for navigating a vehicle. The system includes at least one processor programmed to receive a plurality of images captured from an environment of a vehicle from a camera of the vehicle, analyze the first image to identify a non-semantic road feature represented in the first image, identify a first image location of at least one point associated with the non-semantic road feature in the first image, analyze the second image to identify a representation of the non-semantic road feature in the second image, identify a second image location of at least one point associated with the non-semantic road feature in the second image, determine three-dimensional coordinates of at least one point associated with the non-semantic road feature based on a difference of the first image location and the second image location and based on motion information of the vehicle between the capture of the first image and the capture of the second image, and send the three-dimensional coordinates of the at least one point associated with the non-semantic road feature to a server for updating a road navigation model.
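The core computation described in the abstract is two-view triangulation: given the pixel location of the same feature point in two images and the camera motion between captures, recover the point's three-dimensional coordinates. The sketch below is an illustrative reconstruction, not the patented implementation; the function name, the pinhole-camera intrinsics `K`, and the motion convention (a point `X1` in the first camera frame maps to `R @ X1 + t` in the second) are all assumptions.

```python
import numpy as np

def triangulate_point(p1, p2, K, R, t):
    """Triangulate the 3-D coordinates (in the first camera's frame) of a
    feature point seen at pixel p1 in the first image and p2 in the second,
    given intrinsics K and camera motion (R, t) between the two captures.
    Illustrative sketch only; names and conventions are assumptions."""
    # Back-project each pixel to a normalized viewing ray.
    Kinv = np.linalg.inv(K)
    r1 = Kinv @ np.array([p1[0], p1[1], 1.0])
    r2 = Kinv @ np.array([p2[0], p2[1], 1.0])
    # Express the second camera's ray and centre in the first camera's frame.
    r2_w = R.T @ r2
    o2 = -R.T @ t  # second camera centre in the first camera's frame
    # Solve least-squares for the depths s1, s2 with s1*r1 = o2 + s2*r2_w.
    A = np.stack([r1, -r2_w], axis=1)
    s, *_ = np.linalg.lstsq(A, o2, rcond=None)
    X1 = s[0] * r1          # point along the first ray
    X2 = o2 + s[1] * r2_w   # point along the second ray
    return 0.5 * (X1 + X2)  # midpoint of closest approach
```

With noise-free observations the two rays intersect exactly and the midpoint is the true point; with noisy pixel locations the least-squares depths give the usual midpoint-of-closest-approach estimate.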

Inventors

  • Y. Taber
  • I. Benshalom

Assignees

  • Mobileye Vision Technologies Ltd. (御眼视觉技术有限公司)

Dates

Publication Date
2026-05-05
Application Date
2020-02-14
Priority Date
2019-02-14

Claims (20)

  1. A navigation system for a vehicle, the system comprising: at least one processor comprising circuitry and memory, wherein the memory includes instructions that, when executed by the circuitry, cause the at least one processor to: receive, from a camera of the vehicle, a plurality of images captured from an environment of the vehicle; analyze a first image of the plurality of images to identify a non-semantic road feature represented in the first image; identify a first image location, in the first image, of at least one point associated with the non-semantic road feature; analyze a second image of the plurality of images to identify a representation of the non-semantic road feature in the second image; identify a second image location, in the second image, of the at least one point associated with the non-semantic road feature; determine three-dimensional coordinates of the at least one point associated with the non-semantic road feature based on a difference between the first image location and the second image location and based on motion information of the vehicle between the capture of the first image and the capture of the second image; and send the three-dimensional coordinates of the at least one point associated with the non-semantic road feature to a server for updating a road navigation model.
  2. The navigation system of claim 1, wherein the three-dimensional coordinates of the at least one point associated with the non-semantic road feature are relative to an origin corresponding to a position of the camera.
  3. The navigation system of claim 1, wherein the server is configured to correlate the three-dimensional coordinates transmitted by the vehicle with three-dimensional coordinates of at least one point associated with the non-semantic feature transmitted by at least one other vehicle.
  4. The navigation system of claim 1, wherein the three-dimensional coordinates are located at corners of the non-semantic road feature.
  5. The navigation system of claim 1, wherein the three-dimensional coordinates are located on edges of the non-semantic road feature.
  6. The navigation system of claim 1, wherein the three-dimensional coordinates are located on a surface of the non-semantic road feature.
  7. The navigation system of claim 1, wherein the non-semantic road feature is a back side of a sign and the three-dimensional coordinates are on a surface of the back side of the sign.
  8. The navigation system of claim 1, wherein the non-semantic road feature is a building and the three-dimensional coordinates are on a surface of the building.
  9. The navigation system of claim 1, wherein the non-semantic road feature is a lighting column and the three-dimensional coordinates are on a surface of the lighting column.
  10. The navigation system of claim 1, wherein the non-semantic road feature is a pothole and the three-dimensional coordinates are at edges of the pothole.
  11. The navigation system of claim 1, wherein the non-semantic road feature comprises an object without an identified object type.
  12. The navigation system of claim 1, wherein the non-semantic road feature comprises a pothole, a road crack, a back of a sign, a building, a lighting post, or an advertising sign.
  13. A method for navigating a vehicle, the method comprising: receiving, from a camera of the vehicle, a plurality of images captured from an environment of the vehicle; analyzing a first image of the plurality of images to identify a non-semantic road feature represented in the first image; identifying a first image location, in the first image, of at least one point associated with the non-semantic road feature; analyzing a second image of the plurality of images to identify a representation of the non-semantic road feature in the second image; identifying a second image location, in the second image, of the at least one point associated with the non-semantic road feature; determining three-dimensional coordinates of the at least one point associated with the non-semantic road feature based on a difference between the first image location and the second image location and based on motion information of the vehicle between the capture of the first image and the capture of the second image; and sending the three-dimensional coordinates of the at least one point associated with the non-semantic road feature to a server for updating a road navigation model.
  14. The method of claim 13, wherein the three-dimensional coordinates of the at least one point associated with the non-semantic road feature are relative to an origin corresponding to a position of the camera.
  15. The method of claim 13, wherein the server is configured to correlate the three-dimensional coordinates transmitted by the vehicle with three-dimensional coordinates of at least one point associated with the non-semantic feature transmitted by at least one other vehicle.
  16. The method of claim 13, wherein the three-dimensional coordinates are located at corners of the non-semantic road feature.
  17. The method of claim 13, wherein the three-dimensional coordinates are located on edges of the non-semantic road feature.
  18. The method of claim 13, wherein the three-dimensional coordinates are located on a surface of the non-semantic road feature.
  19. The method of claim 13, wherein the non-semantic road feature is a back side of a sign and the three-dimensional coordinates are on a surface of the back side of the sign.
  20. The method of claim 13, wherein the non-semantic road feature is a building and the three-dimensional coordinates are on a surface of the building.
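Claims 3 and 15 describe server-side correlation of coordinates reported by different vehicles. One plausible reading, sketched below purely as an illustration (the patent does not specify the association or refinement rule, and all names here are assumptions), is nearest-neighbour association within a distance threshold, refining matched model points and adding unmatched reports as new entries.

```python
import numpy as np

def correlate_points(model_points, reported_points, max_dist=1.0):
    """Illustrative sketch of server-side correlation: match each 3-D point
    reported by a vehicle to the nearest existing road-model point within
    max_dist (metres, assumed), averaging matches and appending the rest."""
    model = [np.asarray(p, dtype=float) for p in model_points]
    for q in reported_points:
        q = np.asarray(q, dtype=float)
        if model:
            dists = [np.linalg.norm(q - m) for m in model]
            i = int(np.argmin(dists))
            if dists[i] <= max_dist:
                # Simple running refinement of the matched model point.
                model[i] = 0.5 * (model[i] + q)
                continue
        model.append(q)  # no match: treat as a newly observed feature point
    return model
```

A production system would more likely use a spatial index and a weighted or robust average over many vehicles, but the matching-then-refining structure is the same.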

Description

Navigation system for a vehicle and method for navigating a vehicle

The present application is a divisional application of the application with application number 202080014545.1 (international application number PCT/IB2020/000115), entitled "System and Method for Vehicle Navigation", filed February 14, 2020.

Cross Reference to Related Applications

The present application claims priority to and the benefit of U.S. provisional application No. 62/805,646, filed February 14, 2019, and U.S. provisional application No. 62/813,403, filed March 4, 2019. The foregoing applications are incorporated herein by reference in their entirety.

Technical Field

The present disclosure relates generally to vehicle navigation.

Background

With the continued advancement of technology, the goal of a fully autonomous vehicle capable of navigating on a roadway is within reach. Autonomous vehicles may need to take into account a wide variety of factors and make appropriate decisions based on those factors to safely and accurately reach an intended destination. For example, an autonomous vehicle may need to process and interpret visual information (e.g., information captured from a camera) and information from radar or lidar, and may also use information obtained from other sources (e.g., from a GPS device, a speed sensor, an accelerometer, a suspension sensor, etc.). Meanwhile, in order to navigate to a destination, an autonomous vehicle may also need to recognize its location within a particular road (e.g., a particular lane in a multi-lane road), navigate alongside other vehicles, avoid obstacles and pedestrians, observe traffic signals and signs, and travel from one road to another at an appropriate intersection or junction. Harnessing and interpreting the vast amount of information collected by an autonomous vehicle as it travels to its destination poses many design challenges.
The massive amounts of data that an autonomous vehicle may need to analyze, access, and/or store (e.g., captured image data, map data, GPS data, sensor data, etc.) pose challenges that may limit or even adversely affect autonomous navigation. Furthermore, if an autonomous vehicle relies on conventional mapping techniques to navigate, the massive amounts of data required to store and update the map pose a significant challenge.

Disclosure of Invention

Embodiments consistent with the present disclosure provide systems and methods for vehicle navigation. The disclosed embodiments may use cameras to provide vehicle navigation features. For example, consistent with the disclosed embodiments, the disclosed systems may include one, two, or more cameras that monitor the environment of the vehicle. The disclosed system may provide a navigational response based on, for example, analysis of images captured by one or more cameras.

In one embodiment, a navigation system for a host vehicle may include at least one processor that may be programmed to receive, from a camera of the host vehicle, one or more images captured from an environment of the host vehicle. The at least one processor may also be programmed to analyze the one or more images to detect an indicator of an intersection. The at least one processor may also be programmed to determine a stop position of the host vehicle relative to the detected intersection based on output received from at least one sensor of the host vehicle. The at least one processor may also be programmed to analyze the one or more images to determine whether one or more other vehicles are present in front of the host vehicle. The at least one processor may also be programmed to send, to a server for updating a road navigation model, the stop position of the host vehicle and an indicator of whether one or more other vehicles are in front of the host vehicle.
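The intersection embodiment above describes a small reporting pipeline: detect an intersection in the images, determine the host vehicle's stop position from a sensor, check for vehicles ahead, and send the result to the server. The sketch below models only the payload-assembly step; the detection inputs are stood in for by plain arguments, and every name here is an illustrative assumption rather than the patent's interface.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class StopReport:
    """Payload sent to the server for updating the road navigation model.
    Field names and the (distance-to-stop-line, lateral-offset) encoding
    are illustrative assumptions."""
    stop_position: Tuple[float, float]
    vehicles_ahead: bool

def build_stop_report(intersection_detected: bool,
                      stop_position: Tuple[float, float],
                      detections_ahead: List[str]) -> Optional[StopReport]:
    """Assemble a report only when an intersection indicator was detected;
    the image-analysis and sensor-fusion steps are outside this sketch."""
    if not intersection_detected:
        return None  # nothing to report without a detected intersection
    return StopReport(stop_position=stop_position,
                      vehicles_ahead=len(detections_ahead) > 0)
```

Keeping the report a small, explicit record makes it cheap to transmit from many vehicles, which matches the disclosure's emphasis on limiting the data needed to maintain the map.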
In one embodiment, a computer-implemented method for a host vehicle may include receiving, from a camera of the host vehicle, one or more images captured from an environment of the host vehicle. The method may also include analyzing the one or more images to detect an indicator of an intersection. The method may further include determining a stop position of the host vehicle relative to the detected intersection based on output received from at least one sensor of the host vehicle. The method may also include analyzing the one or more images to determine whether one or more other vehicles are present in front of the host vehicle. The method may further include sending, to a server for updating a road navigation model, the stop position of the host vehicle and an indicator of whether one or more other vehicles are in front of the host vehicle. In one embodiment, a system for updating a road navigation model of a road segment may include at least one processor programmed to receive driving information from each of a plurality of vehicles, the driving information including a stop position of