CN-115031743-B - System and method for correlating collected information relative to public road segments
Abstract
The present disclosure provides systems and methods for correlating information collected from a plurality of vehicles relative to a common road segment. The system includes a processor programmed to receive a first set of driving information from a first vehicle including a first location indicator associated with a detected semantic road feature and a second location indicator associated with a detected non-semantic road feature, receive a second set of driving information from a second vehicle including a third location indicator associated with a detected semantic road feature and a fourth location indicator associated with a detected non-semantic road feature, associate the first set of driving information and the second set of driving information, store refined locations of the detected semantic road feature and the detected non-semantic road feature in a map, and distribute the map to one or more vehicles for navigating the one or more vehicles along a common road segment.
Inventors
- Y. Taber
- R. Cohen Masraton
Assignees
- Mobileye Vision Technologies Ltd. (御眼视觉技术有限公司)
Dates
- Publication Date: 2026-05-05
- Application Date: 2020-02-14
- Priority Date: 2019-02-14
Claims (16)
- 1. A system for correlating information collected from a plurality of vehicles relative to a common road segment, the system comprising: at least one processor comprising circuitry and memory, wherein the memory includes instructions that, when executed by the circuitry, cause the at least one processor to: receive a first set of driving information from a first vehicle, the first set of driving information including at least a first location indicator associated with a detected semantic road feature and a second location indicator associated with a detected non-semantic road feature, the first and second location indicators having been determined based on analysis of at least one image captured by a camera of the first vehicle during driving of the first vehicle along at least a portion of the common road segment; receive a second set of driving information from a second vehicle, the second set of driving information including at least a third location indicator associated with the detected semantic road feature and a fourth location indicator associated with the detected non-semantic road feature, the third and fourth location indicators having been determined based on analysis of at least one image captured by a camera of the second vehicle during driving of the second vehicle along at least a portion of the common road segment; correlate the first and second sets of driving information, wherein the correlating comprises determining a refined location of the detected semantic road feature based on the first and third location indicators associated with the detected semantic road feature, and determining a refined location of the detected non-semantic road feature based on the second and fourth location indicators associated with the detected non-semantic road feature, wherein the refined location of the detected non-semantic road feature is determined based on the second location indicator being within a threshold distance of the fourth location indicator; store the refined locations of the detected semantic road feature and the detected non-semantic road feature in a map; and distribute the map to one or more vehicles for navigating the one or more vehicles along the common road segment.
- 2. The system of claim 1, wherein the refined locations of the detected semantic road features and the detected non-semantic road features are relative to a local coordinate system of the common road segment.
- 3. The system of claim 2, wherein the local coordinate system of the common road segment is based on a plurality of images captured by onboard cameras of the plurality of vehicles.
- 4. The system of claim 1, wherein the correlating further comprises applying a curve fitting algorithm to the first set of driving information and the second set of driving information.
- 5. The system of claim 1, wherein the semantic road feature comprises an object having an identified object type.
- 6. The system of claim 1, wherein the semantic road feature comprises a traffic light, a stop sign, a speed limit sign, a warning sign, a direction sign, or a lane marker.
- 7. The system of claim 1, wherein the non-semantic road feature comprises an object without a recognized object type.
- 8. The system of claim 1, wherein the non-semantic road feature comprises a pothole, a road crack, or an advertising sign.
- 9. A method for correlating information collected from a plurality of vehicles relative to a common road segment, the method comprising: receiving a first set of driving information from a first vehicle, the first set of driving information including at least a first location indicator associated with a detected semantic road feature and a second location indicator associated with a detected non-semantic road feature, the first and second location indicators having been determined based on analysis of at least one image captured by a camera of the first vehicle during driving of the first vehicle along at least a portion of the common road segment; receiving a second set of driving information from a second vehicle, the second set of driving information including at least a third location indicator associated with the detected semantic road feature and a fourth location indicator associated with the detected non-semantic road feature, the third and fourth location indicators having been determined based on analysis of at least one image captured by a camera of the second vehicle during driving of the second vehicle along at least a portion of the common road segment; correlating the first and second sets of driving information, wherein the correlating comprises determining a refined location of the detected semantic road feature based on the first and third location indicators associated with the detected semantic road feature, and determining a refined location of the detected non-semantic road feature based on the second and fourth location indicators associated with the detected non-semantic road feature, wherein the refined location of the detected non-semantic road feature is determined based on the second location indicator being within a threshold distance of the fourth location indicator; storing the refined locations of the detected semantic road feature and the detected non-semantic road feature in a map; and distributing the map to one or more vehicles for navigating the one or more vehicles along the common road segment.
- 10. The method of claim 9, wherein the refined locations of the detected semantic road features and the detected non-semantic road features are relative to a local coordinate system of the common road segment.
- 11. The method of claim 10, wherein the local coordinate system of the common road segment is based on a plurality of images captured by onboard cameras of the plurality of vehicles.
- 12. The method of claim 9, wherein the correlating further comprises applying a curve fitting algorithm to the first set of driving information and the second set of driving information.
- 13. The method of claim 9, wherein the semantic road feature comprises an object having an identified object type.
- 14. The method of claim 9, wherein the semantic road feature comprises a traffic light, a stop sign, a speed limit sign, a warning sign, a direction sign, or a lane marker.
- 15. The method of claim 9, wherein the non-semantic road feature comprises an object without a recognized object type.
- 16. The method of claim 9, wherein the non-semantic road feature comprises a pothole, a road crack, or an advertising sign.
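The threshold-gated refinement recited in claims 1 and 9 can be sketched as follows. This is an illustrative reading only, not the patented implementation: the function name, the two-dimensional coordinates, the 2-meter default threshold, and the use of midpoint averaging as the refinement step are all assumptions made for the sake of a concrete example.

```python
import math

def refine_location(indicator_a, indicator_b, threshold_m=2.0):
    """Correlate two location indicators (x, y) reported by different
    vehicles for what may be the same road feature.

    Per the claim language, a refined location for a non-semantic road
    feature is produced only when the two indicators fall within a
    threshold distance of each other.
    """
    if math.dist(indicator_a, indicator_b) > threshold_m:
        # Observations are too far apart to correlate as one feature.
        return None
    # One possible refinement: the midpoint of the two observations.
    return tuple((a + b) / 2 for a, b in zip(indicator_a, indicator_b))
```

For example, two pothole observations at `(0.0, 0.0)` and `(1.0, 0.0)` would correlate to a refined location of `(0.5, 0.0)`, while observations 10 meters apart would not be correlated at all.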
Description
System and method for correlating collected information relative to public road segments
The present application is a divisional application of the patent application with application number 202080014545.1 (international application number PCT/IB2020/000115), entitled "System and method for vehicle navigation", filed on February 14, 2020.
Cross Reference to Related Applications
The present application claims priority to and the benefit of U.S. provisional application No. 62/805,646, filed on February 14, 2019, and U.S. provisional application No. 62/813,403, filed on March 4, 2019. The foregoing applications are incorporated herein by reference in their entirety.
Technical Field
The present disclosure relates generally to vehicle navigation.
Background
With the continued advancement of technology, the goal of a fully autonomous vehicle capable of navigating on a roadway is imminent. Autonomous vehicles may need to take into account a wide variety of factors and make appropriate decisions based on those factors to safely and accurately reach the intended destination. For example, an autonomous vehicle may need to process and interpret visual information (e.g., information captured from a camera) and information from radar or lidar, and may also use information obtained from other sources (e.g., from a GPS device, a speed sensor, an accelerometer, a suspension sensor, etc.). Meanwhile, in order to navigate to a destination, an autonomous vehicle may also need to recognize its location within a particular road (e.g., a particular lane in a multi-lane road), navigate alongside other vehicles, avoid obstacles and pedestrians, observe traffic signals and signs, and travel from one road to another at an appropriate intersection or junction. Harnessing and interpreting the vast amount of information collected by an autonomous vehicle as it travels to its destination poses many design challenges.
The massive amounts of data that an autonomous vehicle may need to analyze, access, and/or store (e.g., captured image data, map data, GPS data, sensor data, etc.) pose challenges that can limit or even adversely affect autonomous navigation. Furthermore, if an autonomous vehicle relies on conventional mapping techniques to navigate, the massive amounts of data required to store and update the map pose a significant challenge.
Disclosure of Invention
Embodiments consistent with the present disclosure provide systems and methods for vehicle navigation. The disclosed embodiments may use cameras to provide vehicle navigation features. For example, consistent with the disclosed embodiments, the disclosed systems may include one, two, or more cameras that monitor the environment of the vehicle. The disclosed system may provide a navigational response based on, for example, analysis of images captured by one or more cameras.
In one embodiment, a navigation system for a host vehicle may include at least one processor that may be programmed to receive, from a camera of the host vehicle, one or more images captured from the environment of the host vehicle. The at least one processor may also be programmed to analyze the one or more images to detect an indicator of an intersection. The at least one processor may also be programmed to determine a stop position of the host vehicle relative to the detected intersection based on output received from at least one sensor of the host vehicle. The at least one processor may also be programmed to analyze the one or more images to determine whether there are indicators of one or more other vehicles in front of the host vehicle. The at least one processor may also be programmed to send, to a server for updating a road navigation model, an indicator of the stop position of the host vehicle and of whether one or more other vehicles are in front of the host vehicle.
In one embodiment, a computer-implemented method for a host vehicle may include receiving, from a camera of the host vehicle, one or more images captured from an environment of the host vehicle. The method may also include analyzing the one or more images to detect an indicator of an intersection. The method may further include determining a stop position of the host vehicle relative to the detected intersection based on output received from at least one sensor of the host vehicle. The method may also include analyzing the one or more images to determine whether there are indicators of one or more other vehicles in front of the host vehicle. The method may further include sending, to a server for updating a road navigation model, an indicator of the stop position of the host vehicle and of whether one or more other vehicles are in front of the host vehicle.
In one embodiment, a system for updating a road navigation model of a road segment may include at least one processor programmed to receive driving information from each of a plurality of vehicles, the driving information includin
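The host-vehicle reporting flow described in the embodiments above can be sketched as follows. This is a minimal illustration under assumed interfaces: the `StopReport` structure, the callable detectors standing in for the image analysis, and the use of raw sensor output as the stop position are all hypothetical, not the disclosed implementation.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

@dataclass
class StopReport:
    """Payload sent to the server for updating the road navigation model."""
    stop_position: Tuple[float, float]  # stop position relative to the intersection
    vehicles_ahead: bool                # whether other vehicles were detected in front

def build_stop_report(
    images: List[bytes],
    sensor_output: Tuple[float, float],
    detect_intersection: Callable[[List[bytes]], bool],
    detect_lead_vehicles: Callable[[List[bytes]], bool],
) -> Optional[StopReport]:
    """Assemble the indicators described in the embodiment: detect an
    intersection in the captured images, derive the host vehicle's stop
    position from sensor output, and flag any vehicles ahead. The two
    detector callables are hypothetical stand-ins for the image analysis."""
    if not detect_intersection(images):
        # No intersection detected: nothing to report for this embodiment.
        return None
    return StopReport(
        stop_position=sensor_output,
        vehicles_ahead=detect_lead_vehicles(images),
    )
```

A caller would then serialize the returned `StopReport` and transmit it to the server; when no intersection is detected, no report is produced.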