US-12620213-B2 - Computer vision vehicle locating fusion system and method thereof
Abstract
A computer vision vehicle locating fusion method includes: receiving an instant driving image from a camera, extracting multiple image features from the instant driving image, and comparing the extracted image features with pre-stored feature sets in a storage device to fuse an inertial measurement parameter and the pre-stored satellite locating coordinate corresponding to the one of the pre-stored feature sets that best matches the instant driving image, thereby generating a first candidate coordinate; using a satellite measurement coordinate received from a satellite locating device as a second candidate coordinate; calculating a first difference between the first candidate coordinate and an estimated reference coordinate, and calculating a second difference between the second candidate coordinate and the estimated reference coordinate; and determining and outputting the first candidate coordinate or the second candidate coordinate that has the smaller difference from the estimated reference coordinate.
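The abstract's final step reduces to a nearest-candidate test: whichever of the two candidate coordinates lies closer to the estimated reference coordinate is output. A minimal Python sketch of that test follows; all names are editorial, since the patent publishes no code.

```python
import math

def select_coordinate(first_candidate, second_candidate, reference):
    """Return whichever candidate lies closer to the estimated reference.

    Coordinates are (x, y) tuples. This sketches the selection step only,
    not the patent's implementation.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    if dist(first_candidate, reference) <= dist(second_candidate, reference):
        return first_candidate
    return second_candidate
```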
Inventors
- Chih-Yuan Hsu
- Te-Hsiang Wang
- You-Sian Lin
Assignees
- AUTOMOTIVE RESEARCH & TESTING CENTER
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2022-12-20
Claims (16)
- 1 . A computer vision vehicle locating fusion system comprising: a storage device storing multiple pre-stored feature sets and multiple pre-stored satellite locating coordinates respectively corresponding to the multiple pre-stored feature sets, wherein each of the pre-stored feature sets comprises multiple image features, and the multiple image features of each one of the multiple pre-stored feature sets are non-identical to the multiple image features of the other pre-stored feature sets; a camera configured to be provided in a vehicle to output an instant driving image; a satellite locating device configured to be provided in the vehicle to receive a satellite measurement coordinate of the vehicle; an inertial measurement device configured to be provided in the vehicle to output an inertial measurement parameter of the vehicle; and a processing device configured to be provided in the vehicle and connected to the storage device, the camera, the inertial measurement device and the satellite locating device; wherein an update frequency of the inertial measurement parameter of the inertial measurement device is higher than a frequency at which the processing device obtains the pre-stored satellite locating coordinates, such that after obtaining one of the pre-stored satellite locating coordinates and before obtaining a next one of the pre-stored satellite locating coordinates, the processing device obtains multiple inertial measurement parameters from the inertial measurement device in sequence; the processing device extracts multiple image features from the instant driving image by image graying, filtering, edge detection, and a convolutional neural network (CNN), and compares the extracted image features with the pre-stored feature sets in the storage device by Euclidean distance comparison to fuse the inertial measurement parameter and the pre-stored satellite locating coordinate corresponding to the one of the pre-stored feature sets that best matches the instant driving image, to generate a first candidate coordinate; the processing device uses the satellite measurement coordinate as a second candidate coordinate; and the processing device calculates a first difference of distance between the first candidate coordinate and an estimated reference coordinate and a second difference of distance between the second candidate coordinate and the estimated reference coordinate, to determine and output the first candidate coordinate or the second candidate coordinate that has the smaller difference of distance from the estimated reference coordinate for positioning the vehicle.
- 2 . The computer vision vehicle locating fusion system as claimed in claim 1 , wherein the processing device sets and stores a serial number of the one of the pre-stored feature sets that best matches the instant driving image as a search start point and sets a search range based on the search start point.
- 3 . The computer vision vehicle locating fusion system as claimed in claim 1 , wherein the processing device fuses the pre-stored satellite locating coordinate Pv with the inertial measurement parameter to generate a transition coordinate Px, and generates the first candidate coordinate P1 based on the pre-stored satellite locating coordinate Pv, the transition coordinate Px and a fusion parameter kpf, wherein P1 = Pv + kpf(Px − Pv).
- 4 . The computer vision vehicle locating fusion system as claimed in claim 1 , wherein the processing device performs a Kalman filter to generate the estimated reference coordinate.
- 5 . The computer vision vehicle locating fusion system as claimed in claim 1 , wherein the processing device generates the estimated reference coordinate Pr based on the first candidate coordinate P1, the satellite measurement coordinate Pgps and a fusion parameter kf, wherein Pr = P1 + kf(Pgps − P1).
- 6 . The computer vision vehicle locating fusion system as claimed in claim 1 , wherein, the multiple pre-stored feature sets are stored in a feature database of the storage device, and the feature database stores serial numbers respectively corresponding to the multiple pre-stored feature sets.
- 7 . The computer vision vehicle locating fusion system as claimed in claim 6 , wherein an image database of the storage device stores multiple pre-stored driving images, and the serial numbers represent shooting time sequences of the multiple pre-stored driving images.
- 8 . The computer vision vehicle locating fusion system as claimed in claim 1 , wherein the inertial measurement parameter includes angular velocity information and acceleration information.
- 9 . The computer vision vehicle locating fusion system as claimed in claim 1 , wherein the processing device determines the first candidate coordinate or the second candidate coordinate that has the smaller difference of distance from the estimated reference coordinate as an elected coordinate, and outputs the elected coordinate expressed as follows: P = min(‖P1 − Pr‖, ‖P2 − Pr‖), wherein P is the elected coordinate; P1 is the first candidate coordinate; P2 is the second candidate coordinate; Pr is the estimated reference coordinate; ‖P1 − Pr‖ is the distance between the first candidate coordinate and the estimated reference coordinate; and ‖P2 − Pr‖ is the distance between the second candidate coordinate and the estimated reference coordinate.
- 10 . A computer vision vehicle locating fusion method, performed by a processing device, comprising: receiving an instant driving image from a camera, extracting multiple image features from the instant driving image by image graying, filtering, edge detection, and a convolutional neural network (CNN), and comparing the extracted image features with pre-stored feature sets in a storage device by Euclidean distance comparison to fuse an inertial measurement parameter and a pre-stored satellite locating coordinate corresponding to the one of the pre-stored feature sets that best matches the instant driving image, to generate a first candidate coordinate; using a satellite measurement coordinate received from a satellite locating device as a second candidate coordinate; calculating a first difference of distance between the first candidate coordinate and an estimated reference coordinate, and calculating a second difference of distance between the second candidate coordinate and the estimated reference coordinate; and determining and outputting the first candidate coordinate or the second candidate coordinate that has the smaller difference of distance from the estimated reference coordinate for positioning the vehicle; wherein an update frequency of the inertial measurement parameter of an inertial measurement device connected to the processing device is higher than a frequency at which the processing device obtains the pre-stored satellite locating coordinates, such that after obtaining one pre-stored satellite locating coordinate and before obtaining a next pre-stored satellite locating coordinate, the processing device obtains multiple inertial measurement parameters from the inertial measurement device in sequence.
- 11 . The computer vision vehicle locating fusion method as claimed in claim 10 , wherein, the processing device sets and stores a serial number of the one of the pre-stored feature sets that best matches the instant driving image as a search start point and sets a search range based on the search start point.
- 12 . The computer vision vehicle locating fusion method as claimed in claim 10 , wherein the processing device fuses the pre-stored satellite locating coordinate Pv with the inertial measurement parameter to generate a transition coordinate Px, and generates the first candidate coordinate P1 based on the pre-stored satellite locating coordinate Pv, the transition coordinate Px and a fusion parameter kpf, wherein P1 = Pv + kpf(Px − Pv).
- 13 . The computer vision vehicle locating fusion method as claimed in claim 10 , wherein, the processing device performs a Kalman filter to generate the estimated reference coordinate.
- 14 . The computer vision vehicle locating fusion method as claimed in claim 10 , wherein the processing device generates the estimated reference coordinate Pr based on the first candidate coordinate P1, the satellite measurement coordinate Pgps and a fusion parameter kf, wherein Pr = P1 + kf(Pgps − P1).
- 15 . The computer vision vehicle locating fusion method as claimed in claim 10 , wherein the inertial measurement parameter includes angular velocity information and acceleration information.
- 16 . The computer vision vehicle locating fusion method as claimed in claim 10 , wherein the processing device determines the first candidate coordinate or the second candidate coordinate that has the smaller difference of distance from the estimated reference coordinate as an elected coordinate, and outputs the elected coordinate expressed as follows: P = min(‖P1 − Pr‖, ‖P2 − Pr‖), wherein P is the elected coordinate; P1 is the first candidate coordinate; P2 is the second candidate coordinate; Pr is the estimated reference coordinate; ‖P1 − Pr‖ is the distance between the first candidate coordinate and the estimated reference coordinate; and ‖P2 − Pr‖ is the distance between the second candidate coordinate and the estimated reference coordinate.
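Taken together, the formulas recited in claims 3, 5 and 9 (and their method counterparts 12, 14 and 16) form a short numeric pipeline: inertial fusion, reference estimation, then election. The Python sketch below restates those formulas directly; the NumPy array types, scalar fusion parameters and function names are editorial assumptions, not published code.

```python
import numpy as np

def first_candidate(pv, px, k_pf):
    """Claim 3: P1 = Pv + kpf * (Px - Pv)."""
    return pv + k_pf * (px - pv)

def estimated_reference(p1, p_gps, k_f):
    """Claim 5: Pr = P1 + kf * (Pgps - P1)."""
    return p1 + k_f * (p_gps - p1)

def elected_coordinate(p1, p2, pr):
    """Claim 9: elect whichever candidate is nearer to Pr."""
    if np.linalg.norm(p1 - pr) <= np.linalg.norm(p2 - pr):
        return p1
    return p2
```

For example, with pv = np.array([0.0, 0.0]), px = np.array([1.0, 0.0]) and k_pf = 0.5, first_candidate returns [0.5, 0.0], i.e. the point half-way along the inertially predicted displacement.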
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a vehicle locating system and method thereof, in particular to a computer vision vehicle locating fusion system and method thereof.

2. Description of the Prior Arts

Self-driving vehicles are among the main tools of future transportation, and locating technology for self-driving vehicles is a popular R&D target. Conventional locating technologies may use a GPS (Global Positioning System) locating device, an Inertial Measurement Unit (IMU), or Light Detection and Ranging (LiDAR). However, each of these technologies has its own shortcomings, which may lead to questionable locating reliability, and feasible solutions are yet to be found. For example, the signal stability of a GPS locating device is often disturbed by the driving environment: buildings and tunnels in urban areas can block GPS signals, and GPS locating devices cannot receive GPS signals properly in bad weather. IMU locating suffers from accumulated errors. LiDAR is easily affected by heavy rain, snow and fog, and is very costly.

SUMMARY OF THE INVENTION

In view of the above-mentioned problems, the present invention provides a computer vision vehicle locating fusion system and method thereof to overcome the questionable reliability of conventional positioning technologies, which are susceptible to changes in the surrounding environment.

A computer vision vehicle locating fusion system comprises: a storage device storing multiple pre-stored feature sets and multiple pre-stored satellite locating coordinates respectively corresponding to the multiple pre-stored feature sets, wherein each of the pre-stored feature sets comprises multiple image features, and the multiple image features of each one of the multiple pre-stored feature sets are non-identical to the multiple image features of the other pre-stored feature sets; a camera configured to be provided in a vehicle to output an instant driving image; a satellite locating device configured to be provided in the vehicle to receive a satellite measurement coordinate of the vehicle; an inertial measurement device configured to be provided in the vehicle to output an inertial measurement parameter of the vehicle; and a processing device configured to be provided in the vehicle and connected to the storage device, the camera, the inertial measurement device and the satellite locating device; wherein the processing device extracts multiple image features from the instant driving image and compares the extracted image features with the pre-stored feature sets in the storage device to fuse the inertial measurement parameter and the pre-stored satellite locating coordinate corresponding to the one of the pre-stored feature sets that best matches the instant driving image, to generate a first candidate coordinate; the processing device uses the satellite measurement coordinate as a second candidate coordinate; and the processing device calculates a first difference between the first candidate coordinate and an estimated reference coordinate and a second difference between the second candidate coordinate and the estimated reference coordinate, to determine and output the first candidate coordinate or the second candidate coordinate that has the smaller difference from the estimated reference coordinate.
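The matching stage summarized above (graying, filtering, edge detection, CNN feature extraction, then Euclidean-distance comparison against the pre-stored feature sets) can be sketched as follows. The OpenCV preprocessing chain, the caller-supplied CNN and the (serial number, feature vector) storage layout are editorial assumptions; the patent names the operations but not their parameters or their exact ordering within the pipeline.

```python
import cv2
import numpy as np

def best_match(frame, cnn_features, stored_sets):
    """Return the (serial_number, feature_vector) pair nearest the frame.

    cnn_features: callable mapping an image to a 1-D numpy feature vector.
    stored_sets: iterable of (serial_number, feature_vector) pairs.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # image graying
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)      # filtering
    edges = cv2.Canny(smooth, 100, 200)             # edge detection
    feat = cnn_features(edges)                      # CNN feature extraction
    # Euclidean-distance comparison against every pre-stored feature set.
    return min(stored_sets, key=lambda s: np.linalg.norm(feat - s[1]))
```

The serial number of the winning set can then seed the search start point and search range described in claims 2 and 11.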
A computer vision vehicle locating fusion method, performed by a processing device, comprises: receiving an instant driving image from a camera, extracting multiple image features from the instant driving image, and comparing the extracted image features with the pre-stored feature sets in a storage device to fuse an inertial measurement parameter and a pre-stored satellite locating coordinate corresponding to the one of the pre-stored feature sets that best matches the instant driving image, to generate a first candidate coordinate; using a satellite measurement coordinate received from a satellite locating device as a second candidate coordinate; calculating a first difference between the first candidate coordinate and an estimated reference coordinate, and calculating a second difference between the second candidate coordinate and the estimated reference coordinate; and determining and outputting the first candidate coordinate or the second candidate coordinate that has the smaller difference from the estimated reference coordinate.

In summary, the present invention integrates multiple sensors (including the camera, the satellite locating device and the inertial measurement device). Through the cooperative operation of the sensors, the storage device and the processing device, the present invention senses the surrounding environment of the vehicle with the camera to achieve preliminary positioning, and then cooperates with the satellite locating device and the inertial measurement device for locating fusion, thereby realizing reliable vehicle positioning even when individual sensors are disturbed by the environment.
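As an illustration of the timing relationship recited in claims 1 and 10, where the inertial measurement device updates faster than the pre-stored coordinates are obtained, the following sketch integrates the inertial samples queued between two lookups into a transition coordinate Px. The dead-reckoning scheme is an editorial assumption, and angular velocity handling is omitted for brevity.

```python
import numpy as np

def transition_coordinate(pv, accel_samples, dt):
    """Advance the pre-stored coordinate Pv using queued IMU samples.

    accel_samples: accelerations (map frame) gathered between two
    pre-stored coordinate lookups; dt: IMU sampling period in seconds.
    """
    pos = np.asarray(pv, dtype=float)
    vel = np.zeros_like(pos)
    for accel in accel_samples:        # several samples per lookup period
        vel += np.asarray(accel) * dt  # integrate acceleration -> velocity
        pos += vel * dt                # integrate velocity -> position
    return pos                         # Px, then fused as in claims 3 and 12
```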