CN-118347510-B - Vehicle positioning method and device, vehicle and storage medium
Abstract
The invention relates to the technical field of intelligent driving and discloses a vehicle positioning method, a device, a vehicle and a storage medium. The method comprises: acquiring image information collected by an image sensor, and vehicle motion information and/or position information collected by a target sensor, the target sensor being a sensor other than the image sensor; obtaining a visually detected lane line from the image information; obtaining first measurement data for a lane line factor based on the visually detected lane line and a map, and second measurement data for corresponding factors based on the vehicle motion information and/or position information; constructing, from the first and second measurement data, a factor graph comprising the lane line factor and target factors corresponding to the target sensor; and performing sliding window optimization on the factor graph to obtain vehicle pose information. Because the method fuses multi-source data in a factor graph, the solution can be iterated multiple times and historical positioning data are fully taken into account, which improves the accuracy of the fused vehicle positioning.
Inventors
- YAN DONG
- SHEN ZESHU
Assignees
- 国汽智控(北京)科技有限公司
- 国汽智控(重庆)科技有限公司
Dates
- Publication Date: 2026-05-05
- Application Date: 2024-03-29
Claims (6)
- 1. A vehicle positioning method, the method comprising: acquiring image information collected by an image sensor, and vehicle motion information and/or position information collected by a target sensor, wherein the target sensor is a sensor other than the image sensor; acquiring a visually detected lane line from the image information, and acquiring first measurement data for a lane line factor based on the visually detected lane line and a map; acquiring second measurement data for corresponding factors based on the vehicle motion information and/or position information; constructing a factor graph comprising the lane line factor and target factors corresponding to the target sensor based on the first measurement data and the second measurement data; and performing sliding window optimization on the variables and factors in the factor graph to obtain vehicle pose information; wherein acquiring the first measurement data of the lane line factor based on the visually detected lane line and the map comprises: acquiring at least one first lane line from the map according to the most recently acquired vehicle pose information; acquiring, from the at least one first lane line, a target first lane line matched with the visually detected lane line; and acquiring a target line segment in the target first lane line corresponding to a sampling point of the visually detected lane line, so as to obtain the first measurement data; wherein the objective function used for the sliding window optimization comprises a lane line factor residual, an inertial pre-integration residual, a wheel speed factor residual and a global navigation satellite system (GNSS) position and velocity factor residual, the lane line factor residual being the perpendicular distance from a sampling point of the visually detected lane line to the target line segment in the target first lane line, the inertial pre-integration residual constraining the change in relative pose, velocity and zero bias between adjacent moments, the wheel speed factor residual constraining the relative translational motion of adjacent frames based on wheel speed measurements so as to keep the estimate consistent with the wheel speed sensor data, and the GNSS position and velocity factor keeping the position and velocity consistent with the GNSS sensor measurements; and wherein acquiring, from the at least one first lane line, the target first lane line matched with each visually detected lane line comprises: associating the visually detected lane line with the corresponding first lane line based on lane line edge attributes, line type and color characteristics; for each associated pair of visually detected lane line and first lane line, acquiring a first distance between them based on the most recently acquired vehicle pose information; compensating the most recently acquired vehicle pose information with the first distance to obtain prior vehicle pose information; projecting the visually detected lane line onto the map using the prior vehicle pose information, and calculating a second distance between the visually detected lane line and the corresponding first lane line; and if the second distance is smaller than a preset threshold, determining the first lane line corresponding to the visually detected lane line as the target first lane line matched with it.
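The lane line factor residual defined in claim 1 is the perpendicular distance from a detected lane-line sampling point to a line segment of the matched map lane line. A minimal sketch of that computation, assuming 2-D points and hypothetical names (the patent does not give an implementation):

```python
import numpy as np

def lane_line_residual(sample_pt, seg_a, seg_b):
    """Perpendicular distance from a visually detected lane-line sample
    point to the target segment (seg_a -> seg_b) of a map lane line.
    Illustrative only; names are not from the patent."""
    d = seg_b - seg_a
    # projection parameter of the point onto the segment's supporting line
    t = np.dot(sample_pt - seg_a, d) / np.dot(d, d)
    t = np.clip(t, 0.0, 1.0)          # clamp so we stay on the segment
    closest = seg_a + t * d
    return np.linalg.norm(sample_pt - closest)
```

In a factor-graph solver this residual would be evaluated with the map segment transformed by the current pose estimate, so that minimizing it pulls the pose toward agreement with the map.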
- 2. The method according to claim 1, wherein after performing sliding window optimization on the factor graph to obtain vehicle pose information, the method further comprises: acquiring target information collected by an inertial sensor at a target time, wherein the target time is later than the time corresponding to the vehicle pose information; and predicting the vehicle pose based on the vehicle pose information and the target information to obtain predicted vehicle pose information for the target time.
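Claim 2 describes propagating the latest optimized pose forward using inertial measurements. A first-order dead-reckoning sketch of such a prediction step, under the assumption of a rotation-matrix pose and a small-angle gyro update (all names are illustrative, not from the patent):

```python
import numpy as np

def predict_pose(p, v, R, acc, gyro, dt, g=np.array([0.0, 0.0, -9.81])):
    """One IMU propagation step from the latest optimized pose.
    p, v: position/velocity in the world frame; R: body-to-world rotation;
    acc: specific force (body frame); gyro: angular rate (body frame)."""
    a_world = R @ acc + g                 # remove gravity, rotate to world
    p_new = p + v * dt + 0.5 * a_world * dt**2
    v_new = v + a_world * dt
    # first-order (small-angle) rotation update
    wx, wy, wz = gyro * dt
    dR = np.array([[1.0, -wz,  wy],
                   [ wz, 1.0, -wx],
                   [-wy,  wx, 1.0]])
    R_new = R @ dR
    return p_new, v_new, R_new
```

A real system would integrate at the IMU rate between optimization epochs and renormalize the rotation; this sketch shows only the single-step arithmetic.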
- 3. The method according to claim 1 or 2, wherein the vehicle motion information and/or position information collected by the target sensor comprises at least one of: acceleration information and angular velocity information collected by an inertial sensor; position information and first velocity information collected by a global navigation satellite system sensor; and second velocity information collected by a wheel speed sensor.
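Since claim 3 makes each non-camera measurement optional ("at least one of"), a per-frame container for these inputs could look like the following sketch; the field names and units are hypothetical, not taken from the patent:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class TargetSensorData:
    """Optional per-frame measurements from the target sensors of claim 3."""
    imu_accel: Optional[Tuple[float, float, float]] = None      # m/s^2
    imu_gyro: Optional[Tuple[float, float, float]] = None       # rad/s
    gnss_position: Optional[Tuple[float, float, float]] = None  # world frame
    gnss_velocity: Optional[Tuple[float, float, float]] = None  # first velocity
    wheel_speed: Optional[float] = None                         # second velocity, m/s
```

Downstream, a factor is only added to the graph for the fields that are present in a given frame.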
- 4. A vehicle positioning device, the device comprising: an information acquisition module, configured to acquire image information collected by an image sensor and vehicle motion information and/or position information collected by a target sensor, wherein the target sensor is a sensor other than the image sensor; a measurement data acquisition module, configured to acquire a visually detected lane line from the image information, acquire first measurement data based on the visually detected lane line and a map, and acquire second measurement data based on the vehicle motion information and/or position information, wherein, to acquire the first measurement data, the measurement data acquisition module acquires at least one first lane line from the map according to the most recently acquired vehicle pose information, acquires from the at least one first lane line a target first lane line matched with each visually detected lane line, and acquires a target line segment in the target first lane line corresponding to a sampling point of the visually detected lane line to obtain the first measurement data, the matching being performed by associating the visually detected lane line with the corresponding first lane line based on lane line edge attributes, line type and color characteristics; a factor graph construction module, configured to construct a factor graph comprising a lane line factor and target factors corresponding to the target sensor based on the first measurement data and the second measurement data; and a positioning module, configured to perform sliding window optimization on the variables and factors in the factor graph to obtain vehicle pose information, wherein the objective function used for the sliding window optimization comprises a lane line factor residual, an inertial pre-integration residual, a wheel speed factor residual and a global navigation satellite system (GNSS) position and velocity factor residual, the lane line factor residual being the perpendicular distance from a sampling point of the visually detected lane line to the target line segment in the target first lane line, the inertial pre-integration residual constraining the change in relative pose, velocity and zero bias between adjacent moments, the wheel speed factor residual constraining the relative translational motion of adjacent frames based on wheel speed measurements so as to keep the estimate consistent with the wheel speed sensor data, and the GNSS position and velocity factor keeping the position and velocity consistent with the GNSS sensor measurements.
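The positioning module's objective function is a sum of squared residuals from the four factor types, evaluated over a bounded window of recent states. A minimal evaluation-only sketch (a real solver such as Gauss-Newton or Levenberg-Marquardt would minimize this cost, and would marginalize rather than simply discard old states; all names are illustrative):

```python
import numpy as np

def sliding_window_cost(residual_fns, window_states):
    """Total cost over a sliding window: sum of squared residuals from
    the lane line, IMU pre-integration, wheel speed and GNSS factors.
    Each entry of residual_fns maps the window states to a residual
    scalar or vector."""
    total = 0.0
    for r in residual_fns:
        res = np.atleast_1d(r(window_states))
        total += float(res @ res)       # squared norm of this factor
    return total

def bound_window(window_states, max_size):
    """Keep the window bounded by dropping the oldest states.
    (A full implementation would marginalize them into a prior.)"""
    if len(window_states) > max_size:
        window_states = window_states[-max_size:]
    return window_states
```

Because the window carries several past states, each optimization re-solves over recent history, which is the mechanism the abstract credits for the improved fusion accuracy.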
- 5. A vehicle, characterized by comprising: a memory and a processor in communication with each other, the memory having computer instructions stored therein, and the processor executing the computer instructions to perform the vehicle positioning method of any one of claims 1 to 3.
- 6. A computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the vehicle positioning method according to any one of claims 1 to 3.
Description
Vehicle positioning method and device, vehicle and storage medium

Technical Field

The invention relates to the technical field of intelligent driving, and in particular to a vehicle positioning method, a vehicle positioning device, a vehicle and a storage medium.

Background

With the development of intelligent connected vehicle technology, perception of complex scenes in intelligent transportation systems, represented by autonomous passenger cars, commercial vehicles and taxis, has changed greatly. High-precision positioning is a key problem for automatic driving: accurate position, velocity and attitude information is essential for autonomous vehicle operation and directly affects motion planning, decision making and motion tracking performance. High-precision positioning in complex scenarios (e.g., urban canyons, tunnels, overpasses) is difficult to achieve with a single sensor, so fusion positioning based on multi-sensor information is often required. For example, methods based on the global navigation satellite system (Global Navigation Satellite System, GNSS) can achieve centimeter-level accuracy in open scenes, but are not reliable enough under occlusion and multipath conditions. To mitigate these GNSS problems, methods fusing GNSS with inertial measurement units (Inertial Measurement Unit, IMU) or odometers have been proposed. Many kinds of combined positioning exist, such as fusion positioning based on IMU, GNSS, lidar and image sensors together with high-precision maps. However, current fusion positioning methods still have limitations, and positioning accuracy remains insufficient.

Disclosure of Invention

In view of the above, the invention provides a vehicle positioning method, a device, a vehicle and a storage medium, so as to address the limited positioning accuracy of current fusion positioning methods.
In a first aspect, the present invention provides a vehicle positioning method, the method comprising: acquiring image information collected by an image sensor and vehicle motion information and/or position information collected by a target sensor, wherein the target sensor is a sensor other than the image sensor; acquiring a visually detected lane line from the image information; acquiring first measurement data for a lane line factor based on the visually detected lane line and a map, and acquiring second measurement data for corresponding factors based on the vehicle motion information and/or position information; constructing a factor graph comprising the lane line factor and target factors corresponding to the target sensor based on the first measurement data and the second measurement data; and performing sliding window optimization on the factor graph to obtain vehicle pose information. In an optional implementation, after the sliding window optimization is performed on the factor graph to obtain the vehicle pose information, the method further includes: acquiring target information collected by an inertial sensor at a target time, wherein the target time is later than the time corresponding to the vehicle pose information; and predicting the vehicle pose based on the vehicle pose information and the target information to obtain predicted vehicle pose information for the target time. In an alternative embodiment, obtaining the first measurement data of the lane line factor based on the visually detected lane line and the map includes: acquiring at least one first lane line from the map according to the most recently acquired vehicle pose information; obtaining, from the at least one first lane line, a target first lane line matched with each visually detected lane line; and acquiring a target line segment in the target first lane line corresponding to a sampling point of the visually detected lane line, thereby obtaining the first measurement data.
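One of the factors named above is the inertial pre-integration residual, which constrains the relative motion between adjacent frames. A simplified sketch of its position and velocity terms, omitting the rotation and bias terms for brevity (the patent gives no formulas; this follows the standard pre-integration form under that assumption, with illustrative names):

```python
import numpy as np

def preintegration_residual(p_i, v_i, p_j, v_j, dp_meas, dv_meas, dt,
                            g=np.array([0.0, 0.0, -9.81])):
    """Penalizes disagreement between the optimized relative motion of
    adjacent frames i, j and the pre-integrated IMU measurements
    (dp_meas, dv_meas). Rotation and zero-bias terms are omitted."""
    # predicted relative translation/velocity change minus gravity effect
    r_p = (p_j - p_i - v_i * dt - 0.5 * g * dt**2) - dp_meas
    r_v = (v_j - v_i - g * dt) - dv_meas
    return np.concatenate([r_p, r_v])
```

When the two frame states agree exactly with the pre-integrated measurement, the residual is zero, so the factor exerts no pull on the optimizer.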
In an alternative embodiment, obtaining a target first lane line matching each visually detected lane line from the at least one first lane line includes: associating the visually detected lane lines with the corresponding first lane lines based on the edge attribute characteristics of the lane lines; for each associated pair of visually detected lane line and first lane line, acquiring a first distance between them based on the most recently acquired vehicle pose information; compensating the most recently acquired vehicle pose information with the first distance to obtain prior vehicle pose information; projecting the visually detected lane line onto the map using the prior vehicle pose information, and calculating a second distance between the visually detected lane line and the corresponding first lane line; and if the second distance is smaller than the preset threshold, determining the first lane line corresponding to the visually detected lane line as the target first lane line matched with it. In an alternative embodiment, the residual of the lane line factor is the distance from a sampling point of the visually detected lane line to the target line segment in the target first lane line.
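The final gating step of the matching procedure above — accept an association only if the projected detection lies within a preset distance of a map lane line — can be sketched as follows. The distance here is a mean nearest-point gap between sampled polylines, one plausible choice the patent leaves unspecified; all names are illustrative:

```python
import numpy as np

def match_lane_line(det_pts, candidates, threshold):
    """Pick the map lane line (by index) closest to the projected
    detection, accepting the match only if the distance is below the
    preset threshold; otherwise return None.
    det_pts: (N, 2) sampled points of the projected detected lane line.
    candidates: list of (M, 2) sampled map lane lines."""
    best, best_d = None, float("inf")
    for idx, map_pts in enumerate(candidates):
        # mean nearest-point distance between the sampled polylines
        d = np.mean([np.min(np.linalg.norm(map_pts - p, axis=1))
                     for p in det_pts])
        if d < best_d:
            best, best_d = idx, d
    return best if best_d < threshold else None
```

Rejecting matches above the threshold keeps a badly projected detection (e.g. under a poor pose prior) from corrupting the lane line factor.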