CN-116067358-B - Multi-source data fusion map building and positioning method and system and automatic driving vehicle

Abstract

The application discloses a multi-source data fusion mapping and positioning method, a corresponding system, and an autonomous vehicle. Point cloud data and pose information are acquired and coupled to obtain odometer information, which is used for positioning. The odometer information is added to a map, and the map is updated at the front end. Odometer factors and GPS factors are then constructed according to the key frames in the odometer information and the corresponding GPS data, and their weights are adjusted to update the key frames; finally, the map is updated at the back end according to the updated key frames. Coupling the point cloud data and the pose information into odometer information that is added to the map and used for positioning ensures continuous, high-frequency mapping and localization, so that the vehicle can run normally. Constructing a factor graph from the key frames and the GPS data and optimizing it to adjust the weights of the odometer factors and GPS factors corrects abnormal states and accumulated errors at low frequency, achieving accurate, high-precision positioning from multi-source data.
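The abstract describes a two-rate architecture: a high-frequency front end that fuses odometry into the map, and a lower-frequency back end that re-optimizes key frames. As a rough illustration only (the class, method names, and the `backend_every` ratio below are hypothetical, not from the patent), the relationship between the two update rates can be sketched as:

```python
import numpy as np

# Hypothetical sketch of the two-rate pipeline from the abstract:
# a high-frequency front end fuses point clouds and poses into the
# map, while a low-frequency back end re-optimizes the key frames.
class FusionMapper:
    def __init__(self, backend_every=10):
        self.map_points = []          # accumulated point cloud map
        self.keyframes = []           # poses selected as key frames
        self.backend_every = backend_every
        self.frontend_ticks = 0
        self.backend_runs = 0

    def frontend_update(self, cloud, pose):
        """High-frequency step: add the odometry result to the map."""
        self.map_points.append((cloud, pose))
        self.keyframes.append(pose)
        self.frontend_ticks += 1
        # The back end runs at a lower frequency than the front end.
        if self.frontend_ticks % self.backend_every == 0:
            self.backend_update()

    def backend_update(self):
        """Low-frequency step: re-optimize key frames (stubbed here).

        A real system would optimize a factor graph over the key
        frames; this sketch only counts the invocations.
        """
        self.backend_runs += 1

mapper = FusionMapper(backend_every=5)
for _ in range(12):
    mapper.frontend_update(cloud=np.zeros((4, 3)), pose=np.eye(4))
```

With 12 front-end updates and `backend_every=5`, the back end runs only twice, reflecting the claim that back-end updates occur at a lower frequency than front-end updates.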

Inventors

  • WANG FAPING
  • SHAO PENGTAO
  • LI NANXING

Assignees

  • 深圳海星智驾科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2022-12-12

Claims (8)

  1. A multi-source data fusion mapping and positioning method, characterized by comprising the following steps: acquiring point cloud data; acquiring pose information; obtaining odometer information and positioning according to the point cloud data and the pose information; adding the odometer information to a map, and updating the map at the front end; constructing an odometer factor according to the key frames in the odometer information; constructing a GPS factor according to GPS data corresponding to the odometer information; adjusting weights of the odometer factor and the GPS factor to update the key frames; and updating the map at the back end according to the updated key frames; wherein the frequency of the back-end updates is lower than the frequency of the front-end updates; the adjusting of the weights of the odometer factor and the GPS factor comprises adjusting the weights of the odometer factor and the GPS factor using a reinforcement learning model; the inputs of the reinforcement learning model comprise any one or more of the following three: a depth map obtained by converting the point cloud data corresponding to the current key frame according to angle information; the relative transformation of the point cloud data and pose information corresponding to the current key frame with respect to the previous key frame; and the relative transformation of the current GPS data corresponding to the current key frame with respect to the previous GPS data corresponding to the previous key frame.
  2. The multi-source data fusion mapping and positioning method according to claim 1, wherein obtaining odometer information and positioning according to the point cloud data and the pose information comprises: matching the point cloud data with corresponding historical point cloud data in the map to obtain a laser pose; matching the pose information with the laser pose to obtain a matching residual between the pose information and the laser pose; and, when the matching residual is smaller than a preset residual threshold, positioning and outputting a running track, wherein the running track is determined according to the point cloud data and the pose information.
  3. The multi-source data fusion mapping and positioning method according to claim 1, wherein adding the odometer information to a map and updating the map at the front end comprises: deleting historical point cloud data in the map that lies outside the visible range when the visible range of the lidar exceeds the boundary of the map; and adding the point cloud data and the pose information to the map to update the map.
  4. The multi-source data fusion mapping and positioning method according to claim 1, wherein constructing an odometer factor according to the key frames in the odometer information comprises: selecting key frames in the odometer information according to the pose information; and constructing the odometer factor according to the key frames, wherein the odometer factor represents the transformation matrix of a key frame relative to the previous frame.
  5. The multi-source data fusion mapping and positioning method according to claim 1, wherein constructing a GPS factor according to the GPS data corresponding to the odometer information comprises: searching a GPS buffer for GPS data whose time difference from the odometer information is smaller than a time threshold; and constructing the GPS factor when the GPS data is a fixed solution.
  6. The multi-source data fusion mapping and positioning method according to claim 1, wherein updating the map at the back end according to the updated key frames comprises: adjusting the position of the corresponding point cloud data according to the updated key frames; and updating the map again according to the position-adjusted point cloud data.
  7. A multi-source data fusion mapping and positioning system, comprising: a point cloud acquisition module for acquiring point cloud data; a pose acquisition module for acquiring pose information; an odometer acquisition module for obtaining odometer information and positioning according to the point cloud data and the pose information; a front-end updating module for adding the odometer information to a map and updating the map at the front end; a first construction module for constructing an odometer factor according to the key frames in the odometer information; a second construction module for constructing a GPS factor according to the GPS data corresponding to the odometer information; a key frame updating module for adjusting the weights of the odometer factor and the GPS factor to update the key frames, wherein the adjusting comprises adjusting the weights of the odometer factor and the GPS factor using a reinforcement learning model, and the inputs of the reinforcement learning model comprise any one or more of: a depth map obtained by converting the point cloud data corresponding to the current key frame according to angle information, the relative transformation of the point cloud data and pose information corresponding to the current key frame with respect to the previous key frame, and the relative transformation of the current GPS data corresponding to the current key frame with respect to the previous GPS data corresponding to the previous key frame; and a back-end updating module for updating the map at the back end according to the updated key frames; wherein the frequency of the back-end updates is lower than the frequency of the front-end updates.
  8. An autonomous vehicle, comprising: a vehicle body; and the multi-source data fusion mapping and positioning system of claim 7 disposed on the vehicle body.
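Claims 1 and 7 name three candidate inputs to the reinforcement learning model: a depth map projected from the current key frame's point cloud via per-point angles, and two relative transformations (pose and GPS) between consecutive key frames. As a non-authoritative sketch, with illustrative projection parameters (64 rows, 1024 columns) and a simple angle-to-pixel mapping that the patent does not specify, these inputs could be assembled as:

```python
import numpy as np

def cloud_to_depth_map(points, rows=64, cols=1024):
    """Project an (N, 3) point cloud to a range image using each
    point's horizontal (azimuth) and vertical (elevation) angle.
    The grid size and angle-to-row mapping are illustrative."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                       # horizontal angle
    elevation = np.arcsin(np.clip(z / np.maximum(rng, 1e-9), -1.0, 1.0))
    c = ((azimuth + np.pi) / (2 * np.pi) * (cols - 1)).astype(int)
    # Assume a +/- 45-degree vertical field of view for this sketch.
    r = ((elevation + np.pi / 4) / (np.pi / 2) * (rows - 1)).astype(int)
    r = np.clip(r, 0, rows - 1)
    depth = np.zeros((rows, cols))
    depth[r, c] = rng
    return depth

def relative_transform(T_prev, T_curr):
    """Relative SE(3) change of the current key frame with respect
    to the previous one: T_prev^{-1} @ T_curr."""
    return np.linalg.inv(T_prev) @ T_curr

pts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.5]])
depth = cloud_to_depth_map(pts)
rel = relative_transform(np.eye(4), np.eye(4))  # identity: no motion
```

The same `relative_transform` shape would apply to both the pose input and the GPS input (with GPS positions first converted to a local frame).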

Description

Multi-source data fusion map building and positioning method and system and automatic driving vehicle

Technical Field

The application relates to the technical field of automatic driving positioning, in particular to a multi-source data fusion mapping and positioning method and system and an autonomous vehicle.

Background

With the continuous development of automatic driving technology, more and more autonomous vehicles are being produced; however, positioning, the core technology of autonomous vehicles, remains a current difficulty. Common positioning modes include global navigation satellite systems (GNSS), real-time kinematic (RTK) differential techniques, and inertial navigation (INS, with an IMU inertial measurement unit). GNSS includes the GPS global positioning system and the BeiDou navigation satellite system; RTK includes multi-frequency carrier-phase differencing (centimeter level) and single-frequency carrier-phase differencing (sub-meter level). Each positioning mode has advantages and disadvantages, and to improve positioning accuracy and timeliness, several modes can be adopted simultaneously. However, the refresh frequencies of the various modes differ, their data acquisition accuracies differ, and errors caused by an abnormality in a single sensor can accumulate over time, so the final positioning error becomes large.

Disclosure of Invention

The present application has been made to solve the above-mentioned technical problems. The embodiments of the application provide a multi-source data fusion mapping and positioning method and system and an autonomous vehicle, which solve these technical problems.
According to one aspect of the application, a multi-source data fusion mapping and positioning method is provided, comprising: obtaining point cloud data; obtaining pose information; obtaining odometer information according to the point cloud data and the pose information, and positioning; adding the odometer information to a map to update the map at the front end; constructing an odometer factor according to a key frame in the odometer information; constructing a GPS (Global Positioning System) factor according to GPS data corresponding to the odometer information; adjusting weights of the odometer factor and the GPS factor to update the key frame; and updating the map at the back end according to the updated key frame, wherein the frequency of back-end updates is lower than that of front-end updates.

In an embodiment, obtaining odometer information according to the point cloud data and the pose information, and positioning, comprises: matching the point cloud data with corresponding historical point cloud data in the map to obtain a laser pose; matching the pose information with the laser pose to obtain a matching residual between the pose information and the laser pose; and positioning and outputting a running track when the matching residual is smaller than a preset residual threshold, wherein the running track is determined according to the point cloud data and the pose information.

In one embodiment, adding the odometer information to a map to update the map at the front end includes: deleting historical point cloud data in the map that lies outside the visible range when the visible range of the lidar exceeds the boundary of the map; and adding the point cloud data and the pose information to the map to update the map.
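The residual check described above gates the localization output: the pose from inertial/wheel sources is compared against the laser (scan-match) pose, and a track point is emitted only when the mismatch is below a preset threshold. A minimal sketch, with hypothetical function names and an illustrative 0.5 m threshold (the patent does not give a value), only comparing the translation part:

```python
import numpy as np

def pose_residual(pose_a, pose_b):
    """Translation magnitude of the mismatch between two 4x4 poses."""
    delta = np.linalg.inv(pose_a) @ pose_b
    return float(np.linalg.norm(delta[:3, 3]))

def localize(imu_pose, laser_pose, residual_threshold=0.5):
    """Output a localization result only when the matching residual
    between the pose information and the laser pose is small enough."""
    if pose_residual(imu_pose, laser_pose) < residual_threshold:
        return laser_pose        # accept: output this track point
    return None                  # reject: residual too large

good = np.eye(4)                 # laser pose agrees with IMU pose
bad = np.eye(4)
bad[0, 3] = 2.0                  # 2 m translation mismatch
accepted = localize(np.eye(4), good)
rejected = localize(np.eye(4), bad)
```

A fuller residual would also compare the rotation parts; this sketch keeps only the translation term for clarity.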
In one embodiment, constructing the odometer factor according to the key frames in the odometer information comprises: selecting key frames in the odometer information according to the pose information; and constructing the odometer factor according to the key frames, wherein the odometer factor represents the transformation matrix of a key frame relative to the previous frame.

In one embodiment, constructing the GPS factor according to the GPS data corresponding to the odometer information comprises: searching a GPS buffer for GPS data whose time difference from the odometer information is smaller than a time threshold; and constructing the GPS factor when the GPS data is a fixed solution.

In an embodiment, adjusting the weights of the odometer factor and the GPS factor includes adjusting the weights of the odometer factor and the GPS factor using a reinforcement learning model. In an embodiment, the inputs of the reinforcement learning model comprise any one or more of: a depth map obtained by converting the point cloud data corresponding to the current key frame according to angle information; the relative transformation of the point cloud data and pose information corresponding to the current key frame with respect to the previous key frame; and the relative transformation of the current GPS data corresponding to the current key frame with respect to the previous GPS data corresponding to the previous key frame.
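The GPS factor construction described above has two gates: a timestamp association (the GPS fix must lie within a time threshold of the odometry key frame) and a quality check (the fix must be an RTK fixed solution, not a float solution). A hedged sketch, in which the buffer record fields, the "fixed"/"float" status labels, and the 0.1 s threshold are all illustrative assumptions:

```python
# Hypothetical sketch of GPS factor construction: search a GPS buffer
# for the fix closest in time to the key frame, within a threshold,
# and build a factor only when that fix is an RTK fixed solution.
def find_gps_factor(keyframe_time, gps_buffer, time_threshold=0.1):
    """Return a (time, position) tuple for the GPS factor, or None
    when no fixed solution lies within the time threshold."""
    best = None
    for fix in gps_buffer:
        dt = abs(fix["time"] - keyframe_time)
        # Gate 1: timestamp within threshold. Gate 2: fixed solution.
        if dt < time_threshold and fix["status"] == "fixed":
            if best is None or dt < abs(best["time"] - keyframe_time):
                best = fix
    if best is None:
        return None
    return (best["time"], best["position"])

buffer = [
    {"time": 10.00, "status": "float", "position": (0.0, 0.0, 0.0)},
    {"time": 10.02, "status": "fixed", "position": (1.0, 2.0, 0.5)},
    {"time": 11.50, "status": "fixed", "position": (5.0, 2.0, 0.5)},
]
factor = find_gps_factor(10.0, buffer)
```

Note that the 10.00 s record is rejected despite the exact timestamp match, because it is only a float solution; the 10.02 s fixed solution is selected instead.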