EP-4155669-B1 - MULTI-SENSOR FUSION-BASED SLAM METHOD AND SYSTEM
Inventors
- LIU, JITING
Dates
- Publication Date: 2026-05-13
- Application Date: 2020-05-28
Claims (8)
- A multi-sensor fusion-based Simultaneous Localization and Mapping, SLAM, method for a server, comprising: obtaining (S102) a plurality of sensor data regarding a surrounding environment of a moving platform, the plurality of sensor data comprising point cloud data, image data, inertial measurement unit, IMU, data, and global navigation satellite system, GNSS, data; performing (S104) hierarchical processing on the plurality of sensor data to generate a plurality of localization information, wherein one sensor data corresponds to one localization information; obtaining (S106) target localization information of the moving platform based on the plurality of localization information; generating (S108) a high-precision local map based on the target localization information; and performing (S110) a closed-loop detection operation on the high-precision local map to obtain a high-precision global map of the moving platform, wherein the step of obtaining (S102) the plurality of sensor data regarding the surrounding environment of the moving platform comprises: with a laser as a benchmark, calibrating (S1021) position relationships among a camera, an IMU, a GNSS and the laser to obtain calibration information, wherein the laser, the camera, the IMU, and the GNSS are all sensors on the moving platform; with time of the GNSS as a benchmark, synchronizing (S1022) time of the laser, time of the camera, and time of the IMU to a current time system of the GNSS; and synchronously collecting (S1023) data from the laser, the camera, the IMU and the GNSS to obtain the plurality of sensor data regarding the surrounding environment of the moving platform, wherein the point cloud data is the data collected from the laser, the image data is the data collected from the camera, the IMU data is the data collected from the IMU, and the GNSS data is the data collected from the GNSS.
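The time-synchronization step (S1022) of the claim above can be illustrated with a minimal sketch: each sensor's timestamps are shifted into the GNSS time system before the streams are merged. The per-sensor clock offsets and sample values below are illustrative assumptions, not figures from the patent.

```python
# Illustrative sketch of step S1022: aligning sensor timestamps to the GNSS
# time base. Offsets are hypothetical per-sensor clock corrections.
from dataclasses import dataclass

@dataclass
class Sample:
    sensor: str
    t: float  # timestamp in the sensor's own clock (seconds)

def to_gnss_time(samples, offsets):
    """Shift each sample into the GNSS time system and return the samples
    merged in chronological order, ready for synchronous collection (S1023)."""
    shifted = [Sample(s.sensor, s.t + offsets[s.sensor]) for s in samples]
    return sorted(shifted, key=lambda s: s.t)

# Assumed clock offsets relative to GNSS time (illustrative values).
offsets = {"laser": 0.05, "camera": -0.02, "imu": 0.001}
stream = [Sample("camera", 10.00), Sample("laser", 9.90), Sample("imu", 9.97)]
aligned = to_gnss_time(stream, offsets)
```

After the shift, all samples share one time base, so frames from different sensors can be associated by timestamp alone.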
- The method according to claim 1, wherein in the step of performing (S104) hierarchical processing on the plurality of sensor data, a plurality of localization information is generated, which includes initial localization information, first localization information, and second localization information, comprising: generating (S1041) the initial localization information based on the IMU data, the GNSS data, and the calibration information; generating (S1042) the first localization information by using visual SLAM on the basis of the initial localization information and the image data; and generating (S1043) the second localization information by using laser SLAM on the basis of the first localization information and the point cloud data.
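The hierarchy in this claim is a cascade: each layer takes the previous layer's pose as its initial guess and returns a refined pose. The sketch below shows only that data flow; the refinement functions are trivial stand-ins, not the patented visual or laser SLAM algorithms, and all numeric values are illustrative.

```python
# Illustrative cascade for S104: initial -> visual -> laser localization.
# Each "correction" below is a placeholder for a full SLAM refinement.

def imu_gnss_init(gnss_pos, imu_bias):
    # S1041: initial localization from GNSS position corrected by IMU data.
    return [p - b for p, b in zip(gnss_pos, imu_bias)]

def visual_refine(pose, image_correction):
    # S1042: visual SLAM refines the initial pose (stand-in correction term).
    return [p + c for p, c in zip(pose, image_correction)]

def laser_refine(pose, lidar_correction):
    # S1043: laser SLAM refines the visual pose (stand-in correction term).
    return [p + c for p, c in zip(pose, lidar_correction)]

initial = imu_gnss_init([100.0, 200.0, 10.0], [0.1, -0.1, 0.0])
first = visual_refine(initial, [0.02, 0.03, -0.01])
second = laser_refine(first, [-0.005, 0.001, 0.002])
```

The point of the cascade is that each stage starts close to the true pose, so the next, more precise stage converges reliably.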
- The method according to claim 2, wherein the step of obtaining (S106) target localization information of the moving platform based on the plurality of localization information comprises: extracting (S1061) a keyframe matching point set of the image data and a point cloud data matching point set; generating (S1062) a comprehensive localization information database based on the second localization information, the IMU data, the GNSS data, the keyframe matching point set, and the point cloud data matching point set; performing (S1063) joint optimization on data sets in the comprehensive localization information database to obtain a high-precision trace of the moving platform; and using (S1064) the high-precision trace as the target localization information.
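A minimal intuition for the joint optimization in S1063: estimates of the same epoch from different sources can be fused by inverse-variance weighted least squares. The sketch below reduces this to one dimension and two sources (laser SLAM position vs. GNSS observation); the actual claim optimizes a full database of matching point sets and measurements, and the variances here are assumed values.

```python
# Illustrative 1D inverse-variance fusion standing in for the joint
# optimization of S1063. Variances (0.01, 0.04) are hypothetical.

def fuse(est_a, var_a, est_b, var_b):
    """Weighted least-squares fusion of two estimates of the same quantity."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    return (w_a * est_a + w_b * est_b) / (w_a + w_b)

# Per-epoch fusion of laser-SLAM positions with GNSS positions.
epochs = [(1.00, 1.10), (2.00, 2.05)]  # (x_slam, x_gnss) pairs, illustrative
trace = [fuse(x_slam, 0.01, x_gnss, 0.04) for x_slam, x_gnss in epochs]
```

The fused value sits closer to the lower-variance source, which is the same principle a full factor-graph optimizer applies jointly over the whole trace.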
- The method according to claim 3, wherein in the step of generating (S108) a high-precision local map based on the target localization information, the high-precision local map includes a local image map and a local point cloud three-dimensional scene map, comprising: resolving (S1081) position and attitude information of the keyframe of the image data based on the high-precision trace to generate the local image map; and resolving (S1082) position and attitude information of the point cloud data based on the high-precision trace to generate the local point cloud three-dimensional scene map.
- The method according to claim 4, wherein the step of performing (S110) a closed-loop detection operation on the high-precision local map to obtain a high-precision global map of the moving platform comprises: performing (S1101) the closed-loop detection operation on the high-precision local map to obtain a local map rotation and translation matrix; constructing (S1102) an image optimization pose constraint based on the local map rotation and translation matrix; correcting (S1103) the high-precision trace by using the image optimization pose constraint to obtain a corrected high-precision trace; and obtaining (S1104) the high-precision global map of the moving platform based on the corrected high-precision trace.
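The trace correction in S1103 can be illustrated with a common simplification: when loop closure reveals a drift between the trajectory's end and the revisited start, that drift is distributed linearly along the trace. The claim itself builds a pose constraint from a full rotation and translation matrix; the 1D version below only conveys the idea, and the trace values are made up.

```python
# Illustrative 1D loop-closure correction standing in for S1103:
# distribute the detected end-of-loop drift linearly along the trace.

def correct_trace(trace, drift):
    """Subtract a linearly growing share of the drift from each pose."""
    n = len(trace) - 1
    return [x - drift * (i / n) for i, x in enumerate(trace)]

# A trace that should return to 0.0 at loop closure but drifted to 0.4.
trace = [0.0, 1.0, 2.0, 1.0, 0.4]
corrected = correct_trace(trace, trace[-1] - 0.0)
```

After correction the endpoint coincides with the revisited start, and intermediate poses are adjusted proportionally, which is what keeps the resulting global map consistent.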
- A multi-sensor fusion-based SLAM mapping system for a server, comprising an obtaining module (10), a hierarchical processing module (20), a localizing module (30), a first generation module (40), and a second generation module (50), wherein, the obtaining module (10) is configured to obtain a plurality of sensor data regarding a surrounding environment of a moving platform, the plurality of sensor data comprising point cloud data, image data, inertial measurement unit, IMU, data, and global navigation satellite system, GNSS, data; the hierarchical processing module (20) is configured to perform hierarchical processing on the plurality of sensor data to generate a plurality of localization information, wherein one sensor data corresponds to one localization information; the localizing module (30) is configured to obtain target localization information of the moving platform based on the plurality of localization information; the first generation module (40) is configured to generate a high-precision local map based on the target localization information; and the second generation module (50) is configured to perform a closed-loop detection operation on the high-precision local map to obtain a high-precision global map of the moving platform, wherein the obtaining module (10) further comprises a calibration unit (11), a synchronization unit (12), and a collection unit (13), wherein, the calibration unit (11) is configured with a laser as a benchmark to calibrate position relationships among a camera, an IMU, a GNSS, and the laser to obtain calibration information, wherein the laser, the camera, the IMU and the GNSS are all sensors on the moving platform; the synchronization unit (12) is configured with time of the GNSS as a benchmark to synchronize time of the laser, time of the camera, and time of the IMU to a current time system of the GNSS; and the collection unit (13) is configured to synchronously collect data from the laser, the camera, the IMU, and the GNSS 
to obtain the plurality of sensor data regarding the surrounding environment of the moving platform, wherein the point cloud data is the data collected from the laser, the image data is the data collected from the camera, the IMU data is the data collected from the IMU, and the GNSS data is the data collected from the GNSS.
- An electronic device comprising a memory, a processor and a computer program stored in the memory and executable by the processor, wherein the processor, when executing the computer program, implements the steps in the method according to any one of claims 1 to 5.
- A computer-readable medium having nonvolatile program code executable by a processor, wherein the program code, when executed, causes the processor to perform the method according to any one of claims 1 to 5.
Description
TECHNICAL FIELD The present invention relates to the field of navigational multi-sensor fusion technologies, and in particular, to a multi-sensor fusion-based Simultaneous Localization And Mapping (SLAM) method, system, electronic device and computer-readable medium. BACKGROUND SLAM technology performs real-time localization and map construction. That is, data regarding a surrounding environment collected by sensors is processed, the current location of a moving system in an unknown environment is fed back in real time, and a map of the surrounding environment of the moving system is drawn simultaneously. This map may be a 2D planar map or a 3D map of the surrounding environment. The technology has been widely used in robotics, autonomous driving, virtual reality, mapping, agriculture, forestry, electric power, and construction, among other industries. At present, common sensor units include a laser, an inertial measurement unit (IMU), a vision camera, and a global navigation satellite system (GNSS). Relatively mature SLAM algorithms may be broadly classified into laser SLAM and visual SLAM. In laser SLAM, a laser sensor is mainly used to obtain data for simultaneous localization and mapping. A laser does not depend on the lighting of a surrounding environment and can scan high-precision 3D information of the surrounding environment, and laser SLAM algorithms are relatively robust. As the cost of lasers decreases, laser SLAM is gradually becoming one of the popular research directions in the SLAM field. However, in an environment without obvious structures, for example, a flat wall, grassland, or a narrow corridor, lasers cannot detect effective environmental features, and as a result localization and mapping tend to fail. In visual SLAM, a camera sensor is mainly used to obtain image data of a surrounding environment, and the captured image information is used to perform localization and mapping.
Visual SLAM is low in cost and strong in visualization and has been the most popular direction in the field of SLAM research. However, a visual camera is very dependent on the lighting information and texture information of the surrounding environment. Once the lighting changes excessively or the texture is repetitive and monotonous, the mapping tends to fail. In both laser SLAM and visual SLAM, accumulated errors gradually increase with the elapse of time, resulting in reduced precision in localization and mapping. Closed-loop correction is a relatively popular method for addressing this at present. However, in large-scale mapping, limited by the surrounding environment, the achieved precision often fails to meet that required in current map production. CN 109341706 discloses a manufacturing method of a multi-feature fusion map for an autonomous vehicle. The method comprises the following steps: collecting data of each sensor by utilizing an on-board laser radar device; generating a three-dimensional point cloud map by utilizing inertial measurement unit (IMU) data and laser measurement data; generating a visual feature map by utilizing the IMU data and camera data; preprocessing GPS data, and converting geodetic coordinates into spatial rectangular coordinates; performing global optimization fusion by utilizing a continuous-time SLAM algorithm; and generating the multi-feature fusion map. Various sensor data of a laser radar, an IMU, a camera, a GPS, etc., are integrated, so that the stability and the accuracy are greatly improved. The data are processed by fusing a visual SLAM algorithm and a laser SLAM algorithm, thereby obtaining a better composition effect than a single visual SLAM algorithm or a single laser SLAM algorithm alone. CN 109900265 discloses a camera-assisted Beidou-based robot positioning algorithm.
The algorithm includes the time-space synchronization of a Beidou satellite system, an inertial sensor and a visual sensor; the data pre-processing for the fused positioning of the Beidou satellite system, the inertial sensor and the visual sensor; and the fused precise positioning of the Beidou satellite system, the inertial sensor and the visual sensor. The visual sensor and the inertial sensor are mounted on a robot. CN 109556615 discloses a driving map generation method based on multi-sensor fusion cognition for automatic driving. An environment model is built for an intelligent vehicle. The driving map provides environment information such as lane information and obstacle information in a current environment for the intelligent vehicle by combining the motor ability of the vehicle with current vehicle road information, and comprises devices such as vehicle-mounted sensors, vehicle-mounted environment sensing equipment and a vehicle-mounted industrial personal computer. The vehicle-mounted environment sensing equipment comprises an intelligent camera, laser radars, a GPS and an inertial navigation device. The industrial personal computer is resp