CN-122015807-A - Multi-sensor fusion simultaneous localization and mapping method and device for unmanned equipment
Abstract
The invention provides a multi-sensor fusion simultaneous localization and mapping (SLAM) method and device for unmanned equipment, belonging to the technical field of unmanned equipment and addressing the prior-art problems of sparse features in weak-texture regions and insufficient use of the geometric constraints of structured scenes. The method classifies planar points, fits straight lines to them to form an environment outline, derives virtual edge points by finding the intersections of those lines, merges the virtual edge points with the edge points obtained from curvature analysis to compile a final edge point set, performs feature matching and point cloud registration on that set to generate a lidar odometry factor, builds a factor graph together with the IMU motion constraints, and constructs a global map from the poses output by back-end optimization. The invention provides real-time localization and mapping for unmanned equipment.
Inventors
- WANG TONG
- GUO SHAONING
- GUO JIE
- CAI ZIHAO
- GAO SHAN
- CHEN LIWEI
- OUYANG MIN
- XING ZHANQIANG
Assignees
- 哈尔滨工程大学 (Harbin Engineering University)
Dates
- Publication Date: 20260512
- Application Date: 20260213
Claims (10)
- 1. A multi-sensor fusion simultaneous localization and mapping method for unmanned equipment, characterized by comprising the following steps: S1, performing IMU pre-integration and lidar data preprocessing in the SLAM system to obtain motion-distortion-corrected lidar point cloud data and the IMU motion constraints between adjacent frames; S2, classifying planar points in the motion-distortion-corrected lidar point cloud data, fitting straight lines to the classified points to form an environment outline, deriving virtual edge points by finding the intersections of those lines, merging the virtual edge points with the edge points obtained from curvature analysis, and compiling a final edge point set; and S3, performing feature matching and point cloud registration on the final edge point set to generate a lidar odometry factor, constructing a factor graph together with the IMU motion constraints, and constructing a global map from the poses output by back-end optimization.
- 2. The multi-sensor fusion simultaneous localization and mapping method for unmanned equipment according to claim 1, wherein S1 comprises: S11, pre-integrating the IMU data and computing the relative motion increment between adjacent lidar frames from the translational acceleration and angular velocity of the carrier acquired by the IMU at each instant; and S12, performing spherical linear interpolation using the rotation and translation of the carrier at each instant obtained from the IMU pre-integration, and transforming all points of each point cloud frame to the start or end instant of the scan, thereby removing the point cloud distortion.
- 3. The multi-sensor fusion simultaneous localization and mapping method for unmanned equipment according to claim 1, wherein S2 comprises: S21, dividing each laser scan line into six segments and processing each segment independently: sorting the points of each segment by curvature in descending order, selecting the 20 points with the largest curvature as corner points and marking their neighboring points, and selecting points whose curvature is below a threshold as planar points; S22, computing the point cloud normal vectors by eigen-decomposition of the covariance matrix, judging ground points with an included-angle criterion, and dividing the planar points into ground points and non-ground points according to the characteristics of the surface normal vectors; S23, using the ground points for virtual point generation: separating the ground points into geometrically continuous sub-clusters within the divided independent subspaces, constructing line hypotheses by random sampling, screening inliers, and keeping the line with the largest number of inliers; S24, computing the intersection of each pair of lines, adding the intersection to the virtual point set if the included angle between the lines exceeds a threshold and the intersection lies near a subspace boundary, and projecting non-ground points onto the ground plane (the line fitting and intersection of S23-S24 are sketched after the claims); S25, using the intensity of the laser point cloud as an auxiliary feature: mean-filtering the intensity values, dividing each point cloud frame into six sub-regions, setting an adaptive intensity threshold from the histogram median of each sub-region, and selecting points whose intensity exceeds the threshold and whose local intensity change rate exceeds a minimum change-rate threshold as edge points; and S26, merging the edge points obtained from curvature analysis and the edge points extracted from the intensity information with the virtual points to compile the final edge point set.
- 4. The multi-sensor fusion simultaneous localization and mapping method for unmanned equipment according to claim 1, wherein S3 comprises: S31, registering edge points and planar points separately, namely finding, for each edge point of the current frame, the nearest edge line in the previous frame or the local map and computing the point-to-line distance, and finding, for each planar point of the current frame, the nearest planar patch in the previous frame or the local map and computing the point-to-plane distance; S32, hierarchical robust cost-function joint optimization: establishing the edge-point-to-line and planar-point-to-plane geometric residual equations and introducing the motion constraint provided by the IMU pre-integration as a regularization term; introducing an M-estimator kernel function (Huber or Cauchy) that automatically suppresses the influence of abnormal matches on the optimization target by dynamically re-weighting the residuals; and solving the nonlinear cost function iteratively with the Levenberg-Marquardt algorithm, exploiting the sparsity of the Hessian matrix to update the pose estimate rapidly (the robust weighting of S32 is sketched after the claims); and S33, using the point cloud registration result as the lidar odometry factor, forming a factor graph together with the IMU factors, optimizing the factor graph by graph optimization, obtaining the poses of the two frames returned by loop-closure detection to perform global pose optimization, and constructing the global map with the optimized poses.
- 5. The multi-sensor fusion simultaneous localization and mapping method for unmanned equipment according to claim 1, wherein in S11, when the IMU data are pre-integrated, the rotation integration of the IMU data within the time window of two lidar frames is expressed with a quaternion, the rotation integration formula being $q_{k+1} = q_k \otimes \begin{bmatrix} 1 \\ \tfrac{1}{2}(\hat{\omega}_k - b_g)\,\delta t \end{bmatrix}$ (1), where $\hat{\omega}_k$ is the gyroscope measurement, $b_g$ is the gyroscope zero bias, and $\delta t$ is the time interval (see the pre-integration sketch after the claims).
- 6. The multi-sensor fusion simultaneous localization and mapping method for unmanned equipment according to claim 1, wherein in S12 the point cloud distortion removal uses the spherical linear interpolation (Slerp) method; the interpolation parameter is $s = \dfrac{t - t_k}{t_{k+1} - t_k}$ (4), where $t$ is the acquisition time of the point and $t_k$, $t_{k+1}$ are the instants of the adjacent IMU measurements; the interpolated quaternion is $q_t = \dfrac{\sin\big((1-s)\theta\big)}{\sin\theta}\,q_k + \dfrac{\sin(s\theta)}{\sin\theta}\,q_{k+1}$ (5), where $q_k$ and $q_{k+1}$ are the quaternions at the adjacent IMU instants and $\theta$ is the angle between them; given a unit quaternion $q = (w, x, y, z)$, the corresponding rotation matrix is $R = \begin{bmatrix} 1-2(y^2+z^2) & 2(xy-wz) & 2(xz+wy) \\ 2(xy+wz) & 1-2(x^2+z^2) & 2(yz-wx) \\ 2(xz-wy) & 2(yz+wx) & 1-2(x^2+y^2) \end{bmatrix}$ (6); and a point $p_t$ is converted from the coordinate system of its acquisition instant to the coordinate system of the reference instant by $p_{ref} = R_{ref}^{-1}\,(R_t\,p_t + T_t - T_{ref})$ (7), where $R_t$ and $R_{ref}$ are the rotation matrices corresponding to $q_t$ and $q_{ref}$, and $T_t$ and $T_{ref}$ are the corresponding translation vectors (see the interpolation sketch after the claims).
- 7. The multi-sensor fusion simultaneous localization and mapping method for unmanned equipment according to claim 1, wherein in S21 the curvature is computed with the eigenvalue method of the covariance matrix; the covariance matrix of the neighborhood points is $C = \dfrac{1}{k}\sum_{i=1}^{k}(p_i - \bar{p})(p_i - \bar{p})^{T}$ (9), where $\bar{p}$ is the centroid of the neighborhood points and $k$ is the number of neighborhood points; decomposing the covariance matrix $C$ yields the eigenvalues $\lambda_1 \ge \lambda_2 \ge \lambda_3$, and the curvature is $c = \dfrac{\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3}$ (10); a larger $c$ indicates a more curved surface, corresponding to edge-feature corner points, and a smaller $c$ indicates a flatter surface, corresponding to planar-feature points (see the curvature sketch after the claims).
- 8. The multi-sensor fusion simultaneous localization and mapping method for unmanned equipment according to claim 1, wherein in step S25 the intensity data are mean-filtered; with a window of 2 the mean filtering formula is $\bar{I}_i = \dfrac{1}{5}\sum_{k=i-2}^{i+2} I_k$ (11), where $I_k$ is the intensity of the k-th point of the original point cloud and $\bar{I}_i$ is the filtered intensity of the i-th point; the standard deviation of the intensity within a sub-region is computed as in formula (12), where $\bar{I}_s$ is the mean intensity of sector $s$, $N$ is the set of points of the point cloud within the sub-region, and $I_i$ and $I_j$ are the intensities at points $i$ and $j$ (see the intensity sketch after the claims).
- 9. The multi-sensor fusion simultaneous localization and mapping method for unmanned equipment according to claim 1, wherein the edge point residual and the planar point residual in step S32 are computed respectively as $r_{e} = \dfrac{\lVert (p_i - p_a) \times d \rVert}{\lVert d \rVert}$ (23) and $r_{p} = \dfrac{n^{T}(p_i - p_b)}{\lVert n \rVert}$ (24), where $p_i$ is an edge point (respectively a planar point) of the current frame, $p_a$ is the start point of the reference-frame edge line, $d$ is the line direction vector, $n$ is the reference-frame plane normal vector, and $p_b$ is a point on the plane (see the residual sketch after the claims).
- 10. A multi-sensor fusion simultaneous localization and mapping device for unmanned equipment, characterized in that the device comprises: a data preprocessing module, configured to perform IMU pre-integration and lidar data preprocessing in the SLAM system to obtain motion-distortion-corrected lidar point cloud data and the IMU motion constraints between adjacent frames; an edge point set compiling module, configured to classify planar points in the motion-distortion-corrected lidar point cloud data, fit straight lines to the classified points to form an environment outline, derive virtual edge points by finding the intersections of those lines, merge the virtual edge points with the edge points obtained from curvature analysis, and compile a final edge point set; and a global map construction module, configured to perform feature matching and point cloud registration on the final edge point set, generate a lidar odometry factor, construct a factor graph together with the IMU motion constraints, and construct a global map from the poses output by back-end optimization.
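The rotation pre-integration of claim 5 amounts to a discrete quaternion update driven by the bias-corrected gyroscope rate. The following Python sketch illustrates formula (1) as reconstructed above; the function names and the (w, x, y, z) quaternion convention are ours, not the patent's.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of two quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_rotation(q, omega_meas, b_g, dt):
    """One pre-integration step: rotate q by the bias-corrected angular rate over dt,
    as in formula (1)."""
    half_angle = 0.5 * (omega_meas - b_g) * dt
    dq = np.concatenate(([1.0], half_angle))   # small-angle quaternion
    q_new = quat_mul(q, dq)
    return q_new / np.linalg.norm(q_new)       # renormalize to stay a unit quaternion
```

Chaining this update over every IMU sample between two lidar frames yields the relative rotation used as the inter-frame motion constraint of S1.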
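Claim 6 removes motion distortion by spherically interpolating between the IMU poses that bracket each point's acquisition time and re-projecting the point into the reference-time frame. A minimal sketch following formulas (4)-(7) as reconstructed above; the bracketing poses, timestamps, and names are illustrative assumptions.

```python
import numpy as np

def slerp(q0, q1, s):
    """Spherical linear interpolation between unit quaternions (w, x, y, z), formula (5)."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                      # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                   # nearly parallel: fall back to normalized lerp
        q = (1.0 - s) * q0 + s * q1
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1.0 - s) * theta) * q0 + np.sin(s * theta) * q1) / np.sin(theta)

def quat_to_rot(q):
    """Rotation matrix of a unit quaternion, formula (6)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def deskew_point(p, t, t0, t1, q0, q1, trans0, trans1, q_ref, trans_ref):
    """Re-project point p (acquired at time t) into the reference-instant frame, formula (7)."""
    s = (t - t0) / (t1 - t0)                       # interpolation parameter, formula (4)
    q_t = slerp(q0, q1, s)                         # attitude at the acquisition instant
    trans_t = (1.0 - s) * trans0 + s * trans1      # linearly interpolated translation
    R_t, R_ref = quat_to_rot(q_t), quat_to_rot(q_ref)
    return R_ref.T @ (R_t @ p + trans_t - trans_ref)
```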
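The curvature of claim 7 (used for the corner/planar split of S21) follows from an eigen-decomposition of the neighborhood covariance matrix. The sketch below uses the common surface-variation form λ3/(λ1+λ2+λ3); the exact eigenvalue combination in the patent's formula (10) may differ.

```python
import numpy as np

def neighborhood_curvature(neighbors):
    """Curvature of a point from its k neighbors (a k x 3 array).
    Small values indicate a locally planar patch, large values an edge or corner."""
    centroid = neighbors.mean(axis=0)
    diff = neighbors - centroid
    C = diff.T @ diff / len(neighbors)             # covariance matrix, formula (9)
    eigvals = np.linalg.eigvalsh(C)                # ascending: lambda3 <= lambda2 <= lambda1
    return eigvals[0] / max(eigvals.sum(), 1e-12)  # surface variation, cf. formula (10)
```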
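Steps S23-S24 of claim 3 derive virtual edge points by fitting straight lines to ground sub-clusters through random sampling and intersecting pairs of lines whose included angle exceeds a threshold. The 2-D sketch below shows that core step; the iteration count, inlier tolerance, and angle threshold are illustrative values, not values taken from the patent.

```python
import numpy as np

def fit_line_ransac(points, n_iter=100, inlier_tol=0.05, rng=None):
    """Fit a 2-D line (anchor point, unit direction) by random sampling, keeping the
    hypothesis with the most inliers (step S23). `points` is an (N, 2) array."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best_anchor, best_dir, best_count = None, None, -1
    for _ in range(n_iter):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        d = b - a
        if np.linalg.norm(d) < 1e-9:
            continue
        d = d / np.linalg.norm(d)
        n = np.array([-d[1], d[0]])              # line normal
        dist = np.abs((points - a) @ n)          # point-to-line distances
        count = int((dist < inlier_tol).sum())
        if count > best_count:
            best_anchor, best_dir, best_count = a, d, count
    return best_anchor, best_dir

def intersect_lines(p0, d0, p1, d1, min_angle_deg=30.0):
    """Intersection of two 2-D lines; returns None when the included angle is below the
    threshold (nearly parallel lines), mirroring the check in step S24."""
    if abs(np.dot(d0, d1)) > np.cos(np.deg2rad(min_angle_deg)):
        return None
    A = np.column_stack([d0, -d1])
    s, _ = np.linalg.solve(A, p1 - p0)
    return p0 + s * d0
```

Intersections that additionally lie near a subspace boundary would then be appended to the virtual point set, as claim 3 requires.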
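Step S25 of claim 3 (detailed in claim 8) uses intensity as an auxiliary edge cue: mean-filter the intensity values, then keep points whose filtered intensity exceeds a per-sub-region adaptive threshold and whose local change rate exceeds a minimum. A sketch for a single sub-region, assuming a half-width-2 filter window and an illustrative change-rate threshold.

```python
import numpy as np

def mean_filter_intensity(intensity, half_width=2):
    """Sliding mean filter over raw intensity values, cf. formula (11)."""
    kernel = np.ones(2 * half_width + 1) / (2 * half_width + 1)
    return np.convolve(intensity, kernel, mode="same")

def intensity_edge_mask(intensity, min_change_rate=0.05):
    """Candidate intensity edge points of one sub-region (step S25): filtered intensity
    above the sub-region median and local change rate above a minimum threshold."""
    filtered = mean_filter_intensity(intensity)
    threshold = np.median(filtered)          # adaptive threshold from the sub-region median
    change = np.abs(np.gradient(filtered))   # local intensity change rate
    return (filtered > threshold) & (change > min_change_rate)
```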
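The registration of claims 4 and 9 minimizes point-to-line and point-to-plane distances (formulas (23)-(24)), re-weighted by an M-estimator kernel. The sketch below shows the two residuals and a Huber weight; the Levenberg-Marquardt solve over the pose with the IMU regularization term of S32 is not reproduced here.

```python
import numpy as np

def point_to_line_residual(p, line_point, line_dir):
    """Distance from an edge point p to the reference edge line, formula (23)."""
    d = line_dir / np.linalg.norm(line_dir)
    return np.linalg.norm(np.cross(p - line_point, d))

def point_to_plane_residual(p, plane_normal, plane_point):
    """Signed distance from a planar point p to the reference plane, formula (24)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return float(np.dot(n, p - plane_point))

def huber_weight(residual, delta=0.1):
    """Huber M-estimator weight used to down-weight abnormal matches in S32."""
    r = abs(residual)
    return 1.0 if r <= delta else delta / r
```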
Description
Multi-sensor fusion simultaneous localization and mapping method and device for unmanned equipment
Technical Field
The invention belongs to the technical field of unmanned equipment, and particularly relates to providing unmanned equipment with real-time localization and mapping functions.
Background
Simultaneous localization and mapping (SLAM) is a core enabling technology for the autonomous operation of unmanned equipment (such as unmanned vehicles, service robots, and warehouse robots). In the field of unmanned equipment, SLAM gives a device real-time localization and environment-mapping capability and is an indispensable foundation for upper-layer functions such as autonomous navigation, path planning, dynamic obstacle avoidance, and task execution. The localization accuracy and map quality of unmanned devices directly determine the reliability and efficiency of task execution. The core challenge of SLAM is that an unmanned device must solve two interdependent problems, self-localization and environment mapping, in an unknown environment: accurate localization depends on an accurate map, while building the map depends on accurate localization, forming a continuously iterating optimization process. Since Csorba demonstrated the convergence of the SLAM problem, SLAM technology has developed rapidly, and SLAM systems based on different sensors have emerged, mainly including laser SLAM, visual SLAM, and multi-sensor fusion SLAM systems that fuse several sensors. Multi-sensor fusion SLAM further improves pose-estimation accuracy by exploiting the complementary advantages of several sensors (such as lidar, IMU, and camera), combining the ranging accuracy of laser SLAM with the rich information of visual SLAM to meet the requirements of unmanned equipment in complex environments. In this direction, LIO-Mapping achieves tight coupling between an IMU and a lidar by optimizing measurement residuals; Qin et al. achieve tight coupling through error-state filtering, improving mapping efficiency; the LIO-SAM algorithm proposed by Shan et al. uses IMU pre-integration to provide initial values for the lidar odometry and uses the lidar odometry result to optimize the IMU bias, realizing tightly coupled graph optimization between the lidar and the IMU; and LVI-SAM proposed by Tixiao Shan and Xin Tong enhances the adaptability of the SLAM system through factor-graph-based lidar-visual-inertial tight coupling.
The existing SLAM techniques (including the fusion methods above) still face serious challenges in common structured scenes (such as long corridors, regularly arranged shelves, open warehouses, and poorly lit passages). Vision sensors depend on illumination: in many structured scenes the lighting is poor (night, windowless areas, dim light) or changes sharply, so visual feature extraction becomes difficult or fails and the performance of visual SLAM degrades severely. Laser point cloud features are also deficient: in structured scenes the features between point cloud frames obtained by lidar scanning are highly similar and poorly distinguishable (for example, repeated line and plane structures), robust feature matching is difficult, and pose estimation (localization) accuracy drops. Unmanned equipment operating in such structured environments therefore needs a more stable and reliable SLAM solution. Aiming at this core pain point, the invention provides a multi-sensor fusion SLAM method designed specifically for unmanned equipment. The method deeply fuses the lidar and the inertial measurement unit, aims to effectively solve the loss of localization accuracy and the drift of accumulated error caused by the lack of features in laser SLAM in structured scenes, and ultimately improves the localization accuracy and map construction quality of unmanned equipment in complex structured environments, ensuring the stability and reliability of its autonomous operation.
Disclosure of Invention
In view of the above, the invention aims to provide a multi-sensor fusion simultaneous localization and mapping method and device for unmanned equipment, to solve the prior-art problems of sparse features in weak-texture regions of the environment and insufficient use of the geometric constraints of structured scenes. To achieve the above purpose, the invention adopts the following technical scheme. The invention provides a multi-sensor fusion simultaneous localization and mapping method for unmanned equipment, which comprises the following steps: S1, perform