CN-121978678-A - UWB radar SAR-SLAM positioning and mapping method and system for vision degradation environment
Abstract
The invention discloses a UWB radar SAR-SLAM positioning and mapping method and system for vision degradation environments, comprising: S1, UWB radar perception and environment SAR image construction, in which UWB echo signals are coherently accumulated by a back projection algorithm together with odometer pose auxiliary information to generate high-resolution environment SAR subgraphs; S2, anti-noise environment feature extraction, in which the OS-CFAR and SIFT algorithms are fused to adaptively screen high-scattering key points and generate stable feature descriptors; S3, feature matching and relative pose resolving, in which the relative pose transformation between subgraphs is accurately resolved through bidirectional consistency screening and the RANSAC-Umeyama algorithm; and S4, odometer drift correction and SLAM positioning and mapping, in which accumulated drift is corrected from the resolved relative poses to realize high-precision autonomous positioning and globally consistent mapping in vision degradation environments. The invention realizes high-precision autonomous positioning and mapping in complex environments.
Inventors
- XIAO YAN
- LI JI
- ZHAO JIAHANG
- GAO XIANGCHUAN
- YAO JINGLI
Assignees
- 郑州联睿电子科技有限公司
- 郑州大学
Dates
- Publication Date: 2026-05-05
- Application Date: 2026-03-11
Claims (8)
- 1. A UWB radar SAR-SLAM positioning and mapping method for a vision degradation environment, characterized by comprising the following steps: S1, UWB radar perception and environment SAR image construction: utilizing the penetration characteristics of UWB radar in a vision degradation environment, performing coherent accumulation on UWB echo signals by combining a back projection algorithm with odometer pose auxiliary information, inverting the electromagnetic scattering characteristics of the environment, and generating a high-resolution environment SAR subgraph; S2, anti-noise environment feature extraction: aiming at the inherent speckle noise of the high-resolution environment SAR subgraph, fusing the ordered-statistics constant false alarm rate (OS-CFAR) detection and scale-invariant feature transform (SIFT) algorithms, adaptively screening high-scattering key points through OS-CFAR, generating stable feature descriptors through SIFT, and extracting robust environment features; S3, feature matching and relative pose resolving: introducing a geometric constraint checking mechanism in the feature matching stage, screening candidate matching pairs through bidirectional consistency, and performing geometric verification with the RANSAC-Umeyama algorithm to accurately resolve the relative pose transformation between high-resolution environment SAR subgraphs; S4, drift correction and SLAM positioning and mapping: correcting the accumulated drift of the odometer based on the resolved relative pose, thereby realizing high-precision autonomous positioning and globally consistent environment mapping of the mobile robot in vision degradation environments such as smoke, dense dust and low illumination.
- 2. The vision degradation environment-oriented UWB radar SAR-SLAM positioning and mapping method of claim 1, wherein the specific process of generating the high-resolution environment SAR subgraph by the back projection algorithm in step S1 is as follows: S11, discretizing the SAR image into a two-dimensional grid, wherein each pixel point represents the backscattering intensity of a local area of the environment; S12, calculating the Euclidean distance $R_{i,k}$ between any pixel point and the radar antenna position at the $k$-th observation, the Euclidean distance being calculated as $R_{i,k} = \sqrt{(x_i - x_k)^2 + (y_i - y_k)^2}$, wherein $(x_i, y_i)$ are the coordinates of the pixel point and $(x_k, y_k)$ are the radar antenna position coordinates at the $k$-th observation; S13, calculating from the Euclidean distance $R_{i,k}$ the sampling index of the echo signal in the fast time domain according to $n_{i,k} = \left\lfloor 2 R_{i,k} f_s / c \right\rfloor$, wherein $c$ is the speed of light and $f_s$ is the sampling frequency; S14, coherently superposing the signals of all observation positions within the aperture synthesized by the UWB radar, extracting the amplitude value at the corresponding distance index in each scanning signal and accumulating, to generate the high-resolution environment SAR subgraph according to the accumulation formula $I(x_i, y_i) = \sum_{k=1}^{K} s_k[n_{i,k}]$, wherein $K$ is the total number of scans within the synthetic aperture and $s_k$ is the $k$-th scanning signal; after traversing all measurement positions and all pixel points, the true position of the target is highlighted by the coherent superposition of energy, while noise is suppressed by incoherent superposition.
- 3. The vision degradation environment-oriented UWB radar SAR-SLAM positioning and mapping method according to claim 1, wherein the working parameters of the UWB radar in step S1 are a sampling frequency of 23.328 GHz, a center frequency of 7.29 GHz, a bandwidth of 1.4 GHz and a pulse amplitude of 1.0 V.
- 4. The vision degradation environment-oriented UWB radar SAR-SLAM positioning and mapping method of claim 2, wherein the specific process of extracting the anti-noise environment features in step S2 is as follows: S21, calculating the squared modulus of the complex data of the high-resolution environment SAR subgraph to generate a power spectrogram, enhancing the contrast between high-scattering targets and the background; S22, executing the OS-CFAR detection algorithm on the power spectrogram, wherein the OS-CFAR detector extracts a group of key points for each SAR image, the specific flow being $K_i = \mathrm{OSCFAR}\big(\{\, |I_i(p)|^2 \mid p \in \Omega \,\}\big) = \{(u_j, v_j)\}_{j=1}^{N_i}$, wherein $\Omega$ is the set of pixels to be detected of the SAR image, $K_i$ is the OS-CFAR detection result for the $i$-th SAR image, $\mathrm{OSCFAR}(\cdot)$ is the OS-CFAR detection operator whose bracketed input parameters are the pixels on which ordered-statistics constant false alarm rate detection is performed, $I_i(p)$ is the complex value of pixel $p$ in the $i$-th SAR image and its squared modulus $|I_i(p)|^2$ is the power value of that pixel, $(u_j, v_j)$ are the coordinates of the $j$-th key point in the detection result, and $N_i$ is the total number of detected key points; the OS-CFAR detection algorithm then adaptively screens key points with obvious scattering characteristics according to the judgment rule that pixel $p$ is declared a key point when $|I(p)|^2 > T$, wherein $|I(p)|^2$ is the power value of pixel point $p$, $T$ is the adaptive detection threshold, and $\Omega$ represents the set of all pixel points to be detected, covering all positions of the whole SAR image to be analyzed; the threshold is $T = \alpha \cdot P_{(k)}$, wherein $\alpha$ is the threshold scaling factor and $P_{(k)}$ is the $k$-th order statistic of the pixel power values in the background window; after the SAR images are detected by OS-CFAR, the obtained key point set contains the coordinates of all target points judged to be obviously scattering; S23, adding feature descriptors to the extracted key points with the scale-invariant feature transform SIFT algorithm: the gradient magnitude $m(x, y)$ of a pixel reflects the intensity of the brightness change, and the gradient argument $\theta(x, y)$ reflects the direction of the brightness change; the gradient magnitude and argument are the core components of the feature descriptors, which are statistical codes of the local gradient information, calculated as $m(x, y) = \sqrt{\big(L(x+1, y) - L(x-1, y)\big)^2 + \big(L(x, y+1) - L(x, y-1)\big)^2}$ and $\theta(x, y) = \arctan\dfrac{L(x, y+1) - L(x, y-1)}{L(x+1, y) - L(x-1, y)}$, wherein $L(x, y)$ is the gray value of pixel $(x, y)$; the feature descriptors corresponding to all key points are stored in the respective description sets, and the main directions corresponding to the key points are stored at the same time for subsequent matching of the high-resolution environment SAR subgraphs.
- 5. The vision degradation environment-oriented UWB radar SAR-SLAM positioning and mapping method of claim 4, wherein the specific process of the geometric constraint checking mechanism in step S3 is as follows: S31, calculating with the Dual-Softmax matcher the similarity between the feature descriptors $D_A$ and $D_B$ from the description sets, screening out the candidate matching pairs that satisfy the bidirectional consistency constraint, and generating the candidate matching set $M$, defined as $M = \mathrm{match}(D_A, D_B) = \{(a_j, b_j)\}$, wherein $\mathrm{match}(\cdot, \cdot)$ is the matching operator that takes two key point sets as input and outputs the set of candidate matching pairs satisfying the bidirectional consistency constraint, and each element $(a_j, b_j)$ indicates that the $a_j$-th key point of SAR image $A$ and the $b_j$-th key point of SAR image $B$ to be matched are mutual best matches; S32, performing robust estimation on the candidate matching pairs with the RANSAC random sample consensus algorithm: randomly drawing $k$ groups of matching pairs, computing the rigid transformation with the Umeyama algorithm, and counting the largest consensus set satisfying the residual threshold, the rigid transformation relationship being $q_j = R\, p_j + t$; this formula describes the spatial position relationship between two frames of high-resolution SAR subgraphs, mapping the key point coordinates $p_j$ of the previous frame, through the rotation matrix $R$ and the translation vector $t$, to the corresponding coordinates $q_j$ of the current frame; the rotation matrix $R$ and the translation vector $t$ are calculated as follows: construct the covariance matrix $H = \sum_j (p_j - \bar{p})(q_j - \bar{q})^{\mathsf{T}}$, wherein $\bar{p}$ and $\bar{q}$ are the centroids of the two point sets; perform SVD decomposition on $H$ to obtain $H = U \Sigma V^{\mathsf{T}}$, wherein $U$ is the left singular vector matrix and $V$ the right; then, while ensuring a right-handed rotation matrix, construct $R = V\, \mathrm{diag}\big(1, \det(V U^{\mathsf{T}})\big)\, U^{\mathsf{T}}$, and the translation vector $t = \bar{q} - R\, \bar{p}$; the residual of the rigid transformation $(R, t)$ over all matched pairs is calculated as $e_j = \lVert q_j - (R\, p_j + t) \rVert$, wherein $p_j$ and $q_j$ respectively denote the position vectors of the matching key points in the Cartesian coordinate system, and the inlier set $S = \{\, j \mid e_j < \tau \,\}$ satisfying the residual threshold $\tau$ is the largest consensus set; S33, based on the inlier set $S$, applying the Umeyama algorithm again for refined solving to obtain the final relative pose measurement $(R^{*}, t^{*})$.
- 6. A UWB radar SAR-SLAM positioning and mapping system for a vision degradation environment, characterized by comprising a mapping module and a positioning module, wherein the mapping module comprises a track estimation unit, a pulse compression unit and a back projection algorithm unit; the track estimation unit adopts the wheel odometer of the mobile robot as a short-baseline pose estimation source to provide pose information for SAR imaging; the pulse compression unit adopts a matched filtering technique to perform range compression on the UWB radar echo signals, performing a cross-correlation operation between the received echo signals and the transmit pulse template to maximize the signal-to-noise ratio; and the back projection algorithm unit performs coherent accumulation on the radar echo data based on a time-domain radar scanning-to-pixel mapping imaging technique, inverting the electromagnetic scattering characteristics of the environment and generating the high-resolution environment SAR subgraph.
- 7. The UWB radar SAR-SLAM positioning and mapping system for the vision degradation environment according to claim 6, characterized in that the positioning module comprises a subgraph rough matching unit, a feature extraction unit, and a feature matching and relative pose estimation unit, wherein the subgraph rough matching unit adopts a quick screening strategy based on geometric distance to determine potential loop-closure candidate subgraphs; the feature extraction unit adopts a hybrid feature extraction strategy combining OS-CFAR ordered statistics and SIFT to extract anti-noise features from the high-resolution environment SAR subgraphs; and the feature matching and relative pose estimation unit adopts a Dual-Softmax-based rough matching and RANSAC-Umeyama geometric constraint checking mechanism to calculate the relative pose transformation between high-resolution environment SAR subgraphs and correct the accumulated drift of the odometer.
- 8. The vision degradation environment-oriented UWB radar SAR-SLAM positioning and mapping system according to claim 6, wherein the parameters of the Gaussian pulse transmitted by the pulse compression unit are a sampling frequency of 23.328 GHz, a center frequency of 7.29 GHz, a bandwidth of 1.4 GHz and a pulse amplitude of 1.0 V.
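The back-projection accumulation of claim 2 (steps S11–S14) can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the rectangular grid, the aperture geometry, and the assumption of already range-compressed real-valued echoes are simplifications, and the function and parameter names are invented for the example.

```python
import numpy as np

def backprojection_subimage(echoes, antenna_positions, grid_x, grid_y, fs, c=3e8):
    """Coherently accumulate UWB echoes onto a 2-D pixel grid (claim 2, S11-S14).

    echoes            : (K, N) array, K range-compressed scans of N fast-time samples
    antenna_positions : (K, 2) array, radar antenna (x, y) at each observation
    grid_x, grid_y    : 1-D arrays defining the image grid (S11)
    fs                : fast-time sampling frequency in Hz
    """
    K, N = echoes.shape
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    image = np.zeros(X.shape, dtype=echoes.dtype)
    for k in range(K):
        xk, yk = antenna_positions[k]
        # S12: Euclidean distance from every pixel to the k-th antenna position
        R = np.sqrt((X - xk) ** 2 + (Y - yk) ** 2)
        # S13: round-trip delay -> fast-time sampling index n = floor(2*R*fs/c)
        n = np.floor(2.0 * R * fs / c).astype(int)
        valid = n < N
        # S14: accumulate each pixel's echo sample at its range index
        image[valid] += echoes[k, n[valid]]
    return image
```

At the true target pixel the per-scan samples align and add coherently, while at other pixels the indices wander across the scans, which is exactly the energy-focusing argument the claim makes.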
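The OS-CFAR screening of claim 4 (step S22) admits a similarly compact sketch. The square background window, the guard band, and the default parameter values below are illustrative assumptions; the claim fixes only the decision rule, namely that a pixel is kept when its power exceeds the threshold $T = \alpha \cdot P_{(k)}$ built from the $k$-th order statistic of the background window.

```python
import numpy as np

def os_cfar_2d(power, win=8, guard=2, k_frac=0.75, alpha=5.0):
    """OS-CFAR keypoint screening on an SAR power image (claim 4, S22).

    power  : 2-D float array of |pixel|^2 values (the power spectrogram of S21)
    win    : half-width of the square background window
    guard  : half-width of the guard region excluded around the cell under test
    k_frac : which order statistic of the background samples to use (0..1)
    alpha  : threshold scaling factor
    Returns the (row, col) coordinates of pixels declared high-scattering.
    """
    rows, cols = power.shape
    keypoints = []
    for r in range(win, rows - win):
        for c in range(win, cols - win):
            block = power[r - win:r + win + 1, c - win:c + win + 1].copy()
            # exclude the guard region (including the cell under test itself)
            block[win - guard:win + guard + 1, win - guard:win + guard + 1] = np.nan
            bg = np.sort(block[~np.isnan(block)])
            # k-th order statistic of the background window power values
            kth = bg[int(k_frac * (len(bg) - 1))]
            # adaptive threshold T = alpha * P_(k)
            if power[r, c] > alpha * kth:
                keypoints.append((r, c))
    return keypoints
```

Because an order statistic rather than the mean sets the threshold, a few strong scatterers inside the background window do not inflate it, which is what makes the detector robust to the speckle noise the claim targets.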
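The RANSAC-Umeyama solving of claim 5 (steps S32–S33) can be sketched for the 2-D rigid case as below. The minimal sample size of two pairs, the iteration count, and the residual threshold are illustrative choices, not values taken from the patent.

```python
import numpy as np

def umeyama_rigid(P, Q):
    """Closed-form rigid transform (R, t) with Q ~ R @ p + t (claim 5, S32).

    P, Q : (N, 2) matched keypoint coordinates from the previous / current subgraph.
    """
    mu_p, mu_q = P.mean(axis=0), Q.mean(axis=0)
    # covariance matrix H between the centred point sets
    H = (P - mu_p).T @ (Q - mu_q)
    U, _, Vt = np.linalg.svd(H)
    # enforce a right-handed rotation: R = V diag(1, det(V U^T)) U^T
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_q - R @ mu_p
    return R, t

def ransac_umeyama(P, Q, iters=200, thresh=0.05, seed=0):
    """RANSAC wrapper: sample minimal pairs, keep the largest inlier
    (consensus) set, then refine on all inliers (claim 5, S32-S33)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), size=2, replace=False)
        R, t = umeyama_rigid(P[idx], Q[idx])
        # residual e_j = ||q_j - (R p_j + t)|| for every candidate pair
        resid = np.linalg.norm(Q - (P @ R.T + t), axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # S33: refined solve on the largest consensus set
    R, t = umeyama_rigid(P[best_inliers], Q[best_inliers])
    return R, t, best_inliers
```

Two matched pairs suffice as a minimal sample here because a 2-D rigid transform has only three degrees of freedom (one rotation angle plus translation).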
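The matched-filter range compression of claim 6, with the Gaussian pulse parameters of claim 8, can be illustrated as follows. The envelope definition (setting the Gaussian width from the −3 dB bandwidth) and the pulse length are assumptions the patent does not specify.

```python
import numpy as np

def gaussian_pulse(fs=23.328e9, fc=7.29e9, bw=1.4e9, amp=1.0, n=256):
    """Gaussian-modulated sinusoid with the operating parameters of claim 8.

    The envelope width is derived from the -3 dB bandwidth, which is an
    assumption: the patent states the bandwidth but not the envelope model.
    """
    t = (np.arange(n) - n // 2) / fs
    sigma_t = np.sqrt(np.log(2)) / (np.pi * bw)  # -3 dB bandwidth -> time sigma
    return amp * np.exp(-t**2 / (2 * sigma_t**2)) * np.cos(2 * np.pi * fc * t)

def matched_filter_compress(echo, template):
    """Range compression by matched filtering (claim 6): cross-correlate the
    received echo with the transmit pulse template to maximise SNR."""
    # correlation is convolution with the time-reversed (conjugate) template
    return np.correlate(echo, template, mode="same")
```

The compressed output peaks at the target's round-trip delay, which is what the back projection unit then reads out at each pixel's range index.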
Description
UWB radar SAR-SLAM positioning and mapping method and system for vision degradation environment
Technical Field
The invention relates to the technical field of SLAM positioning and mapping, in particular to a UWB radar SAR-SLAM positioning and mapping method and system for a vision degradation environment.
Background
Simultaneous localization and mapping (SLAM) is a core technology for enabling a mobile robot to navigate autonomously and execute tasks in an unknown environment. Existing mainstream SLAM systems mainly depend on optical sensors such as LiDAR and vision cameras. LiDAR can provide high-precision, high-density environmental geometric information by virtue of its active time-of-flight (ToF) measurement mechanism, is little affected by illumination changes, and is widely applied in advanced SLAM systems. However, both LiDAR and vision cameras operate in or near the visible spectrum and are prone to failure in degraded scenarios such as smoke, dust, strong light reflection, or very low illumination. Sonar has some substitution potential in low-visibility environments (especially underwater), but sound waves are easily affected by refraction, and in extreme environments with high temperature gradients, such as fire rescue or mine operations, its ranging accuracy degrades markedly. Compared with optical and acoustic sensors, microwave radar has the natural advantage, owing to its longer wavelength, of penetrating smoke, dust, rain, snow and other adverse media, and has extremely strong environmental adaptability. Ultra-wideband (UWB) radar, as one kind of microwave radar, has extremely short nanosecond pulses, excellent penetration, high signal-to-noise ratio (SNR), low power consumption and millimeter-level range resolution, and has become an important candidate sensor for robust sensing in complex indoor environments.
In recent years, research introducing UWB radar into mobile robot SLAM systems has gradually increased, and it mainly falls into three technical routes. The first is the anchor-based scheme: the positions of obstacles are inferred from the ranging information and received signal strength (RSSI) of a UWB sensor relative to pre-deployed anchor points, combined with a non-line-of-sight (NLOS) recognition algorithm, to construct a 2D occupancy grid map; however, this approach depends heavily on pre-deployed infrastructure, has high deployment cost, lacks flexibility, and struggles to capture the full geometric detail of the environment. The second is the anchor-free point cloud scheme, which relies only on a mono-static UWB radar to extract an environmental feature point cloud by triangulation for positioning; however, limited by the multipath effects and angular resolution of UWB signals, the generated point cloud map contains many noise points, fine environmental features such as corners and concave-convex surfaces are difficult to distinguish, and the robustness of the system in non-ideal environments is insufficient. The third is the SAR imaging scheme, in which a virtual large aperture is synthesized from the radar's motion, significantly improving azimuth resolution; it can achieve high-precision imaging through glass walls and dense smoke, and the quality of the constructed map is markedly better than that of traditional sparse point cloud methods. However, although UWB-SAR imaging excels at mapping, how to use the generated SAR images for high-precision self-localization remains an open challenge.
Existing SAR imaging algorithms generally assume that the trajectory is known, or are extremely sensitive to trajectory errors, and lack an effective closed-loop feedback mechanism to correct accumulated odometer drift. In addition, the inherent speckle noise of UWB-SAR images poses a significant challenge to traditional image feature extraction and matching. Chinese patent document CN110849367B discloses an indoor positioning and navigation method based on visual SLAM fused with UWB, comprising: establishing a world coordinate system and deploying UWB base stations; mounting an RGB-D camera and a UWB transceiver on a robot; acquiring color and depth images during motion; acquiring the robot's position coordinates in real time by UWB trilateration; extracting and matching ORB feature points between adjacent images; calculating the camera's projected pose with an epipolar geometry algorithm; constructing a point cloud map by combining pose and depth information; and assisting loop detection with the UWB positioning information so as to correct cumulative errors and optimize the pose and map. In the