CN-121977532-A - Mobile robot navigation method and system based on rotating imaging radar and satellite positioning
Abstract
The invention provides a mobile robot navigation method and system based on a rotating imaging radar and satellite positioning, in the technical field of mobile robot navigation. First, the rotating imaging radar, satellite positioning module and inertial measurement unit carried by the mobile robot are started, and three-dimensional point cloud data of the surrounding environment, satellite positioning data and inertial measurement data are acquired synchronously. Feature information is extracted and fused to generate positioning attitude data; static and dynamic features are distinguished in the three-dimensional point cloud data to generate a dynamic local environment map. When the satellite positioning signal fails, dead reckoning is performed by combining the three-dimensional point cloud data, the inertial measurement data and the positioning attitude data from before the failure, and the positioning attitude data are updated. The new positioning attitude data and the dynamic local environment map are combined with the target travel data to generate an initial navigation control instruction, which is dynamically adjusted as data collection continues, controlling the robot to complete the navigation. The invention adapts to complex dynamic environments and achieves high-precision, reliable navigation.
Inventors
- YIN JIHUI
Assignees
- 上海恩曌科技有限公司
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2026-03-24
Claims (10)
- 1. A mobile robot navigation method based on a rotating imaging radar and satellite positioning, the method comprising: starting the rotating imaging radar, a satellite positioning module and an inertial measurement unit carried by the mobile robot, the rotating imaging radar entering an omnidirectional continuous rotary scanning state to acquire three-dimensional point cloud data of the surrounding environment of the mobile robot, the satellite positioning module synchronously receiving positioning signals transmitted by satellites to generate satellite positioning data, and the inertial measurement unit synchronously sensing the motion state of the mobile robot to generate inertial measurement data; extracting feature information from the three-dimensional point cloud data, the satellite positioning data and the inertial measurement data, converting it into a unified coordinate system, establishing correlations among the feature information, and integrating all of the correlated feature information to generate positioning attitude data of the mobile robot; extracting, from the three-dimensional point cloud data continuously collected by the rotating imaging radar, static environment feature data and dynamic interference feature data of the surrounding environment, removing the dynamic interference feature data, integrating the remaining static environment feature data, arranging the integrated static environment feature data according to its actual spatial positions, and generating a dynamic local environment map of the surrounding environment of the mobile robot; identifying the transmission state of the satellite positioning signal from the satellite positioning data continuously collected by the satellite positioning module and, when the satellite positioning signal fails, performing dead reckoning from the three-dimensional point cloud data continuously collected by the rotating imaging radar and the inertial measurement data continuously collected by the inertial measurement unit, combined with the positioning attitude data generated before the failure, generating dead-reckoned positioning data, supplementing it into the positioning attitude data, and replacing the failed satellite-positioning data therein to form new positioning attitude data; combining the new positioning attitude data and the dynamic local environment map with the target travel data of the mobile robot, performing path planning and travel control to generate an initial navigation control instruction, continuously collecting subsequent three-dimensional point cloud data, satellite positioning data and inertial measurement data through the rotating imaging radar, the satellite positioning module and the inertial measurement unit, repeating the foregoing steps to generate updated positioning attitude data and an updated dynamic local environment map, dynamically adjusting the initial navigation control instruction according to the updated positioning attitude data and the updated dynamic local environment map to generate an adjusted navigation control instruction, and controlling the mobile robot to travel according to the adjusted navigation control instruction to complete the navigation operation.
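The top-level loop of claim 1 can be sketched as a single pose-update cycle: satellite positioning overrides the pose when a fix is available, and dead reckoning from IMU and radar-derived odometry takes over when the signal fails. This is an illustrative simplification, not the patented implementation; the names (`Pose`, `update_pose`, `odom_speed`) and the planar 2-D model are assumptions for clarity.

```python
from dataclasses import dataclass
import math

@dataclass
class Pose:
    x: float
    y: float
    heading: float  # radians, in a unified planar coordinate frame

def update_pose(prev: Pose, gnss_fix, imu_yaw_rate: float,
                odom_speed: float, dt: float) -> Pose:
    """One cycle of the navigation loop sketched in claim 1.

    gnss_fix: (x, y) tuple when the satellite signal is valid, else None.
    imu_yaw_rate: heading rate from the inertial measurement unit (rad/s).
    odom_speed: speed estimated from consecutive radar point clouds (m/s).
    """
    # heading is always propagated from the IMU
    heading = prev.heading + imu_yaw_rate * dt
    if gnss_fix is not None:
        # satellite signal valid: position comes from the fix
        x, y = gnss_fix
    else:
        # signal failed: dead-reckon from the last pose
        x = prev.x + odom_speed * dt * math.cos(heading)
        y = prev.y + odom_speed * dt * math.sin(heading)
    return Pose(x, y, heading)
```

In a real system the dead-reckoned position would be fused with the radar-based local map rather than integrated open-loop, but the hand-over between satellite positioning and dead reckoning follows the same pattern.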
- 2. The mobile robot navigation method based on the rotating imaging radar and satellite positioning according to claim 1, wherein extracting the feature information from the three-dimensional point cloud data, the satellite positioning data and the inertial measurement data, converting it into a unified coordinate system, establishing correlations among the feature information, and integrating all of the correlated feature information to generate the positioning attitude data of the mobile robot comprises: acquiring the three-dimensional point cloud data collected by the rotating imaging radar, the satellite positioning data generated by the satellite positioning module and the inertial measurement data sensed by the inertial measurement unit, and marking the acquisition time of each so that the three data streams share a synchronously aligned time reference; extracting environment contour feature data from the time-aligned three-dimensional point cloud data by analysing it point by point, extracting the edge, surface and vertex feature points of the static objects in the surrounding environment, and grouping all extracted feature points according to the actual spatial positions of the corresponding static objects to form an environment contour feature point set; extracting global position coordinate data from the time-aligned satellite positioning data by signal analysis, extracting the three-dimensional position of the mobile robot in the global spatial coordinate system, the three-dimensional position comprising an abscissa and an ordinate representing the horizontal position and elevation data representing the vertical position, and integrating the extracted position data to form complete global position coordinate information; extracting attitude change feature data from the time-aligned inertial measurement data by motion-state analysis, extracting the pitch angle, roll angle and heading angle change data of the mobile robot in motion, recording the continuous change track of each angle, and integrating the three kinds of angle change data with their tracks; defining a unified spatial coordinate system based on the spatial reference frame of the global position coordinate information, the definition comprising the origin position, coordinate axis directions and length units of the coordinate system; comparing the parameters of the local spatial coordinate system of the environment contour feature point set with those of the unified spatial coordinate system, calculating the origin deviation and axis direction deviation between the two, constructing a coordinate conversion relation from the calculated deviations, and substituting all feature point coordinates of the environment contour feature point set into the conversion relation to complete the conversion; comparing the parameters of the inertial coordinate system of the attitude change feature data with those of the unified spatial coordinate system, calculating the origin deviation and axis direction deviation between the two, constructing a second coordinate conversion relation from the calculated deviations, and substituting all angle change data of the attitude change feature data into that conversion relation to complete the conversion; associating the converted environment contour feature point set with the global position coordinate information, mapping each feature point to a specific coordinate point in the global position coordinate information to form a one-to-one correspondence between feature points and coordinate points and generate feature-coordinate association data; associating the converted attitude change feature data with the feature-coordinate association data, and correcting the coordinate points of the feature-coordinate association data using the angle change data and continuous change tracks to form association correction data; and, based on a preset redundant-data filtering rule, removing feature points with identical coordinates from the association correction data, removing angle change data that exceeds the sensor range or whose rate of change exceeds a preset threshold, and integrating the retained core data to generate the positioning attitude data of the mobile robot.
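The first step of claim 2, aligning the three sensor streams onto one time reference, amounts to matching each reference timestamp to the nearest sample of each stream. A minimal nearest-neighbour sketch follows; the function name, the tolerance value and the None-for-missing convention are illustrative assumptions, not taken from the patent.

```python
import bisect

def align_to_reference(ref_times, sensor_times, sensor_values, tol=0.05):
    """Nearest-neighbour time alignment of one sensor stream onto a
    reference time base. `sensor_times` must be sorted ascending.
    Returns one value per reference timestamp, or None when no sample
    falls within `tol` seconds of that timestamp."""
    out = []
    for t in ref_times:
        i = bisect.bisect_left(sensor_times, t)
        # the closest sample is either at index i or the one before it
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(sensor_times):
                if best is None or abs(sensor_times[j] - t) < abs(sensor_times[best] - t):
                    best = j
        if best is not None and abs(sensor_times[best] - t) <= tol:
            out.append(sensor_values[best])
        else:
            out.append(None)
    return out
```

With the radar scan times as the reference base, GNSS fixes and IMU samples aligned this way share one time reference before feature association, matching the claim's "synchronously aligned time reference".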
- 3. The mobile robot navigation method based on the rotating imaging radar and satellite positioning according to claim 1, wherein identifying the transmission state of the satellite positioning signal from the satellite positioning data continuously collected by the satellite positioning module and, when the satellite positioning signal fails, performing dead reckoning from the three-dimensional point cloud data continuously collected by the rotating imaging radar and the inertial measurement data continuously collected by the inertial measurement unit, combined with the positioning attitude data generated before the failure, generating dead-reckoned positioning data, supplementing it into the positioning attitude data, and replacing the failed satellite-positioning data therein to form new positioning attitude data comprises: acquiring the satellite positioning data continuously collected by the satellite positioning module, extracting its signal features, namely signal transmission strength, stability and continuity data, and integrating them to form satellite positioning signal feature data; processing the satellite positioning signal feature data with a preset signal quality evaluation algorithm, which outputs a transmission state judgment based at least on the deviation of the current signal strength from its historical mean, the variance of the signal stability, and the duration and frequency of interruptions in signal continuity; when the satellite positioning signal is judged to have failed, obtaining the positioning attitude data generated before the failure, extracting the pre-failure position coordinate data and pre-failure attitude change data from it, and integrating them to form dead reckoning reference data; acquiring the three-dimensional point cloud data continuously collected by the rotating imaging radar after the failure, extracting displacement features from it, namely the spatial position change and travel direction change track of the mobile robot over each continuous scan period of the radar, and integrating them to form relative displacement feature data; acquiring the inertial measurement data continuously collected by the inertial measurement unit after the failure, extracting attitude increments, namely the pitch, roll and heading angle increments of the mobile robot over each continuous scan period, recording the time sequence of each angle increment, and integrating the increments with their time sequences; substituting the spatial position changes of the relative displacement feature data into the coordinate conversion rule of the unified spatial coordinate system, converting them into displacement increment data with the same coordinate reference as the pre-failure position coordinates, and associating the converted displacement increments with the travel direction change track to form displacement increment data in the unified coordinate system; taking the pre-failure position coordinates in the dead reckoning reference data as the initial coordinates, accumulating the displacement increments of each scan period in sequence, calculating the instantaneous position coordinates of the mobile robot at the end of each scan period, and arranging all calculated instantaneous positions in scan-period order; correcting the attitude of each arranged instantaneous position with the extracted attitude increment data, adjusting the attitude parameters of each instantaneous position according to the attitude increments of the corresponding scan period so that each instantaneous position is accurately matched with its attitude parameters; extracting the real-time position coordinates and real-time attitude data of the mobile robot at the current moment from the matched instantaneous positions and attitude parameters, and integrating them to generate dead-reckoned positioning data; and substituting the dead-reckoned positioning data into the positioning attitude data after the failure, replacing the position coordinate and attitude data associated with the failed satellite positioning, retaining the other valid environment contour feature data and motion state data, and fully re-associating and integrating the positioning attitude data after the replacement to form the new positioning attitude data.
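The signal-quality gate in claim 3 judges failure from at least three indicators: drop from the historical mean strength, variance, and interruption duration. One hypothetical form of such a rule is sketched below; the function name and all threshold values (`mean_drop_ratio`, `var_limit`, `max_gap`) are assumptions chosen for illustration, since the patent only names the indicators.

```python
from statistics import mean, pvariance

def gnss_signal_failed(strengths, gap_seconds,
                       mean_drop_ratio=0.5, var_limit=25.0, max_gap=2.0):
    """Declare the satellite fix failed when the stream has been
    interrupted too long, the latest strength falls well below its
    historical mean, or the strength fluctuates too strongly.

    strengths: recent signal strength samples, oldest first (>= 2 values).
    gap_seconds: time since the last received fix.
    """
    if gap_seconds > max_gap:           # continuity: interruption too long
        return True
    hist, current = strengths[:-1], strengths[-1]
    if current < mean_drop_ratio * mean(hist):  # strength: large drop vs history
        return True
    return pvariance(strengths) > var_limit     # stability: excessive variance
```

Once this gate fires, the method switches to the dead reckoning chain described in the claim, seeded with the last pre-failure position and attitude.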
- 4. The mobile robot navigation method based on the rotating imaging radar and satellite positioning according to claim 1, wherein extracting static environment feature data and dynamic interference feature data of the surrounding environment from the three-dimensional point cloud data continuously collected by the rotating imaging radar, integrating the remaining static environment feature data after removing the dynamic interference feature data, arranging the integrated static environment feature data according to its actual spatial positions, and generating a dynamic local environment map of the surrounding environment comprises: acquiring the three-dimensional point cloud data continuously collected by the rotating imaging radar, ordering it by scan time to form an ordered three-dimensional point cloud data sequence, filtering the sequence, extracting features from the filtered sequence by analysing each point, extracting point coordinate data and point density data that reflect the characteristics of environmental objects, and integrating the extracted coordinate and density data to form environment feature raw data; classifying the environment feature raw data into static and dynamic classes based on the registration and comparison of consecutive point cloud frames, calculating the position movement vector of the environmental object corresponding to each point over the continuous scan period and the change in its point cloud cluster shape parameters, and dividing the environment feature raw data into static environment feature data and dynamic interference feature data according to preset static and dynamic thresholds; continuously tracking the feature points of the dynamic interference feature data, recording their spatial coordinate sequences over the continuous scan period, calculating their instantaneous movement speeds from the time differences of the coordinate sequences, and filtering the points judged dynamic out of the environment feature raw data; processing the remaining purely static environment feature data with a preset classification model of point cloud geometric features and spatial distribution, the model dividing the static data into ground feature data, wall feature data and fixed facility feature data according to the normal vectors, curvatures and heights of the points and their clustering in the vertical and horizontal directions; constructing the contour of the classified ground feature data by arranging all its points according to the actual spatial distribution of the ground and connecting their coordinate points with continuous lines to form a ground contour that reflects the undulation and extent of the ground; constructing the contours of the classified wall feature data and fixed facility feature data likewise, arranging and connecting the wall points according to the actual wall positions to form wall contours, and the fixed facility points according to the actual facility positions to form fixed facility contours; importing the ground, wall and fixed facility contours into the unified spatial coordinate system and spatially superposing and calibrating the three according to the spatial relations of the corresponding actual objects to form a preliminary environment contour set; repeating the feature extraction and contour construction on new three-dimensional point cloud data continuously collected by the rotating imaging radar to generate new ground, wall and fixed facility contour data, and supplementing the new contour data into the preliminary environment contour set; and removing contour data beyond the detection range of the mobile robot from the preliminary environment contour set, processing the supplemented and pruned contour set with a preset contour smoothing and patching algorithm that connects contour break points closer than a preset distance threshold and filters out contour protrusions smaller than a preset noise threshold, and fully integrating all optimised contour data to generate the dynamic local environment map.
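The static/dynamic split of claim 4 compares matched point-cloud clusters across consecutive scans against a preset movement threshold. A minimal sketch, assuming clusters have already been matched across frames and reduced to centroids (the cluster IDs, centroid representation and 0.2 m/s threshold are illustrative, not from the patent):

```python
def split_static_dynamic(prev_centroids, curr_centroids, dt, speed_thresh=0.2):
    """Label matched point-cloud clusters as static or dynamic by the
    speed of their centroid between two consecutive scans.

    prev_centroids, curr_centroids: {cluster_id: (x, y, z)} for the
    previous and current scan period.
    dt: scan period duration in seconds.
    """
    static, dynamic = [], []
    for cid, (px, py, pz) in prev_centroids.items():
        if cid not in curr_centroids:
            continue  # cluster lost between scans: cannot classify
        cx, cy, cz = curr_centroids[cid]
        speed = ((cx - px)**2 + (cy - py)**2 + (cz - pz)**2) ** 0.5 / dt
        (dynamic if speed > speed_thresh else static).append(cid)
    return static, dynamic
```

The clusters labelled dynamic are the "dynamic interference feature data" filtered out before contour construction; the static ones feed the ground/wall/facility classification.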
- 5. The mobile robot navigation method based on the rotating imaging radar and satellite positioning according to claim 1, wherein combining the new positioning attitude data and the dynamic local environment map with the target travel data of the mobile robot, performing path planning and travel control to generate the initial navigation control instruction, continuously collecting subsequent three-dimensional point cloud data, satellite positioning data and inertial measurement data through the rotating imaging radar, the satellite positioning module and the inertial measurement unit, repeating the foregoing steps to generate updated positioning attitude data and an updated dynamic local environment map, dynamically adjusting the initial navigation control instruction according to the updated data, generating an adjusted navigation control instruction, and controlling the mobile robot to travel according to the adjusted instruction to complete the navigation operation comprises: acquiring the new positioning attitude data, the dynamic local environment map and the target travel data of the mobile robot, and performing spatio-temporal synchronous calibration on them based on timestamps and the unified spatial coordinate system to form calibrated navigation base data, the target travel data comprising target point coordinates, a sequence of path key point coordinates and a target speed value; extracting the current position feature data and current attitude feature data of the mobile robot from the calibrated navigation base data, the current position feature data comprising a real-time abscissa, ordinate and elevation in the unified spatial coordinate system, and the current attitude feature data comprising real-time pitch, roll and heading angles; extracting travelable region feature data and obstacle feature data from the dynamic local environment map in the calibrated navigation base data, the travelable region feature data comprising the polygon vertex coordinates bounding the travelable region, and the obstacle feature data comprising the bounding box vertex coordinates of each obstacle and the size of its circumscribed cuboid; comparing the current position feature data with the target position coordinates, calculating the travel direction and distance from the current position to the target position, and, together with the travelable region feature data, defining a plurality of candidate travel paths within the calculated direction and distance range; analysing the spatial positions of the defined candidate paths against the obstacle feature data, analysing the spatial relation between each candidate path and the obstacles, removing candidate paths that spatially overlap any obstacle, and retaining those that keep a safe travel distance from all obstacles; adjusting the retained candidate paths under preset path smoothness and dynamics constraints, together with the current attitude feature data and the target trajectory and target speed requirements in the target travel data, calculating and adjusting the curvature and heading angle change rate of each candidate path so as to satisfy the minimum turning radius of the mobile robot and match its steering capability at the target speed; selecting, from the optimised candidate paths, the path with the shortest travel distance and the smallest attitude adjustment that fully meets the target trajectory requirements as the optimal travel path, and extracting all contour coordinates and travel parameters of the optimal path; generating the initial navigation control instruction from the contour coordinates and travel parameters of the optimal path, the instruction comprising travel direction adjustment data, travel speed control data and travel angle correction data for the mobile robot; continuously collecting subsequent three-dimensional point cloud data, satellite positioning data and inertial measurement data through the rotating imaging radar, the satellite positioning module and the inertial measurement unit, repeating all processing steps, and generating updated positioning attitude data and an updated dynamic local environment map; and correcting the travel direction adjustment data and travel angle correction data of the initial navigation control instruction according to the updated positioning attitude data, adjusting its travel speed control data and travel path planning data according to the updated dynamic local environment map, integrating all adjusted control data to generate the adjusted navigation control instruction, and transmitting it to the drive control unit of the mobile robot.
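Claim 5's candidate-path stage first rejects paths that come too close to obstacles, then selects among the survivors (shortest distance being one of the named criteria). The sketch below simplifies obstacles to 2-D points with a clearance radius instead of bounding boxes; all names and the 0.5 m clearance are illustrative assumptions.

```python
def filter_safe_paths(paths, obstacles, clearance=0.5):
    """Keep only candidate paths whose every waypoint stays at least
    `clearance` metres from every obstacle point.

    paths: list of paths, each a list of (x, y) waypoints.
    obstacles: list of (x, y) obstacle positions.
    """
    def safe(path):
        return all(((wx - ox)**2 + (wy - oy)**2) ** 0.5 >= clearance
                   for wx, wy in path for ox, oy in obstacles)
    return [p for p in paths if safe(p)]

def shortest_path(paths):
    """Among the remaining candidates, pick the shortest one, one of
    the selection criteria named in claim 5."""
    def length(path):
        return sum(((x2 - x1)**2 + (y2 - y1)**2) ** 0.5
                   for (x1, y1), (x2, y2) in zip(path, path[1:]))
    return min(paths, key=length)
```

The full selection in the claim also weighs attitude adjustment range and trajectory requirements, and applies curvature and heading-rate constraints before this final choice.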
- 6. The mobile robot navigation method based on the rotating imaging radar and satellite positioning according to claim 2, wherein comparing the parameters of the local spatial coordinate system of the environment contour feature point set with those of the unified spatial coordinate system, calculating the origin deviation and axis direction deviation between the two, constructing a coordinate conversion relation from the calculated deviations, and substituting all feature point coordinates of the environment contour feature point set into the conversion relation to complete the conversion comprises: acquiring the environment contour feature point set and the unified spatial coordinate system, and extracting the corresponding local spatial coordinate system parameters from the feature point set, namely the origin coordinates, axis directions and scale intervals of the local coordinate system, to form its complete parameter set; extracting the complete parameters of the unified spatial coordinate system likewise, namely its origin coordinates, axis directions and scale intervals; comparing the two complete parameter sets item by item, first comparing their origin coordinates and calculating and recording the three-dimensional coordinate deviation of the local origin relative to the unified origin; then comparing their axis directions, calculating and recording the angular deviation of each local axis relative to the corresponding unified axis; then comparing their scale intervals, calculating and recording the scale coefficient between the local and unified scale intervals; constructing the coordinate conversion relation from the recorded origin deviation, axis angle deviations and scale coefficient, the relation comprising an origin translation rule, an axis rotation rule and a scale scaling rule, integrated into one complete conversion formula; extracting the local coordinates of all feature points from the environment contour feature point set, extracting the three-dimensional coordinates of each feature point individually and arranging them in sequence to form a local coordinate sequence of the feature points; substituting each feature point coordinate of the local coordinate sequence into the constructed conversion formula and computing it in turn according to the origin translation, axis rotation and scale scaling rules; verifying each computed three-dimensional coordinate against the preset valid coordinate range of the unified spatial coordinate system, and iterating the conversion formula on any coordinate outside the valid range until it falls within the range; and re-associating the corrected unified coordinates of the feature points with the feature point grouping information of the environment contour feature point set, so that the unified coordinates of each feature point correspond to its original static object group, and integrating all associated data to complete the coordinate conversion of the environment contour feature point set.
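The origin-translation, axis-rotation and scale-scaling rules of claim 6 compose into a single similarity transform. A planar (2-D) slice of the 3-D case is enough to show the composition; the function name and the rotate-scale-translate ordering are illustrative assumptions.

```python
import math

def make_transform(origin_offset, yaw_offset, scale):
    """Build one planar point transform from the three rules of claim 6:
    rotate by the axis-direction deviation, scale by the scale-interval
    coefficient, then translate by the origin deviation.

    origin_offset: (ox, oy) offset of the local origin in the unified frame.
    yaw_offset: angular deviation between the frames, in radians.
    scale: ratio of local to unified scale intervals.
    """
    ox, oy = origin_offset
    c, s = math.cos(yaw_offset), math.sin(yaw_offset)
    def transform(pt):
        x, y = pt
        # rotate, scale, then translate into the unified frame
        return (scale * (c * x - s * y) + ox,
                scale * (s * x + c * y) + oy)
    return transform
```

Applying the returned function to every feature point in the local coordinate sequence corresponds to the per-point substitution into the "complete conversion formula" of the claim; the 3-D version adds a z axis and two more rotation angles but composes identically.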
- 7. The mobile robot navigation method based on the rotation imaging radar and the satellite positioning according to claim 3, wherein the obtaining three-dimensional point cloud data continuously collected by the rotation imaging radar after the satellite positioning signal fails, extracting displacement characteristics of the three-dimensional point cloud data, extracting spatial position variation and travel direction variation tracks of the mobile robot in a continuous scanning period of the rotation imaging radar, and integrating and sorting to form relative displacement characteristic data comprises the following steps: Three-dimensional point cloud data continuously collected by the rotary imaging radar after satellite positioning signals fail are obtained, the three-dimensional point cloud data are divided according to scanning periods of the rotary imaging radar, and the three-dimensional point cloud data collected in each scanning period are used as an independent data unit to form a three-dimensional point cloud data unit sequence which is arranged according to the scanning periods; extracting three-dimensional point cloud data units of two adjacent scanning periods from the three-dimensional point cloud data unit sequence, respectively marking the three-dimensional point cloud data units as a point cloud data unit of a previous period and a point cloud data unit of a later period, and synchronously importing the three-dimensional point cloud data units; performing environment characteristic matching on the previous period point cloud data unit and the later period point cloud data unit, extracting the same static environment characteristic points in the two data units as reference matching points, and recording the coordinate data of each reference matching point in the two data units; The coordinate data of the reference matching points are used as references, the relative positions of the reference matching points in two data units are compared, 
the displacement components of the mobile robot along each coordinate axis of the unified space coordinate system from the previous scanning period to the next scanning period are calculated, and all the displacement components are integrated to obtain the single-period space displacement between the adjacent scanning periods; repeating the reference matching and displacement calculation steps for all adjacent-period point cloud data units in the three-dimensional point cloud data unit sequence, calculating the single-period space displacement of each pair of adjacent scanning periods, and arranging all the single-period space displacements in scanning-period order; extracting the travel direction change track of the mobile robot from the arranged single-period space displacements, determining the travel direction of the mobile robot in each scanning period according to the three-dimensional coordinate change of each single-period space displacement, and connecting the travel directions of all the scanning periods in time sequence; processing the travel direction change track formed by the connection with a preset track filtering algorithm, removing discrete points in the track whose direction angle change exceeds a preset mutation threshold, and recording the scanning period and the space displacement corresponding to each direction change point; correlating the sequentially arranged single-period space displacements with the smoothed travel direction change track, so that the single-period space displacement of each scanning period corresponds to a specific direction point in the travel direction change track; calculating the accumulated space position change of the mobile robot from the satellite positioning signal failure to the current moment according to the associated displacements and travel direction data, and integrating the accumulated space position change with the single-period space
displacement quantities and the travel direction change track; all the integrated displacement data are converted into a uniform length unit, all the travel direction data are converted into a uniform angle unit, and the unit-unified displacement and direction data are formatted according to prescribed rules to form the relative displacement characteristic data.
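A minimal sketch of the per-period displacement estimation and accumulation in claim 7 (Python; a translation-only simplification that assumes the reference matching points are already paired between the two periods — the patent does not fix a particular matching or track-filtering algorithm, and the heading-jump filter below is illustrative):

```python
import math

def single_period_displacement(prev_pts, next_pts):
    """Estimate the robot's displacement over one scan period from the
    apparent motion of matched static reference points: in a static scene
    the points shift by the negative of the robot's own motion
    (translation only; rotation is ignored in this sketch)."""
    n = len(prev_pts)
    dx = sum(b[0] - a[0] for a, b in zip(prev_pts, next_pts)) / n
    dy = sum(b[1] - a[1] for a, b in zip(prev_pts, next_pts)) / n
    dz = sum(b[2] - a[2] for a, b in zip(prev_pts, next_pts)) / n
    return (-dx, -dy, -dz)

def accumulate(displacements, mutation_threshold_deg=45.0):
    """Chain per-period displacements into the accumulated position change
    since signal loss, skipping outlier periods whose heading jumps by more
    than the preset mutation threshold (the track-filtering step)."""
    pos = [0.0, 0.0, 0.0]
    prev_heading = None
    for d in displacements:
        heading = math.degrees(math.atan2(d[1], d[0]))
        if prev_heading is not None and abs(heading - prev_heading) > mutation_threshold_deg:
            continue  # discrete point removed by the track filter
        prev_heading = heading
        for i in range(3):
            pos[i] += d[i]
    return tuple(pos)
```

In practice the matching step would come from scan registration (e.g. an ICP-style alignment) rather than pre-paired lists, and the heading difference would need wrap-around handling near ±180°; both are omitted here for brevity.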
- 8. The mobile robot navigation method based on the rotation imaging radar and the satellite positioning according to claim 4, wherein performing feature data extraction on the signal-filtered three-dimensional point cloud data sequence, analyzing each point cloud datum in the three-dimensional point cloud data sequence point by point, extracting the point cloud coordinate data and the point cloud density data reflecting the features of environmental objects, and integrating and arranging the extracted point cloud coordinate data and point cloud density data to form the environment characteristic original data comprises the following steps: acquiring the signal-filtered three-dimensional point cloud data sequence, analyzing it point by point, and reading the three-dimensional space coordinate information of each point cloud datum, wherein the coordinate information comprises an abscissa value, an ordinate value and an elevation value, and recording the coordinate information of each point cloud datum independently; carrying out peripheral point cloud density statistics for each point cloud datum in the three-dimensional point cloud data sequence, defining a space region of a fixed range centered on each point cloud datum, and counting the number of other point cloud data contained in the space region as the point cloud density value of that point cloud datum; associating the recorded point cloud coordinate information and the counted point cloud density values one by one, so that the three-dimensional space coordinate information of each point cloud datum corresponds to a unique point cloud density value, forming associated data pairs of point cloud coordinates and densities; arranging all the associated data pairs in the original order of the three-dimensional point cloud data sequence, and removing the associated data pairs with
the point cloud density value lower than the preset effective density lower limit or higher than the preset effective density upper limit; based on the clustering relation of the space coordinates, aggregating into the same cluster the associated data pairs whose Euclidean distance is smaller than a preset distance threshold, each cluster of data corresponding to a potential environment object region and being given a cluster mark; carrying out cluster feature calculation on the associated data pairs under each cluster mark, calculating the three-dimensional coordinate mean vector, the covariance matrix and the average point cloud density of each cluster of data, and associating the calculated intra-cluster feature parameters with the cluster data; integrating the point cloud data associated with all the intra-cluster feature parameters, and retaining all the original associated data pairs in each cluster of data together with the corresponding intra-cluster feature parameters; converting the integrated point cloud coordinate data into coordinate values under the unified space coordinate system, and converting the point cloud density data into a unified statistical unit, so that the data are consistent in coordinate reference and statistical unit; rearranging the unit- and coordinate-unified point cloud data from near to far according to spatial position, so that the data reflect the distribution of the surrounding environment of the mobile robot from near to far; and carrying out complete integration on the rearranged point cloud data, retaining the coordinate information, density information and space grouping information of all the point cloud data, to form the environment characteristic original data.
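The density statistics, density gating and distance clustering of claim 8 can be illustrated with a small stand-in (Python; the greedy single-linkage merge and the parameter names are assumptions for illustration, not the claimed algorithm verbatim, and the O(n²) density count would be replaced by a spatial index on real point clouds):

```python
def density_filter_and_cluster(points, radius, d_min, d_max, link_dist):
    """For each point, count the other points inside a fixed-range
    neighbourhood (its point cloud density value), drop points whose
    density falls outside [d_min, d_max], then greedily merge remaining
    points whose Euclidean distance is below link_dist into one cluster."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    dens = [sum(1 for j, q in enumerate(points) if j != i and dist(p, q) <= radius)
            for i, p in enumerate(points)]
    kept = [p for p, d in zip(points, dens) if d_min <= d <= d_max]
    clusters = []
    for p in kept:
        for c in clusters:
            if any(dist(p, q) <= link_dist for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])  # no cluster close enough: start a new one
    return clusters

def cluster_mean(cluster):
    """Three-dimensional coordinate mean vector of one cluster."""
    n = len(cluster)
    return tuple(sum(p[k] for p in cluster) / n for k in range(3))
```

Each returned cluster corresponds to one "potential environment object region"; `cluster_mean` is one of the claimed intra-cluster feature parameters (the covariance matrix and average density would be computed the same way per cluster).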
- 9. The mobile robot navigation method based on rotational imaging radar and satellite positioning according to claim 5, wherein performing spatial position analysis on the defined plurality of candidate travel paths in combination with the obstacle feature data, analyzing the spatial position relation between each candidate travel path and each obstacle, removing the candidate travel paths that spatially overlap with an obstacle, and retaining the candidate travel paths that keep a safe travel distance from all the obstacles comprises: acquiring the defined plurality of candidate travel path data and the extracted obstacle feature data, extracting single candidate travel path data from the plurality of candidate travel path data, extracting all the contour coordinate points and the path width data of the path, and integrating them; extracting the feature data of a single obstacle from the obstacle feature data, extracting all the contour coordinate points, the position center coordinates and the outline dimension data of the obstacle, and integrating them; determining the space coverage of the path according to the contour coordinate points and the path width data of the single candidate travel path, and determining the space coverage of the obstacle according to the contour coordinate points and the outline dimension data of the obstacle; calculating the intersection area between the space coverage polygon of the path and the space coverage polygon of the obstacle, and if the intersection area is larger than zero, judging that spatial overlap exists and marking the candidate travel path as an unqualified path; when the path and the obstacle do not overlap spatially, calculating the minimum space distance between the path and the obstacle, and comparing the calculated minimum space distance with a preset static safety distance threshold, wherein if the minimum space distance is
smaller than the static safety distance threshold, the candidate travel path is marked as an unqualified path, and if the minimum space distance is larger than or equal to the static safety distance threshold, the candidate travel path is marked as a qualified path; repeating the spatial position analysis and qualification judgment steps for all single obstacles on the single candidate travel path data, retaining the path if the judgment results against all the obstacles are qualified, and removing the path if any one result is unqualified; repeating the screening steps for all single candidate travel path data in the plurality of candidate travel path data, completing the spatial position analysis and qualification judgment of each candidate travel path against all the obstacles, and recording the final screening result of each path; summarizing the qualified paths in all the screening results, extracting the complete data of all the qualified paths, including the path contour coordinate points, the path width data and the travel direction data, and integrating and organizing them; and uniformly converting the coordinate data of all the qualified paths into coordinate values under the unified space coordinate system so that the coordinate references of all the qualified paths are consistent, and sorting the coordinate-calibrated qualified path data from short to long according to travel distance.
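The screening loop of claim 9 can be sketched with simplified geometry (Python; paths are reduced to a centerline polyline plus a width, and obstacles to center-radius discs rather than full contour polygons, so the polygon intersection test collapses into a single clearance test — an assumption for brevity, not the claimed polygon method):

```python
def point_segment_dist(p, a, b):
    """Shortest 2-D distance from point p to segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / L2))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def screen_paths(paths, obstacles, safety_dist):
    """Keep only candidate paths whose clearance to every obstacle meets the
    static safety distance threshold. Each path is a (centerline, width)
    pair, each obstacle a (center, radius) disc; clearance <= 0 corresponds
    to spatial overlap, 0 < clearance < safety_dist to an unsafe near miss."""
    qualified = []
    for centerline, width in paths:
        ok = True
        for center, radius in obstacles:
            d = min(point_segment_dist(center, centerline[i], centerline[i + 1])
                    for i in range(len(centerline) - 1))
            clearance = d - radius - width / 2.0
            if clearance < safety_dist:  # covers both overlap and near miss
                ok = False
                break
        if ok:
            qualified.append((centerline, width))
    # sort surviving qualified paths from short to long by centerline length
    def length(cl):
        return sum(((cl[i + 1][0] - cl[i][0]) ** 2 + (cl[i + 1][1] - cl[i][1]) ** 2) ** 0.5
                   for i in range(len(cl) - 1))
    qualified.sort(key=lambda pw: length(pw[0]))
    return qualified
```

With real contour polygons, the overlap test would be an actual polygon intersection (e.g. via a computational geometry library) and the clearance a polygon-to-polygon minimum distance; the qualification logic and short-to-long ordering stay the same.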
- 10. A mobile robot navigation system based on rotational imaging radar and satellite positioning, comprising: a processor; and a machine-readable storage medium storing machine-executable instructions for the processor; wherein the processor is configured to perform the mobile robot navigation method based on rotational imaging radar and satellite positioning of any one of claims 1 to 9 via execution of the machine-executable instructions.
Description
Mobile robot navigation method and system based on rotation imaging radar and satellite positioning Technical Field The invention relates to the technical field of mobile robot navigation, in particular to a mobile robot navigation method and system based on a rotation imaging radar and satellite positioning. Background With the rapid development of mobile robot technology, the navigation capability of mobile robots has become a key factor determining their range of application and quality of performance. Currently, common mobile robot navigation methods rely primarily on a single type of sensor data. For example, although navigation based on satellite positioning can provide relatively accurate global position information for a mobile robot in an open environment, in special scenes such as urban canyons among dense high-rise buildings, indoor environments or underground spaces, satellite signals are easily blocked and positioning signals become invalid, so that continuous and reliable positioning information cannot be provided for the robot, and the accuracy and stability of navigation are seriously affected. Navigation based on laser radar and similar sensors can acquire local information about the robot's surroundings and construct a local map for navigation, but such sensors can only acquire environment data in a specific direction or within a limited range; in complex and changeable dynamic environments, surrounding dynamic interference factors are difficult to sense comprehensively and in time, and during the robot's movement, data loss or error accumulation easily occurs, affecting navigation precision and safety.
In addition, existing navigation methods often lack an effective data fusion and cooperative processing mechanism when processing sensor data, and cannot fully exploit the complementary advantages of different sensor data, so that high-precision and high-reliability navigation cannot be realized in a complex environment. Disclosure of Invention In view of the above-mentioned problems, according to a first aspect of the present invention, an embodiment of the present invention provides a mobile robot navigation method based on a rotational imaging radar and satellite positioning, the method comprising: starting a rotary imaging radar, a satellite positioning module and an inertial measurement component carried by the mobile robot, so that the rotary imaging radar enters an omnibearing continuous rotary scanning state to acquire three-dimensional point cloud data of the surrounding environment of the mobile robot, the satellite positioning module synchronously receives positioning signals transmitted by satellites to generate satellite positioning data, and the inertial measurement component synchronously senses the motion state of the mobile robot to generate inertial measurement data; extracting characteristic information from the three-dimensional point cloud data, the satellite positioning data and the inertial measurement data, completing unified coordinate system conversion, establishing mutual correlation of the characteristic information, and integrating all the correlated characteristic information to generate positioning attitude data of the mobile robot; according to the three-dimensional point cloud data continuously collected by the rotary imaging radar, extracting static environment characteristic data and dynamic interference characteristic data in the surrounding environment of the mobile robot, integrating the remaining static environment characteristic data after removing the dynamic interference characteristic data,
distributing and arranging the integrated static environment characteristic data according to their actual space positions, and generating a dynamic local environment map of the surrounding environment of the mobile robot; identifying the transmission state of the satellite positioning signals according to the satellite positioning data continuously collected by the satellite positioning module, and when the satellite positioning signals fail, executing dead reckoning processing according to the three-dimensional point cloud data continuously collected by the rotary imaging radar and the inertial measurement data continuously collected by the inertial measurement component, combining the positioning attitude data generated before the satellite positioning signals failed, generating dead reckoning positioning data and supplementing it into the positioning attitude data, and replacing the failed satellite-positioning-related data in the positioning attitude data to form new positioning attitude data; combining the new positioning attitude data and the dynamic local environment map with the target running data of the mobile robot, executing path planning and running control processing to generate an initial navigation control instruction, c