
CN-121979220-A - Robot environment sensing method and system based on multi-sensor fusion

CN121979220A

Abstract

The invention relates to the technical field of robot environment interaction, and in particular to a robot environment sensing method and system based on multi-sensor fusion, comprising the following steps: receiving a robot environment sensing instruction, and detecting the current environment using the target robot and the robot environment sensing instruction to obtain first frame data; constructing an optimal ground plane model and obtaining a non-ground obstacle point cloud coordinate set; predicting an obstacle predicted position set; performing obstacle scanning on the current environment to obtain a second frame tracking list; obtaining a destination coordinate, an obstacle safety function set and a complete obstacle track set based on the second frame tracking list; determining, according to the destination coordinate, the obstacle safety function set and the complete obstacle track set, that the mobile robot has completed its movement; and completing the multi-sensor-fusion-based robot environment sensing with the robot that has completed its movement. The invention can improve the perception navigation efficiency, the perception obstacle-avoidance accuracy and the autonomous perception intelligence of the robot in a complex dynamic environment.

Inventors

  • FU GUOHE

Assignees

  • 深圳悟时创新科技有限公司

Dates

Publication Date
2026-05-05
Application Date
2026-02-05

Claims (10)

  1. A method for sensing a robot environment based on multi-sensor fusion, the method comprising: confirming a target robot and a current environment, wherein the target robot comprises a three-dimensional laser radar, an inertial measurement unit and a vision sensor; receiving a robot environment sensing instruction, and detecting the current environment by using the target robot and the robot environment sensing instruction to obtain first frame data, wherein the first frame data comprises a point cloud coordinate set; constructing an optimal ground plane model based on the point cloud coordinate set, and acquiring a non-ground obstacle point cloud coordinate set based on the optimal ground plane model; predicting an obstacle predicted position set based on the non-ground obstacle point cloud coordinate set, and performing obstacle scanning on the current environment by using the target robot and the obstacle predicted position set to obtain a second frame tracking list; acquiring a destination coordinate, an obstacle safety function set and a complete obstacle track set based on the second frame tracking list, and determining, according to the destination coordinate, the obstacle safety function set and the complete obstacle track set, that the mobile robot has completed its movement; and completing the multi-sensor-fusion-based robot environment sensing with the robot that has completed its movement.
  2. The method for sensing the robot environment based on multi-sensor fusion according to claim 1, wherein the constructing the optimal ground plane model based on the point cloud coordinate set comprises: acquiring a maximum abscissa value, a maximum ordinate value and a maximum vertical coordinate value from the point cloud coordinate set, and constructing a three-dimensional point cloud space based on the maximum abscissa value, the maximum ordinate value and the maximum vertical coordinate value; dividing the three-dimensional point cloud space using a preset voxel size to obtain a voxel set, sequentially extracting point cloud coordinates from the point cloud coordinate set, calculating a coordinate voxel index based on the extracted point cloud coordinates and the voxel size, confirming a target voxel in the voxel set according to the coordinate voxel index, and assigning the extracted point cloud coordinates to the target voxel to obtain allocated voxels; summarizing the allocated voxels to obtain an allocated voxel set, wherein the allocated voxel set comprises a plurality of allocated voxels, and each allocated voxel comprises zero, one or a plurality of point cloud coordinates; sequentially extracting allocated voxels from the allocated voxel set, and judging whether point cloud coordinates exist in the extracted allocated voxel; if point cloud coordinates exist in the extracted allocated voxel, acquiring a voxel point cloud coordinate set of the extracted allocated voxel, calculating the geometric centroid coordinates of the voxel point cloud coordinate set, and replacing the voxel point cloud coordinate set with the geometric centroid coordinates to obtain updated point cloud coordinates; and summarizing the updated point cloud coordinates to obtain a down-sampled point cloud coordinate set, and constructing the optimal ground plane model based on the down-sampled point cloud coordinate set (an illustrative voxel down-sampling sketch follows the claims).
  3. The method for sensing the robot environment based on multi-sensor fusion according to claim 2, wherein the constructing the optimal ground plane model based on the down-sampled point cloud coordinate set comprises: randomly extracting a random point cloud coordinate set from the down-sampled point cloud coordinate set, and removing the random point cloud coordinate set from the down-sampled point cloud coordinate set to obtain a remaining point cloud coordinate set; constructing a plane model based on the extracted random point cloud coordinate set, calculating the plane distance between each remaining point cloud coordinate in the remaining point cloud coordinate set and the plane model to obtain a plane distance set, sequentially extracting plane distances from the plane distance set, and, if an extracted plane distance is smaller than a preset distance threshold, taking the remaining point cloud coordinate corresponding to the extracted plane distance as an in-plane point cloud coordinate; summarizing the in-plane point cloud coordinates to obtain an in-plane point cloud coordinate set corresponding to the plane distance set, and calculating the number of in-plane point cloud coordinates from the in-plane point cloud coordinate set; removing the in-plane point cloud coordinate set from the remaining point cloud coordinate set to obtain an updated point cloud coordinate set, and taking the updated point cloud coordinate set as the down-sampled point cloud coordinate set; returning to the step of randomly extracting a random point cloud coordinate set from the down-sampled point cloud coordinate set, and counting the number of times this step has been executed, until the number of executions equals a preset number of executions; and summarizing the numbers of in-plane point cloud coordinates to obtain an in-plane point cloud coordinate number set, and taking the plane model corresponding to the largest number of in-plane point cloud coordinates in the in-plane point cloud coordinate number set as the optimal ground plane model (a RANSAC-style sketch follows the claims).
  4. The method for sensing the robot environment based on multi-sensor fusion according to claim 3, wherein the acquiring the non-ground obstacle point cloud coordinate set based on the optimal ground plane model comprises: removing the in-plane point cloud coordinate set corresponding to the optimal ground plane model from the down-sampled point cloud coordinate set to obtain a non-ground point cloud coordinate set, sequentially extracting non-ground point cloud coordinates from the non-ground point cloud coordinate set, removing the extracted non-ground point cloud coordinates from the non-ground point cloud coordinate set to obtain a remaining non-ground point cloud coordinate set, and searching for a nearest neighbor coordinate set in the remaining non-ground point cloud coordinate set; calculating the nearest neighbor distance between the extracted non-ground point cloud coordinates and each nearest neighbor coordinate in the nearest neighbor coordinate set to obtain a nearest neighbor distance set, and calculating a nearest average distance based on the nearest neighbor distance set; summarizing the nearest average distances to obtain a nearest average distance set, calculating an average distance mean value and an average distance standard deviation based on the nearest average distance set, and calculating a standard deviation multiple threshold according to the average distance mean value and the average distance standard deviation; sequentially extracting a nearest average distance from the nearest average distance set, and, if the extracted nearest average distance is larger than the standard deviation multiple threshold, taking the non-ground point cloud coordinates corresponding to the extracted nearest average distance as outlier noise coordinates; and summarizing the outlier noise coordinates to obtain an outlier noise coordinate set, and removing the outlier noise coordinate set from the non-ground point cloud coordinate set to obtain the non-ground obstacle point cloud coordinate set (an illustrative outlier-removal sketch follows the claims).
  5. The method for sensing the robot environment based on multi-sensor fusion according to claim 4, wherein the predicting the obstacle predicted position set based on the non-ground obstacle point cloud coordinate set comprises: performing a clustering operation on the non-ground obstacle point cloud coordinate set to obtain an obstacle cluster set, wherein the obstacle cluster set comprises a plurality of obstacle clusters, and each obstacle cluster comprises a plurality of non-ground obstacle point cloud coordinates; sequentially extracting an obstacle cluster from the obstacle cluster set, and confirming three-dimensional bounding box parameters and obstacle centroid coordinates according to the plurality of non-ground obstacle point cloud coordinates in the obstacle cluster, wherein the three-dimensional bounding box parameters comprise a maximum transverse value, a minimum transverse value, a maximum longitudinal value, a minimum longitudinal value, a maximum vertical value and a minimum vertical value; constructing a three-dimensional bounding box using the three-dimensional bounding box parameters, and calculating the size of the obstacle according to the three-dimensional bounding box, wherein the size of the obstacle comprises the length of the obstacle, the width of the obstacle and the height of the obstacle; confirming static characteristic parameters of the obstacle based on the obstacle centroid coordinates and the size of the obstacle, and summarizing the static characteristic parameters of the obstacles to obtain an obstacle static characteristic parameter set; creating an obstacle tracker set according to the obstacle static characteristic parameter set, wherein the obstacle trackers correspond one to one to the obstacle static characteristic parameters; storing the obstacle tracker set to obtain a first frame tracking list, wherein the first frame tracking list comprises a plurality of tracked obstacles; and performing position prediction on each tracked obstacle in the first frame tracking list to obtain an obstacle predicted position set (an illustrative clustering sketch follows the claims).
  6. The method for sensing the environment of the robot based on multi-sensor fusion according to claim 5, wherein the performing obstacle scanning on the current environment by using the target robot and the obstacle predicted position set to obtain the second frame tracking list comprises: performing dynamic obstacle scanning on the current environment by using the target robot to obtain second frame data, wherein the second frame data comprises a plurality of current-frame detected obstacles; confirming a current observation position set from the second frame data, wherein the current-frame detected obstacles correspond one to one to the current observation positions; sequentially extracting a tracked obstacle from the first frame tracking list, and determining a target obstacle predicted position from the obstacle predicted position set according to the extracted tracked obstacle; combining the target obstacle predicted position with the plurality of current-frame detected obstacles to obtain a matching combination set, wherein the matching combination set comprises a plurality of matching combinations, and each matching combination comprises the target obstacle predicted position and a current observation position; calculating an association cost set based on the matching combination set, wherein the association costs correspond one to one to the matching combinations; confirming the minimum association cost in the association cost set; if the minimum association cost is smaller than a preset maximum association distance threshold, taking the matching combination corresponding to the minimum association cost as an optimal matching combination, removing the optimal matching combination from the plurality of current-frame detected obstacles and from the first frame tracking list respectively to obtain a plurality of updated detected obstacles and an updated first frame tracking list, taking the plurality of updated detected obstacles as the plurality of current-frame detected obstacles, taking the updated first frame tracking list as the first frame tracking list, and returning to the step of sequentially extracting a tracked obstacle from the first frame tracking list until all tracked obstacles in the first frame tracking list have been extracted; if the minimum association cost is greater than the maximum association distance threshold, taking the matching combination corresponding to the minimum association cost as an unmatched combination; summarizing the optimal matching combinations and the unmatched combinations respectively to obtain an optimal matching combination set and an unmatched combination set, confirming a new obstacle set and a lost tracked obstacle set from the unmatched combination set, calculating a loss count according to the lost tracked obstacle set, and storing the loss count into the first frame tracking list to obtain a to-be-updated tracking list; and obtaining an obstacle set at the current observation positions according to the optimal matching combination set, calculating a current observation obstacle centroid coordinate set based on the obstacle set at the current observation positions, inputting the current observation obstacle centroid coordinate set into a pre-constructed Kalman filter to obtain an updated tracking data set, initializing each new obstacle in the new obstacle set to obtain an initialized parameter set, and updating the to-be-updated tracking list by using the updated tracking data set and the initialized parameter set to obtain the second frame tracking list, wherein the second frame tracking list comprises a plurality of updated tracked obstacles (an illustrative greedy association sketch follows the claims).
  7. The method for sensing a robot environment based on multi-sensor fusion according to claim 6, wherein the acquiring the destination coordinates, the obstacle safety function set and the complete obstacle track set based on the second frame tracking list comprises: sequentially extracting an updated tracked obstacle from the plurality of updated tracked obstacles in the second frame tracking list, and acquiring the latest state vector according to the extracted updated tracked obstacle; confirming a predicted time period set, and executing the following operation for each predicted time period in the predicted time period set: calculating a next-moment predicted state vector based on the predicted time period and the latest state vector, acquiring a next-moment obstacle predicted position according to the next-moment predicted state vector, and summarizing the next-moment obstacle predicted positions to obtain a next-moment obstacle predicted position set corresponding to the predicted time period set; obtaining a historical obstacle position set according to the extracted updated tracked obstacle, and sorting the historical obstacle position set and the next-moment obstacle predicted position set in chronological order from earliest to latest to obtain a historical obstacle position sequence and a predicted obstacle position sequence, respectively; generating a historical obstacle moving track and a predicted obstacle moving track based on the historical obstacle position sequence and the predicted obstacle position sequence, and splicing the historical obstacle moving track and the predicted obstacle moving track to obtain a complete obstacle track; confirming the latest obstacle prediction coordinates according to the complete obstacle track, obtaining the obstacle radius according to the extracted updated tracked obstacle, and obtaining the current coordinates and the destination coordinates of the target robot; and constructing an obstacle safety function according to the latest obstacle prediction coordinates, the obstacle radius and the current coordinates of the target robot, and summarizing the obstacle safety functions and the complete obstacle tracks respectively to obtain the obstacle safety function set and the complete obstacle track set (an illustrative constant-velocity roll-out sketch follows the claims).
  8. The method of claim 7, wherein the determining that the mobile robot has completed its movement based on the destination coordinates, the obstacle safety function set and the complete obstacle track set comprises: constructing a two-dimensional grid map according to the current environment, and constructing a global optimal moving path according to the two-dimensional grid map, the complete obstacle track set, the current coordinates of the target robot and the destination coordinates; constructing constraint conditions, and inputting the obstacle safety function set and the constraint conditions into a pre-constructed objective function to obtain an objective function to be optimized; optimizing the objective function to be optimized to obtain a robot control instruction sequence, wherein the robot control instruction sequence comprises a plurality of robot control instructions, the robot control instructions correspond one to one to the updated tracked obstacles, and each robot control instruction comprises a linear speed and an angular speed; and executing a moving operation on the target robot according to the robot control instruction sequence and the global optimal moving path to obtain moving coordinates, and, when the moving coordinates are equal to the destination coordinates, taking the target robot whose moving coordinates are equal to the destination coordinates as the mobile robot that has completed its movement (an illustrative command-selection sketch follows the claims).
  9. The multi-sensor-fusion-based robot environment sensing method of claim 8, wherein the obstacle safety function is expressed as follows: h = ‖p_r − p_o‖ − (R_r + r_o + d_s), wherein h represents the obstacle safety function, p_r represents the current coordinates of the target robot, p_o represents the latest obstacle prediction coordinates, R_r represents the preset radius of the robot, r_o represents the radius of the obstacle, d_s represents the preset safety margin, and ‖ · ‖ represents the Euclidean norm (a numeric example follows the claims).
  10. A robot environment sensing system based on multi-sensor fusion, the system comprising: a sensor configuration module, used for confirming a target robot and a current environment, wherein the target robot comprises a three-dimensional laser radar, an inertial measurement unit and a vision sensor; a robot environment sensing module, used for receiving a robot environment sensing instruction, detecting the current environment by using the target robot and the robot environment sensing instruction to obtain first frame data, wherein the first frame data comprises a point cloud coordinate set, constructing an optimal ground plane model based on the point cloud coordinate set, and acquiring a non-ground obstacle point cloud coordinate set based on the optimal ground plane model; an obstacle extraction module, used for predicting an obstacle predicted position set based on the non-ground obstacle point cloud coordinate set, and performing obstacle scanning on the current environment by using the target robot and the obstacle predicted position set to obtain a second frame tracking list; and a robot perception execution module, used for acquiring the destination coordinates, the obstacle safety function set and the complete obstacle track set based on the second frame tracking list, determining, according to the destination coordinates, the obstacle safety function set and the complete obstacle track set, that the mobile robot has completed its movement, and completing the multi-sensor-fusion-based robot environment perception with the robot that has completed its movement.
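The sketches below illustrate, one claim at a time, how the operations recited in claims 2 to 9 could be realized. They are minimal illustrations under stated assumptions, not the patented implementation. First, the voxel down-sampling of claim 2: the points in each occupied voxel are replaced by their geometric centroid. The function name, the NumPy data layout and the default voxel size are assumptions introduced here.

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel_size: float = 0.1) -> np.ndarray:
    """Replace the points falling in each occupied voxel by their geometric centroid.

    points: (N, 3) array of point cloud coordinates; returns an (M, 3) array, M <= N.
    """
    # Coordinate voxel index: which voxel each point cloud coordinate falls into.
    origin = points.min(axis=0)
    voxel_idx = np.floor((points - origin) / voxel_size).astype(np.int64)

    # Group points that share a voxel index and average them (geometric centroid).
    _, inverse, counts = np.unique(voxel_idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.ravel()
    centroids = np.zeros((counts.size, 3))
    np.add.at(centroids, inverse, points)
    return centroids / counts[:, None]
```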
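Claim 3 describes a RANSAC-style search for the ground plane: repeatedly sample points, fit a candidate plane, count the in-plane points within a distance threshold, and keep the plane with the most inliers. The sketch below samples three points per iteration and keeps a fixed working set; the patent's variant additionally removes each iteration's inliers from the working set. The iteration count and distance threshold are illustrative values.

```python
import numpy as np

def fit_ground_plane(points, n_iters=100, dist_thresh=0.05, seed=None):
    """Return (normal, d) of the plane normal . p + d = 0 that has the most in-plane points."""
    rng = np.random.default_rng(seed)
    best_count, best_plane = 0, None
    for _ in range(n_iters):
        # Randomly extract three point cloud coordinates and fit a candidate plane model.
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample, try again
            continue
        normal /= norm
        d = -normal @ sample[0]
        # Plane distance of every point; points below the threshold are in-plane points.
        count = int((np.abs(points @ normal + d) < dist_thresh).sum())
        if count > best_count:
            best_count, best_plane = count, (normal, d)
    return best_plane
```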
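Claim 4 is a statistical outlier filter: for every non-ground point, the mean distance to its nearest neighbours is compared against the mean plus a multiple of the standard deviation of those averages. A sketch using SciPy's KD-tree; the neighbour count k and multiplier alpha are illustrative parameters, not values from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=8, alpha=2.0):
    """Drop points whose nearest average distance exceeds mean + alpha * std."""
    tree = cKDTree(points)
    # Query k+1 neighbours because the closest hit of each point is the point itself.
    dists, _ = tree.query(points, k=k + 1)
    nearest_avg = dists[:, 1:].mean(axis=1)                      # nearest average distance per point
    threshold = nearest_avg.mean() + alpha * nearest_avg.std()   # standard deviation multiple threshold
    return points[nearest_avg <= threshold]
```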
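Claim 5 clusters the obstacle points, derives an axis-aligned three-dimensional bounding box and a centroid per cluster, and seeds one tracker per obstacle. The claim does not name the clustering algorithm; DBSCAN is used here as one plausible choice, and eps, min_samples and the dictionary layout are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def extract_obstacles(points, eps=0.3, min_samples=5):
    """Return one static characteristic parameter dict (centroid + size) per obstacle cluster."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    obstacles = []
    for label in set(labels):
        if label == -1:                              # DBSCAN noise, not an obstacle cluster
            continue
        cluster = points[labels == label]
        lo, hi = cluster.min(axis=0), cluster.max(axis=0)   # three-dimensional bounding box parameters
        obstacles.append({
            "centroid": cluster.mean(axis=0),        # obstacle centroid coordinates
            "size": hi - lo,                         # obstacle length, width and height
        })
    return obstacles
```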
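Claim 6 associates each tracked obstacle's predicted position with the current-frame detections by repeatedly picking the minimum association cost, subject to a maximum association distance threshold; matched tracks are then corrected by a Kalman filter (not shown here). A greedy sketch with Euclidean distance as the association cost; the names and the threshold are assumptions.

```python
import numpy as np

def associate(predicted, detected, max_dist=1.0):
    """Greedy minimum-cost matching between predicted track positions and current observations.

    predicted: (T, 3) obstacle predicted positions; detected: (D, 3) current observation positions.
    Returns (matches, unmatched_track_ids, unmatched_detection_ids).
    """
    free_tracks, free_dets, matches = list(range(len(predicted))), list(range(len(detected))), []
    while free_tracks and free_dets:
        # Association cost for every remaining (track, detection) matching combination.
        cost = np.array([[np.linalg.norm(predicted[t] - detected[d]) for d in free_dets]
                         for t in free_tracks])
        ti, di = np.unravel_index(cost.argmin(), cost.shape)
        if cost[ti, di] > max_dist:          # minimum cost exceeds the association threshold: stop
            break
        matches.append((free_tracks.pop(ti), free_dets.pop(di)))
    return matches, free_tracks, free_dets
```

As the claim describes, unmatched detections would then seed new trackers and unmatched tracks would have their loss count incremented.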
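Claim 7 rolls each tracked obstacle forward over a set of prediction periods and splices the history with the prediction into a complete track. The claim does not spell out the state-vector layout or motion model; a planar constant-velocity model is assumed below, and all numbers are illustrative.

```python
import numpy as np

def predict_positions(state, horizons):
    """Constant-velocity roll-out of a tracked obstacle.

    state:    latest state vector [x, y, vx, vy]; horizons: prediction time periods in seconds.
    Returns one next-moment obstacle predicted position per horizon.
    """
    x, y, vx, vy = state
    return [np.array([x + vx * dt, y + vy * dt]) for dt in horizons]

# Splice the historical track and the predicted track into a complete obstacle track (oldest first).
history = [np.array([0.0, 0.0]), np.array([0.1, 0.0])]
predicted = predict_positions([0.2, 0.0, 1.0, 0.0], horizons=[0.1, 0.2, 0.3])
complete_track = history + predicted
```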
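Claim 8 plans a global path on a two-dimensional grid map and then optimizes an objective function, constrained by the obstacle safety functions, into a sequence of (linear speed, angular speed) commands. The claim does not name the optimizer; brute-force sampling of velocity candidates, rejecting any candidate that violates a safety function, is shown as one way such an objective could be evaluated. All parameters here are illustrative.

```python
import numpy as np

def select_command(pose, goal, safety_fns, v_max=1.0, w_max=1.0, dt=0.5):
    """Pick the (linear, angular) velocity that best trades goal progress against obstacle safety.

    pose: (x, y, heading) of the target robot; goal: destination coordinates (x, y);
    safety_fns: callables mapping a predicted (x, y) position to a safety value (> 0 means safe).
    """
    x, y, th = pose
    best_cmd, best_cost = (0.0, 0.0), np.inf
    for v in np.linspace(0.0, v_max, 11):
        for w in np.linspace(-w_max, w_max, 11):
            # Forward-simulate one step with the candidate command.
            nth = th + w * dt
            nx, ny = x + v * np.cos(nth) * dt, y + v * np.sin(nth) * dt
            cost = np.hypot(goal[0] - nx, goal[1] - ny)          # distance-to-goal term
            if any(fn((nx, ny)) <= 0.0 for fn in safety_fns):    # violates an obstacle safety constraint
                continue
            if cost < best_cost:
                best_cmd, best_cost = (v, w), cost
    return best_cmd
```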
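Finally, a numeric check of the obstacle safety function of claim 9, assuming the distance-minus-radii form given above; the coordinates and radii are made-up values.

```python
import numpy as np

def obstacle_safety(p_robot, p_obstacle, r_robot, r_obstacle, margin):
    """Positive when the robot clears the obstacle by more than the preset safety margin."""
    return np.linalg.norm(np.asarray(p_robot) - np.asarray(p_obstacle)) - (r_robot + r_obstacle + margin)

# Robot at (0, 0), latest obstacle prediction at (2.0, 1.5): distance 2.5, radii plus margin 0.9.
print(obstacle_safety([0.0, 0.0], [2.0, 1.5], r_robot=0.4, r_obstacle=0.3, margin=0.2))  # 1.6 > 0, safe
```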

Description

Robot environment sensing method and system based on multi-sensor fusion

Technical Field

The invention relates to the technical field of robot environment interaction, in particular to a robot environment sensing method and system based on multi-sensor fusion.

Background

Multi-sensor fusion is a technique that obtains more accurate, reliable and comprehensive information than a single sensor by comprehensively processing and analyzing data from multiple sensors of different types or of the same type. Robots are automated machines capable of sensing an environment, making decisions and performing actions. Environmental perception refers to the process by which a robot monitors, analyzes and understands its surroundings in real time through the various sensors it carries. In current robotic environment-awareness applications, multi-sensor fusion methods still face a number of challenges. In dynamic scenes, sensor data fluctuations caused by factors such as obstacle movement, illumination changes and personnel interference make it difficult for conventional fusion algorithms to adapt to environmental changes in real time, so the stability of the sensing result is insufficient. In addition, the prior art focuses on fusion at the environment perception level and has not established an effective mechanism for efficiently mapping the fusion result to the robot's perception decisions, so the robot's perception response lags and the perception efficiency is limited. Therefore, how to improve the perception navigation efficiency, the perception obstacle-avoidance accuracy and the autonomous perception intelligence of the robot in a complex dynamic environment is a technical problem that urgently needs to be solved.

Disclosure of Invention

The invention provides a robot environment sensing method based on multi-sensor fusion and a computer readable storage medium, whose main aim is to improve the perception navigation efficiency, perception obstacle-avoidance accuracy and autonomous perception intelligence of a robot in a complex dynamic environment.
In order to achieve the above object, the present invention provides a method for sensing a robot environment based on multi-sensor fusion, comprising: confirming a target robot and a current environment, wherein the target robot comprises a three-dimensional laser radar, an inertial measurement unit and a vision sensor; receiving a robot environment sensing instruction, and detecting the current environment by using the target robot and the robot environment sensing instruction to obtain first frame data, wherein the first frame data comprises a point cloud coordinate set; constructing an optimal ground plane model based on the point cloud coordinate set, and acquiring a non-ground obstacle point cloud coordinate set based on the optimal ground plane model; predicting an obstacle predicted position set based on the non-ground obstacle point cloud coordinate set, and performing obstacle scanning on the current environment by using the target robot and the obstacle predicted position set to obtain a second frame tracking list; acquiring a destination coordinate, an obstacle safety function set and a complete obstacle track set based on the second frame tracking list, and determining, according to the destination coordinate, the obstacle safety function set and the complete obstacle track set, that the mobile robot has completed its movement; and completing the multi-sensor-fusion-based robot environment sensing with the robot that has completed its movement.

Optionally, the constructing the optimal ground plane model based on the point cloud coordinate set includes: acquiring a maximum abscissa value, a maximum ordinate value and a maximum vertical coordinate value from the point cloud coordinate set, and constructing a three-dimensional point cloud space based on the maximum abscissa value, the maximum ordinate value and the maximum vertical coordinate value; dividing the three-dimensional point cloud space using a preset voxel size to obtain a voxel set, sequentially extracting point cloud coordinates from the point cloud coordinate set, calculating a coordinate voxel index based on the extracted point cloud coordinates and the voxel size, confirming a target voxel in the voxel set according to the coordinate voxel index, and assigning the extracted point cloud coordinates to the target voxel to obtain allocated voxels; summarizing the allocated voxels to obtain an allocated voxel set, wherein the allocated voxel set comprises a plurality of allocated voxels, and each allocated voxel comprises zero, one or a plurality of point cloud coordinates; sequentially extracting allocated voxels from the allocated voxel set, and obtaining a voxel point cloud coordinate set of the extracted allocated voxel; and, if point cloud coordinates exist in the extracted allocated voxel, calculating the geometric centroid coordinates of the voxel point cloud coordinate set, and replacing the voxel point cloud coordinate set with the geometric centroid coordinates to obtain updated point cloud coordinates.
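As a quick worked example of the coordinate voxel index calculation described above (the 0.2 m voxel size, the zero origin and the sample coordinate are illustrative values, not taken from the patent):

```latex
% Coordinate voxel index of p = (1.37, 0.52, 0.08) m with voxel size s = 0.2 m
% and point-cloud minimum corner p_min = (0, 0, 0):
i = \left\lfloor \frac{p - p_{\min}}{s} \right\rfloor
  = \bigl( \lfloor 1.37/0.2 \rfloor,\ \lfloor 0.52/0.2 \rfloor,\ \lfloor 0.08/0.2 \rfloor \bigr)
  = (6,\ 2,\ 0)
```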