
CN-115272730-B - Method for removing dynamic points from an autonomous mobile platform, and system and device thereof

CN115272730B

Abstract

A method for removing dynamic points for an autonomous mobile platform, and a system and device thereof. The method comprises: obtaining the image information and point cloud information collected at a current-frame timestamp and a previous-frame timestamp by a camera and a radar mounted on the autonomous mobile platform, so as to obtain current-frame image data, previous-frame image data, and current-frame point cloud data; performing feature-point processing on the current-frame image data and the previous-frame image data according to the relative pose of the autonomous mobile platform between the current-frame timestamp and the previous-frame timestamp, so as to determine a dynamic region in the current-frame image data; projecting the current-frame point cloud data onto the image plane of the camera and taking the points that fall within the dynamic region as potential dynamic points; and clustering the potential dynamic points and removing those that belong to the point cloud cluster closer to the camera.

Inventors

  • ZHANG ZE
  • LIU FUCHUN
  • ZHANG HAO

Assignees

  • Sunny Optical (Zhejiang) Research Institute Co., Ltd. (舜宇光学(浙江)研究院有限公司)

Dates

Publication Date
2026-05-08
Application Date
2021-04-14

Claims (16)

  1. A method for removing dynamic points for an autonomous mobile platform, comprising the steps of: S100, acquiring the image information and point cloud information collected simultaneously by a camera and a radar mounted on the autonomous mobile platform at a current-frame timestamp and a previous-frame timestamp, so as to obtain current-frame image data, previous-frame image data, and current-frame point cloud data, wherein the sampling frequency of the camera is the same as, and synchronized with, the detection frequency of the radar, and the current-frame timestamp and the previous-frame timestamp are both timestamps of the radar; S200, performing feature-point processing on the current-frame image data and the previous-frame image data according to the relative pose of the autonomous mobile platform between the current-frame timestamp and the previous-frame timestamp, wherein a transformation matrix of corresponding feature points between the current-frame image data and the previous-frame image data is calculated with the relative pose as an initial value, a perspective transformation is applied to the previous-frame image data according to the transformation matrix so as to obtain the intensity change of each pixel in the current-frame image data after the perspective transformation, and, by comparing the intensity change with an intensity threshold, a pixel region whose intensity change is greater than the intensity threshold is determined to be a dynamic region in the current-frame image data; S300, projecting the current-frame point cloud data onto the image plane of the camera, and taking the points projected into the dynamic region of the current-frame image data as potential dynamic points; and S400, clustering the potential dynamic points to obtain different point cloud clusters, calculating the average distance between each cluster and the camera according to the depth values of the clusters in the camera coordinate system, distinguishing the cluster closer to the camera from the cluster farther from the camera, and removing the potential dynamic points in the cluster closer to the camera.
  2. The method for removing dynamic points for an autonomous mobile platform according to claim 1, wherein the step S200 comprises the steps of: S210, performing dead reckoning on data collected by an inertial measurement unit and a wheel speed meter mounted on the autonomous mobile platform, so as to obtain the relative pose of the autonomous mobile platform between the current-frame timestamp and the previous-frame timestamp; S220, performing feature-point matching on the current-frame image data and the previous-frame image data according to the relative pose of the autonomous mobile platform, so as to obtain a transformation matrix of corresponding feature points between the current-frame image data and the previous-frame image data; and S230, determining a dynamic region in the current-frame image data through a perspective transformation according to the transformation matrix of the corresponding feature points.
  3. The method for removing dynamic points for an autonomous mobile platform according to claim 2, wherein the step S210 comprises the steps of: acquiring the pose of the autonomous mobile platform at the previous-frame timestamp as the previous-frame pose of the autonomous mobile platform; linearly interpolating the inertial data acquired via the inertial measurement unit and the wheel speed data acquired via the wheel speed meter, so as to obtain the wheel speed and the angular velocity at the current-frame timestamp; and integrating the wheel speed and the angular velocity between the previous-frame timestamp and the current-frame timestamp, starting from the previous-frame pose, so as to obtain the current-frame pose of the autonomous mobile platform and thereby the relative pose of the autonomous mobile platform.
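The dead reckoning of claim 3 can be sketched in a few lines. The sketch below is illustrative only: it assumes a planar unicycle motion model with scalar wheel speed and yaw rate, and every function name (`interpolate`, `integrate_pose`, `relative_pose`) is a hypothetical helper, not part of the claims.

```python
import math

def interpolate(t, t0, v0, t1, v1):
    """Linearly interpolate a measurement to timestamp t (t0 <= t <= t1)."""
    a = (t - t0) / (t1 - t0)
    return v0 + a * (v1 - v0)

def integrate_pose(pose, v, w, dt):
    """Integrate wheel speed v and angular velocity w over dt with a
    planar unicycle model; pose = (x, y, theta)."""
    x, y, th = pose
    if abs(w) < 1e-9:            # straight-line motion
        x += v * dt * math.cos(th)
        y += v * dt * math.sin(th)
    else:                        # circular-arc motion
        x += v / w * (math.sin(th + w * dt) - math.sin(th))
        y += v / w * (math.cos(th) - math.cos(th + w * dt))
    th += w * dt
    return (x, y, th)

def relative_pose(prev_pose, curr_pose):
    """Relative pose of the current frame w.r.t. the previous frame."""
    dx = curr_pose[0] - prev_pose[0]
    dy = curr_pose[1] - prev_pose[1]
    th = prev_pose[2]
    # rotate the world-frame displacement into the previous body frame
    rx = math.cos(th) * dx + math.sin(th) * dy
    ry = -math.sin(th) * dx + math.cos(th) * dy
    return (rx, ry, curr_pose[2] - prev_pose[2])
```

In practice the interpolated speeds would be integrated over many small sub-intervals between the two radar timestamps rather than in a single step.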
  4. The method for removing dynamic points for an autonomous mobile platform according to claim 2, wherein the step S220 comprises the steps of: filtering the current-frame image data and the previous-frame image data, respectively; extracting feature points from the filtered current-frame image data and the filtered previous-frame image data, so as to obtain the feature points in each; and calculating the transformation matrix of the corresponding feature points between the current-frame image data and the previous-frame image data, taking the relative pose of the autonomous mobile platform as an initial value.
  5. The method for removing dynamic points for an autonomous mobile platform of claim 4, wherein dynamic feature points in the current-frame image data are removed while the transformation matrix is calculated, through RANSAC model matching.
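The RANSAC step of claim 5 can be illustrated with a deliberately simplified model: the sketch below estimates only a 2D translation between matched feature points, whereas a real implementation would estimate a full perspective transformation (e.g. OpenCV's `cv2.findHomography` with the `cv2.RANSAC` flag). The function name and parameters are hypothetical; the outliers it reports play the role of the rejected dynamic feature points.

```python
import random

def ransac_translation(src, dst, thresh=2.0, iters=100, seed=0):
    """Estimate a 2D translation mapping src[i] -> dst[i] with RANSAC.
    Correspondences whose residual exceeds `thresh` are treated as
    dynamic feature points and reported as outliers."""
    rng = random.Random(seed)
    best_t, best_inliers = (0.0, 0.0), []
    for _ in range(iters):
        i = rng.randrange(len(src))              # minimal sample: 1 match
        tx = dst[i][0] - src[i][0]
        ty = dst[i][1] - src[i][1]
        inliers = [j for j, (s, d) in enumerate(zip(src, dst))
                   if abs(d[0] - s[0] - tx) + abs(d[1] - s[1] - ty) < thresh]
        if len(inliers) > len(best_inliers):     # keep the best consensus
            best_t, best_inliers = (tx, ty), inliers
    outliers = [j for j in range(len(src)) if j not in best_inliers]
    return best_t, outliers
```

The odometry prior of claim 4 would normally seed the model (and shrink the search) rather than start from scratch as here.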
  6. The method for removing dynamic points for an autonomous mobile platform according to claim 2, wherein the step S230 comprises the steps of: performing a perspective transformation on the previous-frame image data according to the transformation matrix of the corresponding feature points, so as to obtain the intensity change of each pixel in the current-frame image data after the perspective transformation; comparing the intensity change of each pixel in the current-frame image data after the perspective transformation with an intensity threshold; and determining the pixel region whose intensity change is greater than the intensity threshold as the dynamic region of the current-frame image data.
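The intensity test of claim 6 can be sketched as follows, with one strong simplification: the perspective transformation is replaced by an integer pixel shift, and images are plain nested lists of intensities. All names are illustrative, not part of the claims.

```python
def shift_image(img, dx, dy, fill=0):
    """Warp the previous frame toward the current frame.  The patent uses
    a full perspective warp; this sketch uses an integer pixel shift."""
    h, w = len(img), len(img[0])
    out = [[fill] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x - dx, y - dy
            if 0 <= sx < w and 0 <= sy < h:
                out[y][x] = img[sy][sx]
    return out

def dynamic_mask(prev_img, curr_img, dx, dy, thresh):
    """Per-pixel intensity change after warping; pixels whose change
    exceeds the threshold form the dynamic region."""
    warped = shift_image(prev_img, dx, dy)
    return [[abs(curr_img[y][x] - warped[y][x]) > thresh
             for x in range(len(curr_img[0]))]
            for y in range(len(curr_img))]
```

A moving object lights up both the pixels it left and the pixels it entered, which is why the claim speaks of a pixel *region* rather than isolated pixels.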
  7. The method for removing dynamic points for an autonomous mobile platform according to claim 1, wherein the step S300 comprises the steps of: projecting all laser points in the current-frame point cloud data onto the image plane of the camera, so as to obtain the pixel corresponding to each laser point in the current-frame image data; determining, for each laser point, whether its corresponding pixel lies in the dynamic region; and, in response to the pixel corresponding to a laser point lying in the dynamic region, taking that laser point as a potential dynamic point.
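The projection test of claim 7 amounts to a pinhole projection followed by a mask lookup. The sketch below assumes the laser points are already expressed in the camera frame (the radar-to-camera extrinsic transform is omitted) and uses hypothetical intrinsics `fx, fy, cx, cy`:

```python
def project_points(points, fx, fy, cx, cy, mask):
    """Project camera-frame laser points (X, Y, Z) through a pinhole model;
    a point whose pixel falls inside the dynamic mask is flagged as a
    potential dynamic point.  Returns the indices of flagged points."""
    h, w = len(mask), len(mask[0])
    potential = []
    for i, (X, Y, Z) in enumerate(points):
        if Z <= 0:                       # behind the image plane
            continue
        u = int(round(fx * X / Z + cx))  # column
        v = int(round(fy * Y / Z + cy))  # row
        if 0 <= u < w and 0 <= v < h and mask[v][u]:
            potential.append(i)
    return potential
```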
  8. The method for removing dynamic points for an autonomous mobile platform according to any one of claims 1 to 7, wherein the step S400 comprises the steps of: clustering the potential dynamic points corresponding to different dynamic regions, so as to obtain different point cloud clusters; calculating the average distance between each point cloud cluster and the camera according to the depth values in the current-frame point cloud data; and removing, from the current-frame point cloud data, the potential dynamic points within the point cloud cluster closer to the camera.
  9. A system for removing dynamic points for an autonomous mobile platform on which a camera and a radar are mounted, wherein the system comprises, communicatively connected to one another: a data acquisition module for acquiring the image information and point cloud information collected simultaneously by the camera and the radar at a current-frame timestamp and a previous-frame timestamp, so as to obtain current-frame image data, previous-frame image data, and current-frame point cloud data; a feature point processing module for performing feature-point processing on the current-frame image data and the previous-frame image data according to the relative pose of the autonomous mobile platform between the current-frame timestamp and the previous-frame timestamp, so as to determine a dynamic region in the current-frame image data; a point cloud projection module for projecting the current-frame point cloud data onto the image plane of the camera, so as to take the points projected into the dynamic region of the current-frame image data as potential dynamic points; and a clustering processing module for clustering the potential dynamic points to obtain different point cloud clusters, calculating the average distance between each cluster and the camera according to the depth values of the clusters in the camera coordinate system, distinguishing the cluster closer to the camera from the cluster farther from the camera, and removing the potential dynamic points in the cluster closer to the camera.
  10. The system for removing dynamic points for an autonomous mobile platform of claim 9, wherein the feature point processing module comprises a dead reckoning module, a feature point matching module, and a region determination module communicatively connected to one another, wherein the dead reckoning module is configured to perform dead reckoning on data collected via an inertial measurement unit and a wheel speed meter mounted on the autonomous mobile platform, so as to obtain the relative pose of the autonomous mobile platform between the current-frame timestamp and the previous-frame timestamp; the feature point matching module is configured to perform feature-point matching on the current-frame image data and the previous-frame image data according to the relative pose of the autonomous mobile platform, so as to obtain a transformation matrix of corresponding feature points between the current-frame image data and the previous-frame image data; and the region determination module is configured to determine a dynamic region in the current-frame image data through a perspective transformation according to the transformation matrix of the corresponding feature points.
  11. The system for removing dynamic points for an autonomous mobile platform of claim 10, wherein the dead reckoning module comprises a pose acquisition module, a linear interpolation module, and a velocity integration module communicatively connected to one another, wherein the pose acquisition module is configured to acquire the pose of the autonomous mobile platform at the previous-frame timestamp as the previous-frame pose of the autonomous mobile platform; the linear interpolation module is configured to linearly interpolate the inertial data and the wheel speed data, so as to obtain the wheel speed and the angular velocity at the current-frame timestamp; and the velocity integration module is configured to integrate the wheel speed and the angular velocity between the previous-frame timestamp and the current-frame timestamp, starting from the previous-frame pose, so as to obtain the current-frame pose of the autonomous mobile platform and thereby the relative pose of the autonomous mobile platform.
  12. The system for removing dynamic points for an autonomous mobile platform of claim 10, wherein the feature point matching module comprises a data filtering module, a feature point extraction module, and a matrix computation module communicatively connected to one another, wherein the data filtering module is configured to filter the current-frame image data and the previous-frame image data, respectively; the feature point extraction module is configured to extract feature points from the filtered current-frame image data and the filtered previous-frame image data, so as to obtain the feature points in each; and the matrix computation module is configured to calculate the transformation matrix of the corresponding feature points between the current-frame image data and the previous-frame image data, taking the relative pose of the autonomous mobile platform as an initial value.
  13. The system for removing dynamic points for an autonomous mobile platform of claim 10, wherein the region determination module comprises a perspective transformation module, a variation comparison module, and a region judgment module communicatively connected to one another, wherein the perspective transformation module is configured to perform a perspective transformation on the previous-frame image data according to the transformation matrix of the corresponding feature points, so as to obtain the intensity change of each pixel in the current-frame image data after the perspective transformation; the variation comparison module is configured to compare the intensity change of each pixel in the current-frame image data after the perspective transformation with an intensity threshold; and the region judgment module is configured to determine the pixel region whose intensity change is greater than the intensity threshold as the dynamic region of the current-frame image data.
  14. The system for removing dynamic points for an autonomous mobile platform according to any one of claims 10 to 13, wherein the point cloud projection module comprises a laser point projection module, a laser point determination module, and a dynamic point preliminary screening module communicatively connected to one another, wherein the laser point projection module is configured to project all laser points in the current-frame point cloud data onto the image plane of the camera, so as to obtain the pixel corresponding to each laser point in the current-frame image data; the laser point determination module is configured to determine, for each laser point, whether its corresponding pixel lies in the dynamic region; and the dynamic point preliminary screening module is configured to take a laser point as a potential dynamic point in response to the pixel corresponding to that laser point lying in the dynamic region.
  15. The system for removing dynamic points for an autonomous mobile platform of claim 14, wherein the clustering processing module comprises a point cloud clustering module, a distance calculation module, and a dynamic point removal module communicatively connected to one another, wherein the point cloud clustering module is configured to cluster the potential dynamic points corresponding to different dynamic regions, so as to obtain different point cloud clusters; the distance calculation module is configured to calculate the average distance between each point cloud cluster and the camera according to the depth values in the current-frame point cloud data; and the dynamic point removal module is configured to remove, from the current-frame point cloud data, the potential dynamic points within the point cloud cluster closer to the camera.
  16. An electronic device, comprising: a processor for executing program instructions; and a memory configured to hold program instructions executable by the processor to perform all or part of the steps of a method for removing dynamic points for an autonomous mobile platform, wherein the method comprises the steps of: S100, acquiring the image information and point cloud information collected simultaneously by a camera and a radar mounted on the autonomous mobile platform at a current-frame timestamp and a previous-frame timestamp, so as to obtain current-frame image data, previous-frame image data, and current-frame point cloud data, wherein the sampling frequency of the camera is the same as, and synchronized with, the detection frequency of the radar, and the current-frame timestamp and the previous-frame timestamp are both timestamps of the radar; S200, performing feature-point processing on the current-frame image data and the previous-frame image data according to the relative pose of the autonomous mobile platform between the current-frame timestamp and the previous-frame timestamp, so as to determine a dynamic region in the current-frame image data, wherein a transformation matrix of corresponding feature points between the current-frame image data and the previous-frame image data is calculated with the relative pose as an initial value, a perspective transformation is then applied to the previous-frame image data according to the transformation matrix so as to obtain the intensity change of each pixel in the current-frame image data after the perspective transformation, and, by comparing the intensity change with an intensity threshold, a pixel region whose intensity change is greater than the intensity threshold is determined to be a dynamic region in the current-frame image data; S300, projecting the current-frame point cloud data onto the image plane of the camera, and taking the points projected into the dynamic region of the current-frame image data as potential dynamic points; and S400, clustering the potential dynamic points to obtain different point cloud clusters, calculating the average distance between each cluster and the camera according to the depth values of the clusters in the camera coordinate system, distinguishing the cluster closer to the camera from the cluster farther from the camera, and removing the potential dynamic points in the cluster closer to the camera.

Description

Method for removing dynamic points from an autonomous mobile platform, and system and device thereof

Technical Field

The invention relates to the technical field of autonomous mobile platforms, and in particular to a method for removing dynamic points for an autonomous mobile platform, and a system and device thereof.

Background

Currently, with the development of artificial intelligence technology, more and more companies and university teams are focusing on research into autonomous mobile platforms. An autonomous mobile platform generally refers to a vehicle equipped with multiple sensors that can sense and move autonomously, and then complete corresponding tasks, such as patrolling or floor sweeping, through the task module it carries. Although the application scenarios of autonomous mobile platforms touch almost every aspect of daily life and such platforms have seen considerable development, no solution is currently available that works across all scenarios. In particular, during autonomous navigation the platform needs a map of the current environment to determine its pose within that environment. When the localization algorithm depends heavily on the map, the accuracy of map construction determines the localization accuracy, and dynamic points (or dynamic obstacles) appearing in the map degrade not only the localization accuracy but also the mapping accuracy during map building, so removing dynamic points (or dynamic obstacles) is essential. Existing dynamic point (or dynamic obstacle) detection schemes are typically based on a rasterized grid map and filter in the time domain.
For example, Cai Zixing et al. propose, in "Real-time detection of dynamic obstacles based on lidar", dividing the environment into a grid map. If an obstacle occurs in the same grid cell in three consecutive frames, i.e. the cell is occupied at three consecutive frame times, the obstacles in that cell are static; if the cell is occupied at two consecutive frame times, the obstacles in it are potential dynamic obstacles, and the states of the eight surrounding cells are then used to decide whether a potential dynamic obstacle is in fact static; and if the cell is occupied at only one frame time, the obstacles in it are dynamic. However, the drawbacks of this existing detection scheme are quite obvious. On one hand, it needs at least three consecutive frames of point cloud data to judge whether an obstacle is dynamic, and only after three frames of data have arrived can it remove the dynamic points of the middle frame, so it cannot truly meet the real-time requirement of map building. On the other hand, during map building any two frames of point clouds are registered against each other, and if dynamic points are present in the registered point clouds the registration accuracy drops noticeably; since such registration is unavoidable, applying the existing dynamic obstacle detection method during map building cannot fully remove the influence of dynamic points.
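The prior-art temporal filter described above can be sketched as a per-cell occupancy count over three consecutive frames; the eight-neighbour check for potential dynamic cells is omitted here, and the representation (sets of occupied cells) is illustrative only.

```python
def classify_grid(frames):
    """Prior-art temporal filter: each frame is the set of occupied grid
    cells.  A cell occupied in 3 consecutive frames is static, in 2 is a
    potential dynamic obstacle, in 1 is a dynamic obstacle."""
    assert len(frames) == 3              # the scheme needs 3 frames
    labels = {}
    for cell in set().union(*frames):
        n = sum(cell in f for f in frames)
        labels[cell] = {3: "static", 2: "potential", 1: "dynamic"}[n]
    return labels
```

The three-frame requirement made explicit by the assertion is exactly the latency drawback criticized in the paragraph above.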
Disclosure of Invention

An advantage of the invention is that the method for removing dynamic points for an autonomous mobile platform, and the system and device thereof, can eliminate the influence of dynamic points on point cloud registration, help remove the dynamic points in the point cloud information precisely, and thereby improve mapping accuracy. Another advantage of the present invention is to provide a method for removing dynamic points for an autonomous mobile platform, and a system and device thereof, wherein in an embodiment of the present invention the method can use a camera to compensate for some inherent drawbacks of radar, such as discretized sampling and the inability to match precisely. Another advantage of the present invention is to provide a method for removing dynamic points for an autonomous mobile platform, and a system and device thereof, wherein in an embodiment of the present invention the method can combine a camera and a radar to achieve a better dynamic point removal effect. Another advantage of the present invention is to provide a method for removing dynamic points for an autonomous mobile platform, and a system and device thereof, wherein in an embodiment of the present invention the method can remove dynamic points