CN-121999258-A - Three-dimensional terrain recognition method, device, equipment and computer readable storage medium
Abstract
The application discloses a three-dimensional terrain identification method, device, equipment and computer readable storage medium, and belongs to the field of computer technology. The method comprises: obtaining a first depth image of a target area at a first moment, collected by a first image collection module, a first view image of the target area at the first moment, collected by a second image collection module, and first inertial data collected by an inertial sensor; dynamically compensating the first depth image and the first view image based on the first inertial data to obtain a compensated second depth image and a compensated second view image; generating point cloud data of the target area according to the second depth image and the second view image; and extracting topographic features of the target area according to the point cloud data, wherein the topographic features are used for representing the topography of the target area. The topographic features of the target area extracted by this method have higher accuracy.
Inventors
- Request for anonymity
- Request for anonymity
Assignees
- 极壳科技(上海)有限公司
Dates
- Publication Date: 2026-05-08
- Application Date: 2024-11-04
Claims (13)
- 1. A method of three-dimensional terrain identification, the method comprising: acquiring a first depth image of a target area at a first moment, acquired by a first image acquisition module, a first view image of the target area at the first moment, acquired by a second image acquisition module, and first inertial data acquired by an inertial sensor, wherein the first depth image is used for indicating depth information of each pixel point of the target area at the first moment, the first view image is used for indicating color information of each pixel point of the target area at the first moment, and the first inertial data is used for indicating motion information of the first image acquisition module and the second image acquisition module at the first moment; dynamically compensating the first depth image and the first view image based on the first inertial data to obtain a compensated second depth image and a compensated second view image; generating point cloud data of the target area according to the second depth image and the second view image; and extracting topographic features of the target area according to the point cloud data, wherein the topographic features are used for representing the topography of the target area.
- 2. The method of claim 1, wherein the first inertial data comprises attitude data and acceleration; the dynamically compensating the first depth image and the first view image based on the first inertial data to obtain a compensated second depth image and a compensated second view image includes: acquiring a third depth image and a third view image according to the attitude data, the first depth image and the first view image, wherein the third depth image and the third view image are located in a reference coordinate system; and, according to the acceleration, performing height compensation on the third depth image to obtain the second depth image, and performing height compensation on the third view image to obtain the second view image.
- 3. The method of claim 2, wherein the acquiring a third depth image and a third view image from the attitude data, the first depth image and the first view image comprises: processing the first depth image to obtain a reference depth image after denoising the first depth image; processing the first view image to obtain a reference view image after denoising the first view image and extracting features; determining a rotation matrix according to the attitude data, wherein the rotation matrix is used for converting the reference depth image and the reference view image into the reference coordinate system; converting the reference depth image according to the rotation matrix to obtain the third depth image; and converting the reference view image according to the rotation matrix to obtain the third view image.
- 4. A method according to claim 3, wherein said converting said reference depth image according to said rotation matrix to obtain said third depth image comprises: acquiring position information and depth information of each first pixel point in the reference depth image; updating the position information and the depth information of each first pixel point based on the rotation matrix to obtain the updated position information and depth information of each first pixel point; and generating the third depth image according to the updated position information and depth information of each first pixel point.
- 5. The method of claim 2, wherein performing the height compensation on the third depth image according to the acceleration to obtain the second depth image includes: determining a height change value of the inertial sensor in a reference direction according to the acceleration; and carrying out height compensation on each pixel point in the third depth image according to the height change value to obtain the second depth image.
- 6. The method of any one of claims 1 to 5, wherein generating the point cloud data of the target area from the second depth image and the second view image comprises: determining three-dimensional coordinates of each pixel point in the second depth image according to the depth information of each pixel point in the second depth image and the intrinsic parameters of the first image acquisition module; acquiring color information of each pixel point in the second view image; and generating point cloud data of the target area according to the three-dimensional coordinates of each pixel point in the second depth image and the color information of each pixel point in the second view image.
- 7. The method of any one of claims 1 to 5, wherein the topographic features comprise a terrain gradient; the extracting the topographic features of the target area according to the point cloud data comprises: performing plane fitting on the point cloud data to obtain a plane equation of the target area; determining a normal vector of the target area according to the plane equation; and determining the included angle between the normal vector and the horizontal plane of the target area as the terrain gradient of the target area.
- 8. The method of any one of claims 1 to 5, wherein the topographic features include position information of an obstacle; the extracting the topographic features of the target area according to the point cloud data comprises: segmenting the point cloud data to obtain ground points and non-ground points; performing cluster analysis on the non-ground points to obtain at least one group of non-ground points; for any group of non-ground points, determining target position information based on the position information of that group of non-ground points if the volume of that group of non-ground points is greater than a volume threshold; and determining the target position information as position information of an obstacle included in the target area.
- 9. The method according to any one of claims 1 to 5, further comprising: determining the terrain material of the target area according to the first view image.
- 10. A three-dimensional terrain identification device, the device comprising: an acquisition module, configured to acquire a first depth image of a target area at a first moment acquired by a first image acquisition module, a first view image of the target area at the first moment acquired by a second image acquisition module, and first inertial data acquired by an inertial sensor, wherein the first depth image is used for indicating depth information of each pixel point of the target area at the first moment, the first view image is used for indicating color information of each pixel point of the target area at the first moment, and the first inertial data is used for indicating motion information of the first image acquisition module and the second image acquisition module at the first moment; a dynamic compensation module, configured to dynamically compensate the first depth image and the first view image based on the first inertial data to obtain a compensated second depth image and a compensated second view image; a generation module, configured to generate point cloud data of the target area according to the second depth image and the second view image; and an identification module, configured to extract topographic features of the target area according to the point cloud data, wherein the topographic features are used for representing the topography of the target area.
- 11. A computer device comprising a processor and a memory, the memory having stored therein at least one program code, the at least one program code being loaded and executed by the processor to cause the computer device to implement the three-dimensional terrain identification method of any of claims 1 to 9.
- 12. A computer readable storage medium having stored therein at least one program code, the at least one program code being loaded and executed by a processor to cause a computer to implement the three-dimensional terrain identification method of any of claims 1 to 9.
- 13. A computer program product, characterized in that it has stored therein at least one computer instruction that is loaded and executed by a processor to cause the computer to implement the three-dimensional terrain recognition method according to any of claims 1 to 9.
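Claim 7's gradient extraction (plane fitting, normal vector, angle computation) can be sketched as a least-squares fit. This is an illustrative NumPy version, not the patent's implementation; the function name `terrain_gradient` is invented here, and the sketch reports the plane's tilt from the horizontal (equivalently, the normal's angle from the vertical), whose complement is the angle between the normal and the horizontal plane named in the claim.

```python
import numpy as np

def terrain_gradient(points):
    """Fit a plane z = a*x + b*y + c to an (N, 3) point cloud by least
    squares, then return the plane's tilt from horizontal in degrees."""
    # Design matrix [x, y, 1]; solve for [a, b, c] minimizing |A@p - z|^2.
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    (a, b, _), *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    # Unit normal of the fitted plane a*x + b*y - z + c = 0.
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    # Tilt = angle between the normal and the vertical axis.
    return np.degrees(np.arccos(abs(normal[2])))
```

For example, points sampled from a plane rising at 30 degrees along x yield a gradient of 30 degrees.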
Description
Three-dimensional terrain recognition method, device, equipment and computer readable storage medium

Technical Field

The embodiments of the present application relate to the field of computer technology, and in particular to a three-dimensional terrain identification method, a three-dimensional terrain identification device, three-dimensional terrain identification equipment and a computer readable storage medium.

Background

With the continuous development of computer technology, the robot industry is also developing rapidly, and robots are being applied in ever wider fields; for example, a robot can be worn on the outside of a user's body to enhance the user's strength, agility and endurance. The wide application of robots in outdoor scenarios such as rehabilitation, industrial safety and tourism makes a robot's adaptability to various terrains one of its key technologies. There is therefore a need for a three-dimensional terrain recognition method that recognizes terrain features, so that a robot can be better controlled according to those features and adapt to various terrains.

Disclosure of Invention

The embodiments of the present application provide a three-dimensional terrain identification method, device, equipment and computer readable storage medium, which can be used to determine the topographic features of a target area.
The technical scheme is as follows: in one aspect, an embodiment of the present application provides a three-dimensional terrain identification method, including: acquiring a first depth image of a target area at a first moment, acquired by a first image acquisition module, a first view image of the target area at the first moment, acquired by a second image acquisition module, and first inertial data acquired by an inertial sensor, wherein the first depth image is used for indicating depth information of each pixel point of the target area at the first moment, the first view image is used for indicating color information of each pixel point of the target area at the first moment, and the first inertial data is used for indicating motion information of the first image acquisition module and the second image acquisition module at the first moment; dynamically compensating the first depth image and the first view image based on the first inertial data to obtain a compensated second depth image and a compensated second view image; generating point cloud data of the target area according to the second depth image and the second view image; and extracting topographic features of the target area according to the point cloud data, wherein the topographic features are used for representing the topography of the target area.
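The point-cloud-generation step above (recovering three-dimensional coordinates from the compensated depth image and attaching color from the view image) can be sketched as follows. This is a minimal illustrative NumPy version under a standard pinhole-camera assumption; the function name `depth_to_point_cloud` and the intrinsic parameters `fx`, `fy`, `cx`, `cy` are named here for illustration and are not taken from the patent.

```python
import numpy as np

def depth_to_point_cloud(depth, color, fx, fy, cx, cy):
    """Back-project a depth image into a colored point cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth.
    Returns an (N, 6) array [X, Y, Z, R, G, B] for pixels with valid depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0                      # drop pixels with no depth reading
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    xyz = np.stack([x, y, z], axis=1)
    rgb = color[valid].astype(float)       # color image aligned to depth image
    return np.hstack([xyz, rgb])
```

The sketch assumes the view image is already registered pixel-for-pixel with the depth image, which the claimed compensation into a common reference frame is intended to provide.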
In one possible implementation, the first inertial data includes attitude data and acceleration; the dynamically compensating the first depth image and the first view image based on the first inertial data to obtain a compensated second depth image and a compensated second view image includes: acquiring a third depth image and a third view image according to the attitude data, the first depth image and the first view image, wherein the third depth image and the third view image are located in a reference coordinate system; and, according to the acceleration, performing height compensation on the third depth image to obtain the second depth image, and performing height compensation on the third view image to obtain the second view image. In one possible implementation, the acquiring a third depth image and a third view image according to the attitude data, the first depth image and the first view image includes: processing the first depth image to obtain a reference depth image after denoising the first depth image; processing the first view image to obtain a reference view image after denoising the first view image and extracting features; determining a rotation matrix according to the attitude data, wherein the rotation matrix is used for converting the reference depth image and the reference view image into the reference coordinate system; converting the reference depth image according to the rotation matrix to obtain the third depth image; and converting the reference view image according to the rotation matrix to obtain the third view image.
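The two compensation ingredients described above — a rotation matrix derived from the attitude data, and a height change derived from the acceleration — can be sketched as follows. This is an illustrative version only: the roll-pitch-yaw convention, the function names, and the assumption of zero initial vertical velocity in the height estimate are all choices made here, not details stated in the patent.

```python
import numpy as np

def rotation_from_attitude(roll, pitch, yaw):
    """Build R = Rz(yaw) @ Ry(pitch) @ Rx(roll), mapping sensor-frame
    vectors into the reference frame. Angles are in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def height_change(accel_z, dt):
    """Estimate vertical displacement over one sample interval by double
    integration of the gravity-compensated vertical acceleration,
    assuming zero initial vertical velocity: dh = 0.5 * a_z * dt**2."""
    return 0.5 * accel_z * dt ** 2
```

A 90-degree yaw, for instance, maps the reference x-axis onto the y-axis, and a 2 m/s² upward acceleration sustained for 0.1 s yields a 1 cm height change under these assumptions.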
In a possible implementation manner, the converting the reference depth image according to the rotation matrix to obtain the third depth image includes: acquiring position information and depth information of each first pixel point in the reference depth image; updating the position information and the depth information of each first pixel point based on the rotation matrix to obtain the updated position information and depth information of each first pixel point; and generating the third depth image according to the updated position information and depth information of each first pixel point.
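The per-pixel update described above can be sketched as back-projecting each valid pixel to a 3-D point, rotating it, and re-projecting to obtain the updated position and depth. This is an illustrative pinhole-model version; the function name `rotate_depth_pixels` and the intrinsics `fx`, `fy`, `cx`, `cy` are assumptions of this sketch, and resampling the scattered updated positions onto a regular image grid is omitted.

```python
import numpy as np

def rotate_depth_pixels(depth, R, fx, fy, cx, cy):
    """For each pixel with valid depth: back-project to a 3-D point,
    rotate it by R into the reference frame, then re-project to get the
    updated pixel position (u_new, v_new) and updated depth z_new."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    pts = np.stack([(u[valid] - cx) * z / fx,
                    (v[valid] - cy) * z / fy,
                    z], axis=1)
    rot = pts @ R.T                        # rotate points into reference frame
    z_new = rot[:, 2]
    u_new = rot[:, 0] * fx / z_new + cx    # re-projected pixel positions
    v_new = rot[:, 1] * fy / z_new + cy
    return u_new, v_new, z_new
```

With an identity rotation the sketch returns each pixel's original position and depth unchanged, which is a useful sanity check.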