
KR-20260063511-A - METHOD FOR RECOGNIZING THE LOCATION OF 3D POINT CLOUD DATA ON A MAP

KR-20260063511-A

Abstract

A method for recognizing the position of three-dimensional point cloud data on a map is provided. According to one aspect of the present invention, the method comprises: collecting 3D point cloud data based on the position of an object detected by a 3D LiDAR sensor; dividing the 3D point cloud data into a plurality of sector descriptors partitioned in the yaw direction when the data is viewed from above; generating a global descriptor by extracting feature information from each sector descriptor and synthesizing it; extracting a Peak Orientation Index (POI) having the maximum value in each column of the global descriptor; selecting a matching scan based on the difference between the peak orientation index of a query scan and the peak orientation index of each global descriptor in a database; and recognizing the current position by point cloud matching of the query scan and the matching scan.
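The descriptor construction summarized above (partitioning the top-view point cloud into yaw sectors and stacking one feature vector per sector) can be sketched as follows. This is a minimal illustration, not the patented method: the sector count, the maximum sensor range, and the hand-crafted max-height-per-radial-bin feature are all assumptions, whereas the patent learns per-sector features with an SPV-Conv network.

```python
import numpy as np

def build_global_descriptor(points, num_sectors=60, num_features=8,
                            max_range=80.0):
    """Divide a LiDAR scan into yaw-direction sectors (top view) and stack
    a per-sector feature vector for each sector into a global descriptor.

    points: (N, 3) array of x, y, z coordinates.
    Returns a (num_features, num_sectors) matrix; column s describes the
    sector covering yaw angles [s, s + 1) * 360 / num_sectors degrees.
    """
    # Yaw angle of each point in the top view, mapped to [0, 2*pi)
    yaw = np.arctan2(points[:, 1], points[:, 0]) % (2 * np.pi)
    sector_idx = ((yaw / (2 * np.pi)) * num_sectors).astype(int) % num_sectors

    descriptor = np.zeros((num_features, num_sectors))
    for s in range(num_sectors):
        sector_pts = points[sector_idx == s]
        if sector_pts.size == 0:
            continue
        # Placeholder hand-crafted feature: maximum point height per radial
        # bin (the patent instead extracts learned features per sector).
        r = np.linalg.norm(sector_pts[:, :2], axis=1)
        bin_idx = np.minimum((r / max_range * num_features).astype(int),
                             num_features - 1)
        for b in range(num_features):
            in_bin = sector_pts[bin_idx == b]
            if in_bin.size:
                descriptor[b, s] = in_bin[:, 2].max()
    return descriptor
```

Because each column corresponds to a fixed yaw sector, rotating the sensor in yaw circularly shifts the columns of the descriptor, which is what later makes direction estimation possible.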

Inventors

  • 김준식
  • 조성준

Assignees

  • Agency for Defense Development (국방과학연구소)

Dates

Publication Date
2026-05-07
Application Date
2024-10-30

Claims (8)

  1. A method for recognizing the position of three-dimensional point cloud data on a map, comprising: collecting 3D point cloud data based on the position of an object detected by a 3D LiDAR sensor; dividing the 3D point cloud data into a plurality of sector descriptors partitioned in the yaw direction when the data is viewed from above; generating a global descriptor by extracting and synthesizing feature information from each sector descriptor; selecting a matching scan by comparing a query scan with each global descriptor in a database; and recognizing the current position by point cloud matching of the query scan and the matching scan, wherein the step of generating the global descriptor comprises normalizing the point cloud data of each sector descriptor in the yaw direction and stacking the sector descriptors in yaw-direction order to generate the global descriptor.
  2. The method of claim 1, wherein the step of recognizing the current position comprises assigning the yaw-direction angle difference between the query scan and the matching scan as an initial value and aligning the query scan and the matching scan through an Iterative Closest Point (ICP) algorithm.
  3. The method of claim 1, wherein the step of generating the global descriptor comprises: normalizing the point cloud data of each sector descriptor in the yaw direction; inputting the point cloud data of each sector descriptor into a Sparse Point-Voxel Convolution (SPV-Conv) network to extract feature information, synthesizing the extracted features, and compressing them into a lightweight sector descriptor; and stacking the lightweight sector descriptors in yaw-direction order to generate the global descriptor.
  4. The method of claim 3, wherein the SPV-Conv network matches corresponding points between the point cloud data of query scans and the point cloud data of positive scans using a rotation- and translation-based ICP algorithm, and is trained so that corresponding points have similar feature information and non-corresponding points have dissimilar feature information.
  5. A method for recognizing the position of three-dimensional point cloud data on a map, comprising: collecting 3D point cloud data based on the position of an object detected by a 3D LiDAR sensor; dividing the 3D point cloud data into a plurality of sector descriptors partitioned in the yaw direction when the data is viewed from above; generating a global descriptor by extracting and synthesizing feature information from each sector descriptor; extracting a Peak Orientation Index (POI) having the maximum value in each column of the global descriptor; selecting a matching scan based on the difference between the peak orientation index of a query scan and the peak orientation index of each global descriptor in a database; and recognizing the current position by point cloud matching of the query scan and the matching scan.
  6. The method of claim 5, wherein the step of selecting the matching scan comprises: identifying the pattern of each global descriptor in the database based on the peak orientation index having the maximum value in each column of the extracted global descriptors; estimating, as the yaw-direction angle difference, the index with the highest frequency among the difference values between the patterns of the global descriptors in the database; aligning the global descriptors in the database by yaw-direction angle based on the peak orientation index of the query scan and comparing feature information; and selecting, as the matching scan, the scan with the smallest difference in feature information.
  7. The method of claim 5, wherein the step of selecting the matching scan comprises: identifying the pattern of each global descriptor in the database based on the peak orientation index having the maximum value in each column of the extracted global descriptors; representing the difference between the patterns of the global descriptors in the database as a histogram; and selecting, as the matching scan, the scan with the largest variance computed from the y-axis data of the histogram.
  8. The method of claim 5, wherein the step of recognizing the current position comprises assigning the yaw-direction angle difference between the query scan and the matching scan as an initial value and aligning the query scan and the matching scan through an Iterative Closest Point (ICP) algorithm.
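Claims 5 through 8 describe extracting a Peak Orientation Index per descriptor column and estimating the yaw angle between two scans from the most frequent difference between their POI patterns. The sketch below is one plausible reading of that idea, assuming a (features × sectors) descriptor whose columns circularly shift with sensor yaw; the vote-counting alignment and the function names are illustrative, not the patent's exact procedure.

```python
import numpy as np

def peak_orientation_index(descriptor):
    """POI: for each column (yaw sector), the row index of the maximum value."""
    return np.argmax(descriptor, axis=0)

def estimate_yaw_difference(query_desc, db_desc):
    """Estimate the yaw angle between two scans, in degrees, as the circular
    shift that makes their POI patterns agree most often (the
    'highest-frequency' difference value described in claim 6).
    """
    num_sectors = query_desc.shape[1]
    q_poi = peak_orientation_index(query_desc)
    d_poi = peak_orientation_index(db_desc)
    # One vote per sector whose POI matches after shifting the database POI
    votes = [int(np.sum(q_poi == np.roll(d_poi, shift)))
             for shift in range(num_sectors)]
    best_shift = int(np.argmax(votes))
    return best_shift * 360.0 / num_sectors
```

The estimated angle can then seed point cloud registration as the initial yaw value, as in claims 2 and 8, so that ICP starts close to the true alignment even when the route is revisited in the reverse direction.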

Description

Method for recognizing the location of 3D point cloud data on a map

The present invention relates to a method for recognizing the position of three-dimensional point cloud data on a map.

Autonomous driving can be classified into stages ranging from no automation to full automation, depending on how much the system is involved in driving and how much the driver controls the vehicle. The stages of autonomous driving are generally classified into the six levels defined by SAE International: Level 0 is no automation, Level 1 is driver assistance, Level 2 is partial automation, Level 3 is conditional automation, Level 4 is high automation, and Level 5 is full automation. Autonomous driving is performed through the mechanisms of perception, localization, path planning, and control, and therefore requires the ability to recognize a location on a map.

Existing 3D LiDAR sensor-based position recognition models do not consider directional information when processing sensor data, so they cannot determine an initial directional value for point cloud matching and thus do not support accurate position re-recognition. As a result, when a previously traveled route is revisited in the reverse direction, the accuracy of location recognition decreases because the direction the sensor faces changes even though the same area is being traversed. Models that do consider directional information to improve recognition accuracy take a long time to estimate the direction, resulting in a positional gap from previously visited paths.

FIG. 1 is a block diagram of an autonomous driving device according to one embodiment of the present invention.
FIG. 2 is a flowchart of a method for recognizing the position of three-dimensional point cloud data on a map according to an embodiment of the present invention.
FIG. 3 is a top view of 3D point cloud data according to one embodiment of the present invention.
FIG. 4 is a diagram showing the yaw-direction normalization of the point cloud data of each sector descriptor according to an embodiment of the present invention.
FIG. 5 is a diagram illustrating the generation of sector descriptors and a global descriptor according to an embodiment of the present invention.
FIGS. 6 and 8 are diagrams illustrating the learning of feature information according to embodiments of the present invention.
FIGS. 7 and 9 are formulas representing the learning of feature information according to embodiments of the present invention.
FIG. 10 is a diagram showing the peak orientation index according to one embodiment of the present invention.
FIG. 11 is a diagram showing the estimation of a direction difference value according to an embodiment of the present invention.
FIG. 12 is a diagram showing point cloud matching according to an embodiment of the present invention.
FIG. 13 is a table comparing the position recognition accuracy of the prior art and an embodiment of the present invention.
FIG. 14 is a diagram showing the movement path of an autonomous vehicle using the method according to an embodiment of the present invention.
FIG. 15 is a table comparing the direction estimation accuracy of the prior art and an embodiment of the present invention.
FIGS. 16 and 17 are graphs comparing the direction estimation accuracy of the prior art and an embodiment of the present invention.

Hereinafter, preferred embodiments of the present invention will be described in detail with