KR-20260067260-A - METHOD FOR SURFACE RECONSTRUCTION VIA DEPTH MAP RENDERING AND COLOR MAPPING
Abstract
The present invention relates to a method for reconstructing a surface through depth map rendering and color mapping. A method according to one aspect of the present invention is performed by an electronic device and comprises the steps of: rendering a depth map for each camera position by capturing images from camera positions on the surface of a virtual sphere centered at the center of point cloud data; generating two-dimensional RGB images by color mapping the pixel-wise depth values of each depth map; and reconstructing the surface by inputting the RGB images into a preset neural network model.
Inventors
- 박진선
- 박재형
- 박현수
Assignees
- 부산대학교 산학협력단
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-12-06
- Priority Date: 2024-11-05
Claims (4)
- A method for reconstructing a surface through depth map rendering and color mapping, performed by an electronic device, comprising the steps of: rendering a depth map for each camera position by capturing the point cloud data from camera positions on the surface of a virtual sphere centered at the center of the point cloud data; generating two-dimensional RGB images by color mapping the pixel-wise depth values of each depth map; and reconstructing the surface by inputting the RGB images into a preset neural network model.
- The method of claim 1, wherein the step of rendering the depth maps comprises: calculating a predetermined number of camera positions at equal intervals on the surface of a virtual sphere centered at the center of the point cloud data; and adjusting a predetermined proportion of the camera positions to points closer than the radius of the sphere and the remaining camera positions to points farther than the radius, so that the point cloud data is captured and depth maps are rendered at different angles and distances.
- The method of claim 1, wherein the step of rendering the depth maps comprises rendering each depth map by projecting rays from each camera position toward the point cloud data and, for each ray, selecting the depth of the point closest to the camera position among the points the ray intersects.
- The method of claim 1, further comprising, after rendering the depth maps and before generating the two-dimensional RGB images, a step of normalizing the pixel-wise depth values of each depth map, wherein the step of generating the two-dimensional RGB images converts each depth map into a two-dimensional RGB image by applying the normalized values to jet color mapping to obtain RGB values.
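The rendering described in claims 1 through 3 can be sketched in code. The patent does not specify how "equal intervals" on the sphere are computed or how projection is done; the sketch below assumes a Fibonacci lattice for the equal-interval placement, a fixed 0.8/1.2 radius split for claim 2's near/far adjustment, and an orthographic z-buffer as a stand-in for claim 3's closest-point-per-ray rule. Function names and parameters are illustrative, not from the patent.

```python
import numpy as np

def fibonacci_sphere_cameras(n, center, radius, near_ratio=0.25):
    """Place n camera positions at roughly equal intervals on a sphere
    around `center` (Fibonacci lattice), then move a fraction of them
    inside the radius and the rest outside, as in claim 2."""
    i = np.arange(n)
    phi = np.arccos(1 - 2 * (i + 0.5) / n)       # polar angle
    theta = np.pi * (1 + 5 ** 0.5) * i           # golden-angle azimuth
    dirs = np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)
    # a predetermined proportion closer than the radius, the rest farther
    r = np.where(i < int(near_ratio * n), 0.8 * radius, 1.2 * radius)
    return center + dirs * r[:, None]

def render_depth_map(points, cam_pos, center, res=64):
    """Depth map via orthographic projection onto the image plane facing
    `center`, keeping per pixel the depth of the point nearest the camera
    (claim 3's closest-intersection rule, realized as a z-buffer)."""
    forward = center - cam_pos
    forward = forward / np.linalg.norm(forward)
    up = np.array([0.0, 0.0, 1.0])
    if abs(forward @ up) > 0.99:                 # avoid degenerate basis
        up = np.array([0.0, 1.0, 0.0])
    right = np.cross(forward, up); right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    rel = points - cam_pos
    u, v, depth = rel @ right, rel @ true_up, rel @ forward
    # map plane coordinates into pixel indices
    half = np.abs(np.concatenate([u, v])).max() + 1e-9
    px = np.clip(((u / half + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    py = np.clip(((v / half + 1) / 2 * (res - 1)).astype(int), 0, res - 1)
    dmap = np.full((res, res), np.inf)
    for x, y, d in zip(px, py, depth):
        if 0 < d < dmap[y, x]:
            dmap[y, x] = d                       # keep the nearest point
    return dmap
```

A perspective ray caster with a point-splat radius would be closer to the claim's wording; the z-buffer above is the simplest approximation that preserves the "nearest point per pixel" behavior.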
Description
Method for Surface Reconstruction via Depth Map Rendering and Color Mapping

The present invention relates to a technique for reconstructing a surface from point cloud data. Point cloud data acquired from devices such as LiDAR is discrete, so depending on its resolution there are limits to identifying the actual surface. For example, even for just two points, many candidate surfaces can be derived depending on the curvature of the line connecting them, making it difficult to determine which of the many cases corresponds to the actual surface.

Accordingly, techniques for reconstructing a surface from point cloud data have been disclosed: techniques that generate a mesh composed of triangular faces from the points, and learning-based techniques that estimate the surface. Traditional mesh-generating surface reconstruction techniques have the advantage of being fast and capable of processing various point cloud inputs, but their reconstruction quality drops significantly when the point density is sparse. Conversely, existing learning-based surface reconstruction techniques perform well on inputs with sparse points but degrade on out-of-distribution shapes or non-uniform point distributions. In addition, existing technologies use only the coordinate information of each point when reconstructing the surface, so the amount of information they exploit is limited.

FIG. 1 is a conceptual diagram of a surface reconstruction method through depth map rendering and color mapping according to an embodiment of the present invention. FIG. 2 is a flowchart of the method according to an embodiment of the present invention. FIG. 3 is an example diagram illustrating the camera positions during rendering according to an embodiment of the present invention. FIG. 4 is an example diagram illustrating the points reflected during rendering according to an embodiment of the present invention. FIG. 5 shows the surface reconstruction result and the ground truth according to an embodiment of the present invention.

The advantages and features of the present invention, and the methods for achieving them, will become clear by referring to the embodiments described in detail below together with the accompanying drawings. However, the present invention is not limited to the embodiments disclosed below and may be implemented in various different forms; these embodiments are provided merely so that the disclosure of the present invention is complete and fully informs those skilled in the art of the scope of the invention, and the present invention is defined only by the claims. The terms used in this specification describe the embodiments and are not intended to limit the present invention. In this specification, the singular form includes the plural unless the context clearly indicates otherwise.

The present invention relates to a technique for reconstructing a surface from point cloud data. Referring to FIG. 1, the present invention may receive point cloud data (1001), perform rendering and color mapping to generate two-dimensional RGB images (1002), and input the two-dimensional RGB images (1002) into a preset surface reconstruction model to obtain a surface-reconstructed 3D model (1003).
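The color-mapping stage just described (normalize pixel-wise depths, then apply jet color mapping, per claim 4) can be sketched as follows. The patent does not give the exact colormap implementation; this numpy sketch uses the common piecewise-linear approximation of the jet colormap, and the handling of empty pixels (mapping them to the far end of the range) is an assumption.

```python
import numpy as np

def depth_to_rgb_jet(depth_map):
    """Normalize per-pixel depths to [0, 1], then convert each value to
    RGB with a piecewise-linear approximation of the jet colormap."""
    d = depth_map.astype(float)
    finite = np.isfinite(d)
    # pixels no point projected to: map to the far end (an assumption)
    d[~finite] = d[finite].max()
    lo, hi = d.min(), d.max()
    t = (d - lo) / (hi - lo + 1e-12)             # normalized depth
    # jet: blue -> cyan -> yellow -> red as t goes 0 -> 1
    r = np.clip(1.5 - np.abs(4 * t - 3), 0, 1)
    g = np.clip(1.5 - np.abs(4 * t - 2), 0, 1)
    b = np.clip(1.5 - np.abs(4 * t - 1), 0, 1)
    return (np.stack([r, g, b], axis=-1) * 255).astype(np.uint8)
```

The resulting uint8 images are the two-dimensional RGB images (1002) that would be fed to the surface reconstruction model.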
The present invention is characterized by the technical feature of reconstructing a surface with high performance, without prior training, by conveying to the surface reconstruction model not only the coordinate information of the point data but also additional information through the 2D RGB images.

Referring to FIG. 2, a surface reconstruction method through depth map rendering and color mapping according to one aspect of the present invention may comprise the steps of: rendering depth maps for camera positions by capturing images at camera positions on the surface of a virtual sphere centered at the center of the point cloud data (S10); normalizing the pixel-wise depth values of each depth map (S20); generating two-dimensional RGB images by color mapping the pixel-wise depth values of each depth map (S30); and reconstructing the surface by inputting the RGB images into a preset neural network model (S40).

The depth map rendering step (S10) may calculate a predetermined number of camera positions set at equal intervals on the surface of a virtual sphere centered at the center of the point cloud data, as shown in FIG. 3, and render multiple depth maps by capturing the point cloud data at each camera position. At this time, among the multiple camera positions, a predetermined proportion may be adjusted to points closer than the radius of the sphere and the remainder to points farther than the radius.