
CN-116385505-B - Data processing method, device, system and storage medium

CN 116385505 B

Abstract

The invention discloses a data processing method, device, system and storage medium. The method comprises: determining feature points in first point cloud data and feature points in second point cloud data, wherein the first point cloud data and the second point cloud data represent different parts of the same object; performing feature matching on the first point cloud data and the second point cloud data to determine the feature points between them that meet a feature matching condition, forming a plurality of feature point pairs; for one or more of the plurality of feature point pairs, determining a transformation matrix under which the spatial distances between the feature points in the feature point pairs meet a proximity condition; and performing coordinate transformation on one or more of the plurality of feature point pairs through the transformation matrix to register the first point cloud data with the second point cloud data. The data processing method according to embodiments of the invention can improve the accuracy and stability of registration based on object surface features.
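The abstract describes estimating a transformation matrix that brings matched feature point pairs close together in space. The patent does not disclose the estimation algorithm itself; the sketch below is a generic least-squares rigid-transform fit between matched pairs using the standard Kabsch/SVD method, purely for illustration (all function and variable names are this sketch's own, not the patent's):

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    src points onto dst points, via the Kabsch/SVD method.
    src, dst: (N, 3) arrays of matched feature point pairs."""
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_mean).T @ (dst - dst_mean)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

With noise-free correspondences this recovers the exact rotation and translation; with noisy or partially wrong matches it gives the least-squares best fit, which is why the outlier-pair removal described in the claims matters.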

Inventors

  • Wang Bin
  • Yu Jingming
  • Feng Xiaoduan
  • Pan Pan
  • Jin Rong

Assignees

  • Alibaba Group Holding Limited (阿里巴巴集团控股有限公司)

Dates

Publication Date
2026-05-12
Application Date
2017-10-20

Claims (14)

  1. A data processing method, comprising: performing three-dimensional scanning on an object in a real scene according to a plurality of shooting view angles to obtain a plurality of point data of the object; constructing a plurality of point cloud data of the object under the plurality of shooting view angles based on the plurality of point data, wherein the plurality of point cloud data comprises point cloud data in at least two coordinate systems; and performing unified-coordinate-system processing on the point cloud data in the at least two coordinate systems to obtain a three-dimensional point cloud data model of the object, wherein the three-dimensional point cloud data model is obtained by coordinate-transforming the point cloud data in the at least two coordinate systems according to a transformation matrix corresponding to that point cloud data; the transformation matrix is the coordinate transformation matrix corresponding to the minimum value of the spatial distance of a plurality of feature point pairs contained in the point cloud data; the minimum value is determined by repeatedly taking values of a precision control parameter within its value range and calculating the spatial distance using an evaluation model; the plurality of feature point pairs are determined based on geometric structural features and texture features of feature points in a polygonal patch corresponding to the point cloud data in each coordinate system; the geometric structural features comprise at least normal vectors and curvatures of sampling points in the polygonal patch, and the texture features comprise at least brightness and gray scale of the sampling points.
  2. The method of claim 1, wherein performing unified-coordinate-system processing on the point cloud data in the at least two coordinate systems to obtain a three-dimensional point cloud data model of the object comprises: coordinate-transforming the point cloud data in the at least two coordinate systems according to a unified coordinate system to obtain the three-dimensional point cloud data model of the object.
  3. The method of claim 2, wherein the unified coordinate system corresponds to a transformation matrix comprising a rotation component and a translation component, and transforming the point cloud data in the at least two coordinate systems according to the unified coordinate system to obtain the three-dimensional point cloud data model of the object comprises: coordinate-transforming the point cloud data in the at least two coordinate systems according to the rotation component and the translation component to obtain the three-dimensional point cloud data model of the object, wherein the rotation component represents the rotation relation, and the translation component represents the translation relation, between the point cloud data of each two of the at least two coordinate systems.
  4. The method according to any one of claims 1-3, wherein the point cloud data in each of the at least two coordinate systems corresponds to a polygonal patch model comprising a plurality of polygonal patches containing point data, and the unified coordinate system corresponds to a transformation matrix; and performing unified-coordinate-system processing on the point cloud data in the at least two coordinate systems to obtain the three-dimensional point cloud data model of the object comprises: taking the polygonal patch model corresponding to the point cloud data in each coordinate system as a patch to be processed; and coordinate-transforming, through the transformation matrix, the point data of each of the plurality of patches to be processed of each polygonal patch model to obtain the three-dimensional point cloud data model of the object.
  5. The method according to any one of claims 1-3, wherein performing unified-coordinate-system processing on the point cloud data in the at least two coordinate systems to obtain the three-dimensional point cloud data model of the object comprises: taking the polygonal patch model corresponding to the point cloud data in each coordinate system as a patch to be processed, to obtain a plurality of patches to be processed of the object, the plurality of patches to be processed comprising a first patch to be processed and a second patch to be processed; determining feature points in the first patch to be processed and feature points in the second patch to be processed; performing feature matching between the surface features of the feature points in the first patch to be processed and the surface features of the feature points in the second patch to be processed, to obtain feature point pairs of the first and second patches to be processed that meet a feature matching condition; removing erroneous feature point pairs between the first and second patches to be processed from the feature point pairs that meet the feature matching condition, to obtain a feature point matching result of the first and second patches to be processed; and generating the three-dimensional point cloud data model of the object according to the feature point matching result of the first and second patches to be processed.
  6. The method of claim 5, wherein determining the feature points in the first patch to be processed and the feature points in the second patch to be processed comprises: extracting surface features of the object from the first patch to be processed, and extracting surface features of the object from the second patch to be processed; and determining the feature points in the first patch to be processed according to the surface features extracted from it, and the feature points in the second patch to be processed according to the surface features extracted from it.
  7. The method of claim 5, wherein the feature point pairs comprise erroneous feature point pairs and valid feature point pairs, and removing the erroneous feature point pairs between the first and second patches to be processed from the feature point pairs that meet the feature matching condition, to obtain the feature point matching result of the first and second patches to be processed, comprises: constructing an evaluation model based on the spatial distance and the precision control parameter; calculating, with the evaluation model, the spatial distance between matched feature point pairs of the first and second patches to be processed that meet the feature matching condition; determining the feature point pairs corresponding to that spatial distance as the valid feature point pairs among the feature point pairs that meet the feature matching condition; and removing the erroneous feature point pairs other than the valid feature point pairs from the feature point pairs that meet the feature matching condition, to obtain the feature point matching result of the first and second patches to be processed.
  8. The method of claim 5, further comprising: reconstructing the point cloud data in each coordinate system through a three-dimensional reconstruction technique to obtain the polygonal patch model corresponding to the point cloud data in that coordinate system.
  9. The method of claim 5, wherein the patch to be processed comprises point cloud data, and extracting the surface features of the object from the first patch to be processed and from the second patch to be processed comprises: determining the point cloud data in the first patch to be processed and the point cloud data in the second patch to be processed as a point set of the surface features of the object; and analyzing the point set to obtain the surface features of the first patch to be processed and the surface features of the second patch to be processed.
  10. The method according to any one of claims 1-3, wherein performing three-dimensional scanning on an object in a real scene according to a plurality of shooting view angles to obtain a plurality of point data of the object comprises: projecting structured light onto the object according to the plurality of shooting view angles; and acquiring feedback images at the shooting view angles to obtain the plurality of point data of the object.
  11. The method according to any one of claims 1-3, wherein the plurality of point data includes point data for each of the plurality of shooting view angles, and constructing, based on the plurality of point data, the plurality of point cloud data of the object under the plurality of shooting view angles comprises: determining the data set of point data of each shooting view angle as the point cloud data of that shooting view angle; and determining the point cloud data of each shooting view angle as the point cloud data of one coordinate system.
  12. A data processing method, comprising: performing three-dimensional scanning on an object in a real scene through a virtual reality technology or an augmented reality technology to obtain a plurality of point data of the object at each of a plurality of shooting view angles; constructing a plurality of point cloud data of the object under the plurality of shooting view angles based on the plurality of point data, wherein the plurality of point cloud data comprises point cloud data in at least two coordinate systems; and performing unified-coordinate-system processing on the point cloud data in the at least two coordinate systems to obtain a three-dimensional point cloud data model of the object for the virtual reality technology or the augmented reality technology, wherein the three-dimensional point cloud data model is obtained by coordinate-transforming the point cloud data in the at least two coordinate systems according to a transformation matrix corresponding to that point cloud data; the transformation matrix is the coordinate transformation matrix corresponding to the minimum value of the spatial distance of a plurality of feature point pairs in the point cloud data; the minimum value is determined by repeatedly taking values of a precision control parameter within its value range and calculating the spatial distance using an evaluation model; the plurality of feature point pairs are determined based on geometric structural features and texture features of feature points in a polygonal patch corresponding to the point cloud data in each coordinate system; the geometric structural features comprise at least normal vectors and curvatures of sampling points in the polygonal patch, and the texture features comprise at least brightness and gray scale of the sampling points.
  13. A data processing apparatus, comprising a memory and a processor, wherein the memory is configured to store a computer program, and the processor is configured to execute the computer program stored in the memory, which, when run, causes the processor to perform the steps of the data processing method according to any one of claims 1 to 11 or the steps of the data processing method according to claim 12.
  14. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to perform the steps of the data processing method of any one of claims 1 to 11 or the steps of the data processing method of claim 12.
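Claims 1 and 7 describe removing erroneous feature point pairs with an evaluation model built on the spatial distance and a precision control parameter, where the parameter is repeatedly varied over its value range. The claims do not give the evaluation model's concrete form; the sketch below assumes a simple illustrative one (a distance threshold `eps` standing in for the precision control parameter, scored by the mean residual of retained pairs plus a penalty for discarded pairs). It is a hedged stand-in, not the patent's model:

```python
import numpy as np

def remove_false_pairs(pairs_a, pairs_b, eps_values):
    """Sweep a precision parameter eps over its value range; for each eps,
    keep the feature point pairs whose spatial distance is below eps, and
    score the result with an illustrative evaluation model (mean residual
    of kept pairs + a per-discarded-pair penalty). Returns the boolean
    keep-mask of the best-scoring eps.
    pairs_a, pairs_b: (N, 3) arrays of matched feature point coordinates."""
    d = np.linalg.norm(pairs_a - pairs_b, axis=1)  # spatial distances
    best_mask, best_score = None, np.inf
    for eps in eps_values:
        keep = d < eps
        if not keep.any():
            continue
        score = d[keep].mean() + eps * (~keep).sum() / len(d)
        if score < best_score:
            best_mask, best_score = keep, score
    return best_mask
```

The surviving pairs would then feed the transformation-matrix estimation, so that a few grossly mismatched pairs do not skew the registration.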

Description

Data processing method, device, system and storage medium

Technical Field

The present invention relates to the field of computers, and in particular to a data processing method, apparatus, system, and storage medium.

Background

Whether in the Virtual Reality (VR) field or the Augmented Reality (AR) field, forming a complete three-dimensional model by registering acquired three-dimensional (3D) single-view point clouds is a key step in 3D display; a collection of point data on the outer surface of an object is called a point cloud. Since a 3D single-view point cloud can only feed back three-dimensional object information under its own view angle, obtaining the three-dimensional information of a complete or full-view object requires registering a plurality of 3D single-view point clouds, which is called point cloud registration. Registration by object surface features is often limited by the accuracy of feature matching, is severely affected by noise, and has poor stability.

Disclosure of Invention

Embodiments of the invention provide a data processing method, device, system and storage medium, which can improve the accuracy and stability of object surface feature registration.
According to an aspect of an embodiment of the present invention, there is provided a data processing method, comprising: determining feature points in first point cloud data and feature points in second point cloud data, wherein the first point cloud data and the second point cloud data represent different parts of the same object; performing feature matching on the first point cloud data and the second point cloud data to determine the feature points between them that meet a feature matching condition, forming a plurality of feature point pairs; for one or more of the plurality of feature point pairs, determining a transformation matrix under which the spatial distances between the feature points in the feature point pairs meet a proximity condition; and performing coordinate transformation on one or more of the plurality of feature point pairs through the transformation matrix to register the first point cloud data with the second point cloud data.
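The transformation matrix in this aspect combines the rotation and translation components described in claim 3. A generic sketch (not the patent's implementation; the names are this sketch's own) packs the two components into one homogeneous 4x4 matrix and applies it to an entire point cloud at once:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation component R (3x3) and a translation component t (3,)
    into a single 4x4 homogeneous transformation matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def apply_transform(T, points):
    """Coordinate-transform an (N, 3) point cloud into the unified
    coordinate system using homogeneous coordinates."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return (homo @ T.T)[:, :3]
```

Packing both components into one matrix means registering one coordinate system into another is a single matrix product per point, and chained registrations across several coordinate systems compose by matrix multiplication.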
According to another aspect of an embodiment of the present invention, there is provided a data processing apparatus, comprising: a feature point acquisition module, configured to determine feature points in first point cloud data and feature points in second point cloud data, wherein the first point cloud data and the second point cloud data represent different parts of the same object; a feature matching module, configured to perform feature matching on the first point cloud data and the second point cloud data to determine the feature points between them that meet a feature matching condition, forming a plurality of feature point pairs; a feature point pair screening module, configured to determine, for one or more of the plurality of feature point pairs, a transformation matrix under which the spatial distance between the feature points in the feature point pairs meets a proximity condition; and a data registration module, configured to perform coordinate transformation on one or more of the plurality of feature point pairs through the transformation matrix to register the first point cloud data with the second point cloud data. According to yet another aspect of an embodiment of the present invention, there is provided a data processing system comprising a memory for storing a program and a processor for reading executable program code stored in the memory to perform the above data processing method. According to still another aspect of the embodiments of the present invention, there is provided a computer-readable storage medium having stored therein instructions which, when executed on a computer, cause the computer to perform the data processing method of the above aspects.
According to the data processing method, device, system and storage medium of the embodiments, the accuracy and stability of registration of object surface features can be improved. According to another aspect of the embodiment of the present invention, there is provided a data processing method, comprising: performing three-dimensional scanning on an object in a real scene according to a plurality of shooting view angles to obtain a plurality of point data of the object; constructing a plurality of point cloud data of the object under the plurality of shooting view angles based on the plurality of point data, wherein the plurality of point cloud data comprises point cloud data in at least two coordinate systems; and performing unified coordinate system proces