
CN-121999133-A - Three-dimensional implicit field reconstruction method based on Manhattan constraint stretching and rotation primitive fusion

CN121999133A

Abstract

The invention discloses a three-dimensional implicit field reconstruction method based on the fusion of Manhattan-constrained stretching and rotation primitives, belonging to the technical fields of computer graphics, three-dimensional modeling and reverse engineering. The method normalizes voxel data and spatial sampling data of a target three-dimensional object and feeds them into a feature encoding network to obtain deep features representing the geometric distribution. Based on the Manhattan assumption, the stretching direction and rotation-axis direction are constrained to align with the three principal axes. From the features, the centers, stretching lengths or rotation ranges of the stretching/rotation primitives and the corresponding implicit sketch representations are predicted in parallel; each implicit sketch field is extended along its principal axis into a three-dimensional SDF; the three-dimensional SDFs are integrated into a complete implicit field through Boolean operations, multi-scale fusion or weighted combination; and the three-dimensional surface of the target object is obtained through an iso-surface extraction algorithm to complete the reconstruction. The invention remarkably improves the robustness, structural consistency and noise resistance of direction estimation, and outperforms the prior art in high-accuracy reconstruction of regular man-made objects.

Inventors

  • Li Nannan
  • Ren Kexu
  • Yang Dapeng
  • Xu Chuanhang
  • Gao Qianyi
  • Li Wenhao
  • Zhou Jun

Assignees

  • Dalian Maritime University (大连海事大学)

Dates

Publication Date
2026-05-08
Application Date
2026-01-19

Claims (10)

  1. A three-dimensional implicit field reconstruction method based on Manhattan-constrained stretching and rotation primitive fusion, characterized by comprising the following steps: acquiring voxel data and spatial sampling data of a target three-dimensional object, and preprocessing the voxel data and the spatial sampling data; feeding the preprocessed data into a feature encoding network for spatial feature encoding to obtain a deep feature representation characterizing the geometric distribution of the target; constraining the poses of the stretching primitives and the rotation primitives, according to the Manhattan-world assumption, to align with the directions of the three principal axes; based on the deep feature representation, predicting the center-point coordinates, extension length and two-dimensional sketch implicit representation of each stretching primitive, and predicting the center-point coordinates, rotation range and two-dimensional sketch implicit representation of each rotation primitive; extending the two-dimensional sketch implicit representation of each primitive along its corresponding principal-axis direction into a three-dimensional SDF; based on consistency constraints on the spatial positions, scale ranges and generation directions of the stretching and rotation primitives, adopting a unified fusion algorithm to convert the two-dimensional SDFs of the stretching and rotation primitives into three-dimensional SDFs and integrate them into a continuous implicit field representation of the complete object; and extracting an iso-surface from the continuous implicit field representation of the complete object by an iso-surface extraction algorithm to obtain a three-dimensional surface model of the target object, completing the three-dimensional geometric reconstruction.
  2. The method of claim 1, wherein preprocessing the voxel data and the spatial sampling data comprises: performing coordinate scaling, centering, noise filtering and normalization on the voxel data and the spatial sampling data.
  3. The method of claim 1, wherein the unified fusion algorithm comprises Boolean operation, multi-scale fusion or weighted combination.
  4. A method according to claim 3, wherein converting the two-dimensional SDFs of the stretching and rotation primitives into three-dimensional SDFs and integrating them into a continuous implicit field representation of the complete object using a unified fusion algorithm comprises: in the local coordinate system of a primitive, defining a height variable h as the stretching height or rotation range, and an axial variable z as the coordinate of a sampling point along the stretching direction or the rotation-axis direction, and on this basis describing two regions: region A, corresponding to the height range |z| ≤ h, which limits the volume between the upper and lower planes of a stretching primitive or the effective axial height interval of a rotation primitive; and region B, formed by extending the two-dimensional sketch implicit field infinitely into three-dimensional space, corresponding to an infinite column along the height direction for a stretching primitive and to an infinite body of revolution about the local axis for a rotation primitive; for any point p, if its dimension-reduced coordinates q satisfy d_sketch(q) ≤ 0, the point is considered to belong to region B; relative to regions A and B, the distance from a sampling point to the final primitive surface arises from the following cases: if the sampling point is within the height range but outside the sketch, i.e. p ∈ A ∩ Bᶜ, where the superscript c denotes the complement, the distance is determined by the sketch distance; if the sampling point is inside the sketch but outside the effective height range, i.e. p ∈ Aᶜ ∩ B, the distance is determined by the signed distance from the point to the upper and lower planes; if the sampling point is in neither the sketch nor the height range, i.e. p ∈ Aᶜ ∩ Bᶜ, the sketch distance of the point and its distance to the upper and lower planes must be considered simultaneously; if the sampling point is inside both the sketch and the height range, i.e. p ∈ A ∩ B, the larger of the sketch distance and the distance to the upper and lower planes is taken; unifying the above cases, the three-dimensional implicit field of the i-th primitive is expressed as: f_i(p) = min(max(d_sketch(q), d_h(p)), 0) + ‖(max(d_sketch(q), 0), max(d_h(p), 0))‖₂; wherein d_h(p) = |z| − h represents the signed distance from the point to the height boundary: for a stretching primitive this value is the normal distance of the point to the upper and lower planes, and for a rotation primitive it corresponds to the offset of the point from the body of revolution along the extension direction; the above piecewise expression, similar to the implicit field derivation of a cylinder, is arranged into this compact form through the min/max operations and the two-norm; the three-dimensional implicit field f_i is then mapped to an occupancy value through a Sigmoid function: O_i(p) = Sigmoid(−f_i(p)/τ); wherein τ is a temperature coefficient adjusting the steepness of the function.
  5. The method of claim 1, wherein the feature encoding network is a three-dimensional convolutional neural network.
  6. The method of claim 5, wherein predicting the center-point coordinates, extension length and implicit representation of the sketch outline shape of a stretching primitive, and the center-point coordinates, rotation range and implicit representation of the sketch outline shape of a rotation primitive, based on the deep feature representation comprises: taking the deep features as input and outputting, in parallel through a fully-connected decoder, the parameters of a stretching box and the parameters of a rotation box; in the local coordinate system of the stretching box, the x-y plane determines the plane of the stretching sketch, the z axis is the stretching direction, and the height of the stretching box equals twice the stretching operation height; in the local coordinate system of the rotation box, a local coordinate plane determines the plane of the rotation sketch, the z axis is the rotation axis, and the height of the rotation box equals twice the rotation operation range; performing local coordinate transformation and two-dimensional dimension-reduction mapping on the three-dimensional sampling points within the stretching box and the rotation box, so that they are projected onto the corresponding stretching sketch plane or rotation sketch plane respectively; concatenating the projected two-dimensional coordinate points with the deep feature representation and feeding them into the respective sketch prediction networks, which compute the signed distance from each sampling point to the sketch outline, the distance being negative for points inside the sketch and positive for points outside; denoting the i-th sketch prediction network as an implicit function S_i, this is formalized as: d_sketch(q) = S_i(q, F); wherein q denotes the two-dimensional coordinates of a sampling point in the i-th primitive after linear transformation and dimension-reduction mapping, and F denotes the deep feature representation; the obtained d_sketch is the implicit representation of the stretching or rotation sketch outline shape.
  7. The method of claim 6, wherein each sketch prediction network comprises fully-connected layers with an activation function applied between layers, and the last layer clamps the output distance to a fixed range.
  8. The method of claim 6, wherein the two-dimensional dimension-reduction mapping of the three-dimensional sampling points within the stretching box and the rotation box comprises: within the stretching box, taking the x-y plane of the local coordinate system as the sketch plane and directly taking the first two components of the local coordinates, which is equivalent to projecting the sampling point vertically along the z axis onto the sketch plane; within the rotation box, the sketch is generated by revolving around the local z axis, so a polar-coordinate rotation mapping is adopted and the radial and axial coordinates are taken as the two-dimensional dimension-reduced coordinates.
  9. The method of claim 1, wherein extending the two-dimensional sketch implicit representation of each primitive into a three-dimensional SDF along the corresponding principal-axis direction comprises: based on the two-dimensional sketch implicit field, regarding both the stretching and rotation primitives as continuous continuations of the two-dimensional sketch distance field along the linear direction or around the rotation axis, and generating the corresponding three-dimensional implicit field through a sampled, differentiable conversion.
  10. The method of claim 1, wherein extracting an iso-surface from the complete implicit field using an iso-surface extraction algorithm to obtain a three-dimensional surface model of the target object and completing the three-dimensional geometric reconstruction comprises: fusing the occupancy values of all primitives through a weighted combination: O(p) = Σᵢ wᵢ·Oᵢ(p); wherein wᵢ modulates the relative weights of the different primitives, and O is the occupancy function of the final reconstructed shape, from which the iso-surface is extracted.
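The per-primitive implicit fields of claim 4 and the occupancy fusion of claim 10 can be sketched as follows. This is an illustrative NumPy reconstruction based on the standard extruded/revolved SDF construction that the claims parallel; the circle sketch, the temperature value, and the fusion weights are placeholders, not the patent's learned quantities (the actual method predicts the sketch SDF with a neural network).

```python
import numpy as np

def sketch_sdf_circle(q, radius=0.5):
    """Toy 2D sketch SDF: a circle (negative inside, positive outside).
    A learned sketch prediction network would replace this in the method."""
    return np.linalg.norm(q, axis=-1) - radius

def extrude_sdf(p, sketch_sdf, half_height):
    """Extrude a 2D sketch SDF along local z, following the cylinder-style
    piecewise derivation: f = min(max(d, dh), 0) + ||(max(d,0), max(dh,0))||."""
    d = sketch_sdf(p[..., :2])               # sketch distance in the x-y plane
    dh = np.abs(p[..., 2]) - half_height     # signed distance to the height slabs
    inside = np.minimum(np.maximum(d, dh), 0.0)
    outside = np.linalg.norm(
        np.stack([np.maximum(d, 0.0), np.maximum(dh, 0.0)], axis=-1), axis=-1)
    return inside + outside

def revolve_sdf(p, sketch_sdf):
    """Revolve a 2D sketch SDF (defined in the radial/axial half-plane)
    about the local z axis, via the polar mapping of claim 8."""
    r = np.linalg.norm(p[..., :2], axis=-1)  # radial coordinate
    return sketch_sdf(np.stack([r, p[..., 2]], axis=-1))

def occupancy(f, tau=0.05):
    """Map an SDF to a soft occupancy in (0, 1); tau is the temperature
    coefficient controlling the steepness of the Sigmoid."""
    return 1.0 / (1.0 + np.exp(f / tau))

def fuse(occs, weights):
    """Weighted combination of per-primitive occupancies (claim 10)."""
    w = np.asarray(weights).reshape(-1, 1)
    return np.clip((w * np.asarray(occs)).sum(axis=0), 0.0, 1.0)
```

For a point at the origin inside a circle-based extrusion of half-height 0.5, `extrude_sdf` returns −0.5, matching the sign convention in claim 6 (negative inside, positive outside); its occupancy is then close to 1.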

Description

Three-dimensional implicit field reconstruction method based on Manhattan constraint stretching and rotation primitive fusion

Technical Field

The invention belongs to the technical fields of computer graphics, three-dimensional modeling and reverse engineering, and particularly relates to a three-dimensional implicit field reconstruction method based on the fusion of Manhattan-constrained stretching and rotation primitives.

Background

With the continuous development of three-dimensional data acquisition technology, recovering complete and accurate three-dimensional shapes from irregular geometric data such as point clouds, meshes or depth maps has become an important research direction in the fields of computer graphics and inverse geometric reconstruction. Traditional geometric reconstruction methods rely on explicit geometric processing pipelines, such as surface fitting, normal estimation and mesh optimization, and are limited when processing complex structures or noisy data. In recent years, three-dimensional reconstruction methods based on implicit fields have received attention because of their strong continuous expression capability and high topological flexibility. The Signed Distance Function (SDF), as an important implicit field representation, implicitly describes geometric shape by defining the signed distance from a spatial point to the surface of an object, providing a foundation for high-quality continuous three-dimensional reconstruction. To enhance the reconstruction of objects with regular generative structures, primitive operations are introduced into implicit field reconstruction systems as an efficient approach.
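The SDF sign convention described above can be illustrated with a minimal sketch; the sphere used here is purely illustrative and not part of the patented method:

```python
import math

def sphere_sdf(x, y, z, radius=1.0):
    """Signed distance from a point to a sphere's surface:
    negative inside, zero on the surface, positive outside."""
    return math.sqrt(x * x + y * y + z * z) - radius

# The object's surface is exactly the zero level set of the SDF.
print(sphere_sdf(0.0, 0.0, 0.0))  # -1.0 (inside)
print(sphere_sdf(1.0, 0.0, 0.0))  #  0.0 (on the surface)
print(sphere_sdf(2.0, 0.0, 0.0))  #  1.0 (outside)
```

An iso-surface extraction algorithm such as marching cubes recovers the surface by locating this zero level set.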
A stretching primitive (Extrusion Primitive) may be used to generate a linearly extended structure from a two-dimensional profile, and a rotation primitive (Revolve Primitive) may be used to generate a body of revolution from a profile curve; a large number of industrial components and everyday objects are composed of these two types of operations. Furthermore, in practical scenarios the principal structural directions of many objects follow approximately orthogonal spatial laws, i.e. the "Manhattan-world Assumption", which holds that the principal faces, edges or operation axes of objects generally coincide with three orthogonal directions. Introducing this prior into reconstruction restricts the direction space to three principal axes, thereby simplifying the learning and prediction process. However, when combining primitive operations with implicit field reconstruction, existing approaches generally lack a mechanism that stably constrains the structure-generation direction of objects. Specifically, in the prior art the stretching direction or rotation-axis direction is usually predicted in a continuous rotation space; the parameter regression depends heavily on quaternion or three-dimensional vector representations and, affected by nonlinear spatial characteristics and sign ambiguity, easily becomes unstable during direction prediction, so that the generation direction of a primitive jumps or drifts. In the absence of an effective direction prior, the implicit field relies only on local data fitting, making it difficult to maintain overall geometric consistency in regions governed by explicit stretching or rotation generation rules.
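One simple way to realize the Manhattan-world direction constraint described above is to snap a freely predicted axis vector to the nearest of the three canonical axes. The following is an illustrative sketch of that idea, not the patent's exact mechanism (which constrains the pose predictions themselves):

```python
import numpy as np

def snap_to_principal_axis(direction):
    """Snap a freely predicted stretch/rotation axis to the closest of the
    three canonical axes, as suggested by the Manhattan-world assumption."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    k = int(np.argmax(np.abs(d)))   # component with the largest |cosine|
    snapped = np.zeros(3)
    snapped[k] = np.sign(d[k])      # preserve the predicted orientation
    return snapped
```

For example, a noisy predicted axis `[0.1, 0.9, 0.05]` snaps to `[0, 1, 0]`, removing the jitter that continuous-rotation regression can introduce.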
Especially when the input data contain noise, missing regions or incomplete structural information, the direction-instability problem is further amplified, and the reconstruction result ultimately exhibits morphological distortion, broken symmetry or axial deviation in regular structural areas. How to stably and reliably constrain the primitive generation direction during implicit field reconstruction, so that regular generative structures remain accurately expressed under complex data conditions, is therefore the key technical defect that is hardest to solve in the prior art. In view of the above problems, a three-dimensional reconstruction method is needed that can effectively introduce the Manhattan direction constraint, stabilize the predicted primitive directions, and fuse stretching and rotation structural priors, so as to achieve accurate, robust and geometrically consistent three-dimensional reconstruction of objects with regular generation rules.

Disclosure of Invention

In view of the above, the invention provides a three-dimensional implicit field reconstruction method based on Manhattan-constrained stretching and rotation primitive fusion, which aims to solve the problems of inconsistent geometric reconstruction and poor robustness to noisy or incomplete input data caused by difficulty in identifying structure generation rules, unstable rotation-axis direction prediction, and lack