CN-121982263-A - Multi-view texture synthesis method and device, electronic equipment and storage medium
Abstract
The present disclosure provides a multi-view texture synthesis method and apparatus, an electronic device, and a storage medium. Vertex view weights are constructed at the vertex level by combining Euclidean distance and viewing angle; the vertex weights are then interpolated into sampling-point-level view blending weights in the texture parameter domain using the barycentric coordinates of each texture sampling point relative to its target triangle patch; and weighted fusion is realized by combining visibility constraints, weight normalization, and multi-view projection with interpolated sampling. As a result, the fusion weights vary continuously inside each patch, view jumps and seam artifacts between adjacent sampling points are reduced, the continuity and consistency of texture synthesis are improved, and interference from invalid views under occlusion or unfavorable viewing conditions is suppressed, yielding a more stable and natural final texture map that is convenient for texture-block packing and export.
Inventors
- YU TIAN
Assignees
- 深圳市其域创新科技有限公司
Dates
- Publication Date
- 20260505
- Application Date
- 20260409
Claims (10)
- 1. A multi-view texture synthesis method, comprising: acquiring a three-dimensional mesh model to be textured, a multi-view image set, and camera parameters corresponding to the multi-view image set, and determining vertex view weights of mesh vertices of the three-dimensional mesh model under each view based on the three-dimensional mesh model and the camera parameters; constructing a texture parameter domain of the three-dimensional mesh model, and determining a set of texture sampling points to be synthesized in the texture parameter domain; for any texture sampling point, determining a target triangle patch corresponding to the texture sampling point, calculating the barycentric coordinates of the texture sampling point in the target triangle patch, and interpolating the vertex view weights of the three vertices of the target triangle patch according to the barycentric coordinates to obtain the view blending weight of the texture sampling point; mapping the texture sampling point to at least one view using the camera parameters according to the view blending weight, performing color sampling, and weighting and fusing the sampled colors to obtain an output color of the texture sampling point; and generating a texture map based on the output colors of the texture sampling points in the texture parameter domain, and exporting the texture map.
- 2. The method according to claim 1, wherein determining vertex view weights of mesh vertices of the three-dimensional mesh model under each view based on the three-dimensional mesh model and the camera parameters comprises: for any mesh vertex and any viewpoint, calculating the Euclidean distance and the viewing angle between the mesh vertex and the viewpoint; and constructing an unnormalized vertex view weight of the mesh vertex under the view based on the Euclidean distance and the viewing angle, wherein the unnormalized vertex view weight is jointly determined by a distance term and an angle term, and adjustable parameters are introduced to respectively adjust the contribution ratios of the distance term and the angle term to the unnormalized vertex view weight.
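The claim fixes only the structure of the weight (a distance term and an angle term with tunable contribution ratios), not its functional form. A minimal sketch, assuming an inverse-square distance term, a clamped-cosine angle term, and exponents `lam_d` / `lam_a` as the adjustable parameters:

```python
import numpy as np

def vertex_view_weight(vertex, normal, cam_center, lam_d=1.0, lam_a=1.0, eps=1e-8):
    """Unnormalized view weight for one mesh vertex under one viewpoint.

    Combines a distance term (closer cameras weigh more) and an angle term
    (views aligned with the surface normal weigh more). The exponents
    lam_d / lam_a balance the two terms' contributions. The exact
    functional form is an assumption; the claim only states that both
    terms contribute with adjustable ratios.
    """
    v = cam_center - vertex                    # line-of-sight vector
    d = np.linalg.norm(v)                      # Euclidean distance
    cos_theta = np.dot(normal, v) / (d + eps)  # cosine of the viewing angle
    dist_term = 1.0 / (d * d + eps)
    angle_term = max(cos_theta, 0.0)           # back-facing views contribute 0
    return (dist_term ** lam_d) * (angle_term ** lam_a)
```

A camera twice as far away with the same viewing angle gets a quarter of the weight; a camera behind the surface gets zero.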
- 3. The method according to claim 2, wherein determining vertex view weights of mesh vertices of the three-dimensional mesh model under each view based on the three-dimensional mesh model and the camera parameters further comprises: when a mesh vertex is not visible in a target view or exceeds a preset viewing-angle range, resetting the unnormalized vertex view weight corresponding to the target view to 0; and normalizing the unnormalized vertex view weights of the same mesh vertex under multiple views to obtain the vertex view weights of the mesh vertex under each view.
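The visibility reset and per-vertex normalization can be sketched in vectorized form. The array layout (vertices on rows, views on columns) and the handling of vertices visible in no view are assumptions not stated in the claim:

```python
import numpy as np

def normalize_vertex_weights(raw_weights, visible):
    """Apply the visibility constraint, then normalize per vertex.

    raw_weights: (V, K) unnormalized weights for V vertices under K views.
    visible:     (V, K) boolean mask; False where a vertex is occluded or
                 outside the preset viewing-angle range for that view.
    Returns (V, K) weights that sum to 1 over the visible views of each
    vertex; rows with no visible view stay all zero.
    """
    w = np.where(visible, raw_weights, 0.0)   # reset invalid views to 0
    s = w.sum(axis=1, keepdims=True)
    return np.divide(w, s, out=np.zeros_like(w), where=s > 0)
```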
- 4. The method according to claim 1, wherein constructing the texture parameter domain of the three-dimensional mesh model and determining the set of texture sampling points to be synthesized in the texture parameter domain comprises: performing texture unwrapping on the three-dimensional mesh model to obtain a UV atlas, and dividing the UV atlas into a plurality of texture blocks, wherein each texture block consists of a group of adjacent triangle patches; determining a texture-map range corresponding to each texture block according to the size of the texture block's bounding box, and determining the texture resolution of the texture block; and, for each triangle patch in each texture block, establishing a correspondence between the UV vertex coordinates of the triangle patch in the UV atlas and the geometric vertex coordinates of the triangle patch, so as to ensure texture-mapping consistency.
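Deriving a block's map range and resolution from its UV bounding box might look as follows. The texel density `texels_per_unit` and the padding (a small gutter against bleeding at block borders) are illustrative values, not specified in the patent:

```python
import math

def block_resolution(uvs, texels_per_unit=256, padding=2):
    """Texture-map range and resolution for one texture block.

    uvs: list of (u, v) vertex coordinates of the block's triangles in the
    parameter domain. The bounding box is scaled by an assumed texel
    density and padded by a few texels on each side.
    """
    us = [u for u, _ in uvs]
    vs = [v for _, v in uvs]
    bbox = (min(us), min(vs), max(us), max(vs))
    w = math.ceil((bbox[2] - bbox[0]) * texels_per_unit) + 2 * padding
    h = math.ceil((bbox[3] - bbox[1]) * texels_per_unit) + 2 * padding
    return bbox, (w, h)
```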
- 5. The method according to claim 4, wherein, for any texture sampling point, determining the target triangle patch corresponding to the texture sampling point and calculating the barycentric coordinates of the texture sampling point in the target triangle patch specifically comprises: traversing the pixels of the texture map of each texture block to obtain a current texture sampling point; for the current texture sampling point, calculating its barycentric coordinates based on the UV vertex coordinates of the target triangle patch; and, when the barycentric coordinates satisfy a preset interior condition, determining that the current texture sampling point belongs to the target triangle patch.
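The barycentric computation and interior test use standard geometry; the tolerance `tol` for edge texels is an assumption (the claim only says "a preset interior condition"):

```python
def barycentric(p, a, b, c, eps=1e-12):
    """Barycentric coordinates of UV point p in triangle (a, b, c).

    Standard signed-area formulation; returns (alpha, beta, gamma) with
    alpha + beta + gamma = 1, or None for a degenerate triangle.
    """
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    if abs(den) < eps:                  # zero-area triangle
        return None
    alpha = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    beta = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return alpha, beta, 1.0 - alpha - beta

def inside(bary, tol=1e-9):
    """Interior test: the point belongs to the patch when all three
    coordinates are non-negative (within a small tolerance)."""
    return bary is not None and all(x >= -tol for x in bary)
```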
- 6. The method according to claim 1, wherein interpolating the vertex view weights of the three vertices of the target triangle patch according to the barycentric coordinates to obtain the view blending weights of the texture sampling point specifically comprises: taking the barycentric coordinates as interpolation coefficients, linearly combining the vertex view weights of the three vertices of the target triangle patch under each view to obtain the view blending weight of the texture sampling point for each view, and resetting the view blending weight corresponding to any invisible view to 0.
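The interpolation is a per-view linear combination with the barycentric coordinates as coefficients, followed by the invisibility reset. A sketch, with the per-point visibility mask assumed to come from the same occlusion test as claim 3:

```python
import numpy as np

def sample_view_weights(bary, vw_a, vw_b, vw_c, view_visible):
    """View blending weights for one texture sampling point.

    bary: barycentric coordinates (alpha, beta, gamma) of the point in its
    target triangle; vw_a/vw_b/vw_c: per-view vertex weights (length K)
    of the triangle's three vertices; view_visible: length-K boolean mask
    for the sampling point. Weights of invisible views are reset to 0.
    """
    a, b, g = bary
    w = a * np.asarray(vw_a) + b * np.asarray(vw_b) + g * np.asarray(vw_c)
    return np.where(np.asarray(view_visible), w, 0.0)
```

Because the coefficients vary linearly across the triangle, the blending weights change continuously from texel to texel, which is what suppresses seams between adjacent sampling points.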
- 7. The method according to claim 1, wherein mapping the texture sampling point to at least one view using the camera parameters according to the view blending weights, performing color sampling, and weighting and fusing the sampled colors to obtain the output color of the texture sampling point comprises: normalizing the view blending weights; projecting the texture sampling point onto the image-plane positions of all visible views using the camera parameters; performing interpolated sampling at the image-plane position in each visible view to obtain sampled colors; and weighting and fusing the sampled colors according to the normalized view blending weights to obtain the output color.
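A sketch of the normalize-project-sample-fuse step. The pinhole model `x = K (R X + t)` and bilinear interpolation are assumptions: the claim says only "camera parameters" and "interpolated sampling", and image-boundary clamping is simplified here:

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly sample image (H, W, 3) at continuous pixel (x, y)."""
    h, w = img.shape[:2]
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot

def fuse_color(point3d, weights, cams, images):
    """Weighted multi-view fusion for one sampling point.

    cams: per-view (K, R, t) pinhole parameters. Weights are normalized,
    the 3D point is projected into each view with nonzero weight,
    bilinearly sampled, and the samples are blended.
    """
    w = np.asarray(weights, float)
    if w.sum() <= 0:
        return np.zeros(3)          # no valid view: leave the texel empty
    w = w / w.sum()                 # weight normalization
    color = np.zeros(3)
    for wi, (K, R, t), img in zip(w, cams, images):
        if wi == 0.0:
            continue
        p = K @ (R @ point3d + t)   # project onto the image plane
        color += wi * bilinear(img, p[0] / p[2], p[1] / p[2])
    return color
```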
- 8. A multi-view texture synthesis apparatus, comprising: a data acquisition module configured to acquire a three-dimensional mesh model to be textured, a multi-view image set, and camera parameters corresponding to the multi-view image set, and to determine vertex view weights of mesh vertices of the three-dimensional mesh model under each view based on the three-dimensional mesh model and the camera parameters; a sampling-point determination module configured to construct a texture parameter domain of the three-dimensional mesh model and to determine a set of texture sampling points to be synthesized in the texture parameter domain; a blending-weight determination module configured to determine a target triangle patch corresponding to any texture sampling point, calculate the barycentric coordinates of the texture sampling point in the target triangle patch, and interpolate the vertex view weights of the three vertices of the target triangle patch according to the barycentric coordinates to obtain the view blending weight of the texture sampling point; an output-color determination module configured to map the texture sampling point to at least one view using the camera parameters according to the view blending weight, perform color sampling, and weight and fuse the sampled colors to obtain the output color of the texture sampling point; and a texture-map output module configured to generate a texture map based on the output colors of the texture sampling points in the texture parameter domain and to export the texture map.
- 9. An electronic device, comprising a processor, a memory, and a bus, wherein the memory stores machine-readable instructions executable by the processor; when the electronic device is in operation, the processor and the memory communicate via the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the multi-view texture synthesis method according to any one of claims 1 to 7.
- 10. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the steps of the multi-view texture synthesis method according to any one of claims 1 to 7.
Description
Multi-view texture synthesis method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the technical field of three-dimensional reconstruction, and in particular to a multi-view texture synthesis method and apparatus, an electronic device, and a storage medium.
Background
In applications such as three-dimensional reconstruction, digital twins, cultural-heritage digitization, and virtual/augmented reality, it is often necessary to generate high-quality textures for a three-dimensional mesh model so that the model surface has a realistic, continuous, and detailed appearance. Existing multi-view texture synthesis typically uses a set of images with known camera poses as texture sources, projects model surface points into several views for sampling, and selects or fuses colors among the multiple views by some strategy to form the final texture map. In practice, however, the view selection or weight allocation of multi-view texture synthesis is unstable. Some schemes select views for a patch or point based on a single factor only (such as distance, viewing angle, or sharpness), so the source views of adjacent pixels switch frequently, producing visible seams and color jumps within and between texture blocks and harming overall continuity. If weights are computed only for a whole patch or its center point, it is difficult to provide continuously varying fusion weights for each texture sampling point in the texture parameter domain, and artifacts such as local blurring, jaggies, or local tearing readily appear, especially on complex curved surfaces, at elongated triangles, or at view-coverage boundaries.
Disclosure of Invention
Embodiments of the present disclosure provide at least a multi-view texture synthesis method and apparatus, an electronic device, and a storage medium. Vertex view weights are built at the vertex level by integrating Euclidean distance and viewing angle; the vertex weights are interpolated into sampling-point-level view blending weights using the barycentric coordinates of texture sampling points relative to their target triangle patches in the texture parameter domain; and weighted fusion is then realized by combining visibility suppression, weight normalization, and multi-view projection with interpolated sampling. As a result, the fusion weights vary continuously within each patch, view jumps and seam artifacts between adjacent sampling points are reduced, the continuity and consistency of texture synthesis are improved, invalid-view interference is suppressed under occlusion or unfavorable viewing conditions, and a more stable and natural final texture map is obtained that facilitates texture-block packing and export.
The embodiment of the disclosure provides a multi-view texture synthesis method, which comprises the following steps: acquiring a three-dimensional mesh model to be textured, a multi-view image set, and camera parameters corresponding to the multi-view image set, and determining vertex view weights of mesh vertices of the three-dimensional mesh model under each view based on the three-dimensional mesh model and the camera parameters; constructing a texture parameter domain of the three-dimensional mesh model, and determining a set of texture sampling points to be synthesized in the texture parameter domain; for any texture sampling point, determining a target triangle patch corresponding to the texture sampling point, calculating the barycentric coordinates of the texture sampling point in the target triangle patch, and interpolating the vertex view weights of the three vertices of the target triangle patch according to the barycentric coordinates to obtain the view blending weight of the texture sampling point; mapping the texture sampling point to at least one view using the camera parameters according to the view blending weight, performing color sampling, and weighting and fusing the sampled colors to obtain an output color of the texture sampling point; and generating a texture map based on the output colors of the texture sampling points in the texture parameter domain, and exporting the texture map.
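The steps above can be wired together in a minimal end-to-end sketch on a single triangle. Everything beyond the claimed structure is a simplifying assumption: the parameter domain is the unit triangle rasterized onto a res x res map, vertex weights use an inverse-square distance term times a clamped cosine, all views are treated as visible, and nearest-pixel sampling stands in for interpolated sampling:

```python
import numpy as np

def synthesize_texture(verts, normals, cams, images, res=8):
    """End-to-end toy pipeline: vertex weights -> barycentric
    interpolation -> projection, sampling, and weighted fusion.

    cams: list of (K, R, t) pinhole parameters with x = K (R X + t).
    Returns a (res, res, 3) texture for one triangle's parameter domain.
    """
    n_views = len(cams)
    # Step 1: per-vertex view weights (distance and viewing-angle terms).
    vw = np.zeros((3, n_views))
    for i in range(3):
        for k, (Kmat, R, t) in enumerate(cams):
            c = -R.T @ t                        # camera center in world space
            v = c - verts[i]
            d = np.linalg.norm(v)
            vw[i, k] = max(np.dot(normals[i], v / d), 0.0) / (d * d)
    vw /= np.maximum(vw.sum(1, keepdims=True), 1e-12)
    # Steps 2-5: traverse texels, interpolate weights, project, fuse.
    tex = np.zeros((res, res, 3))
    for py in range(res):
        for px in range(res):
            a = (px + 0.5) / res
            b = (py + 0.5) / res
            g = 1.0 - a - b
            if g < 0:                           # texel outside the patch
                continue
            w = a * vw[0] + b * vw[1] + g * vw[2]
            w /= max(w.sum(), 1e-12)            # normalize blending weights
            X = a * verts[0] + b * verts[1] + g * verts[2]
            col = np.zeros(3)
            for k, (Kmat, R, t) in enumerate(cams):
                p = Kmat @ (R @ X + t)          # project to image plane
                u, v2 = int(p[0] / p[2]), int(p[1] / p[2])
                h, wd = images[k].shape[:2]
                if 0 <= u < wd and 0 <= v2 < h:
                    col += w[k] * images[k][v2, u]
            tex[py, px] = col
    return tex
```

Texels inside the triangle receive the weighted multi-view color; texels outside it are left black for the packing/export stage.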
In an optional implementation, determining vertex view weights of mesh vertices of the three-dimensional mesh model under each view based on the three-dimensional mesh model and the camera parameters specifically includes: for any mesh vertex and any viewpoint, calculating the Euclidean distance and the viewing angle between the mesh vertex and the viewpoint; constructing an unnormalized vertex view weight of the mesh vertex under the view based on the Euclidean distance and the viewing angle; the unnormalized vertex view weight is jointly determined by a distance term and an angle term, and adjustable parameters are introduced to respectively adjust the contribution