EP-4068216-B1 - 3D DIGITAL MODEL SURFACE RENDERING AND CONVERSION
Inventors
- COFFEY, DANE M.
- SCERBO, SIROBERTO
- GOLDBERG, EVAN M.
- SCHROERS, CHRISTOPHER RICHARD
- BAKER, DANIEL L.
- MINE, MARK R.
- DOGGETT, ERIKA VARIS
Dates
- Publication Date
- 2026-05-06
- Application Date
- 2022-02-28
Claims (12)
- A method comprising: receiving a three-dimensional (3D) digital model (134, 334, 434) represented by a mesh having a first mesh element count; surrounding the 3D digital model (134, 334, 434) with a plurality of virtual cameras (340a, 340b, 440a, 440b) oriented toward the 3D digital model (134, 334, 434); generating, using the plurality of virtual cameras (340a, 340b, 440a, 440b), a plurality of renders of the 3D digital model (134, 334, 434); generating a UV texture coordinate space for a surface projection (146, 546) of the 3D digital model (134, 334, 434) having a reduced mesh element count with respect to the first mesh element count; transferring, using the plurality of renders, lighting color values for each of a plurality of surface portions of the 3D digital model (134, 334, 434) to the UV texture coordinate space to produce the surface projection (146, 546) of the 3D digital model (134, 334, 434) having the reduced mesh element count; and displaying the surface projection (146, 546) of the 3D digital model (134, 334, 434) having the reduced mesh element count on a display (112, 132).
- The method of claim 1, wherein the lighting color values comprise view independent lighting color values, and wherein transferring the lighting color values to the UV texture coordinate space further comprises: sampling, using the plurality of renders, a plurality of red-green-blue (RGB) color values corresponding respectively to each of the plurality of surface portions of the 3D digital model (134, 334, 434); and omitting view dependent lighting color values corresponding to light that is at least one of reflected or refracted from one or more of the plurality of surface portions from the RGB color values corresponding to the one or more of the plurality of surface portions to produce the lighting color values for transferring to the UV texture coordinate space.
- The method of claim 1 or 2, wherein the lighting color values are transferred to the UV texture coordinate space as a beauty lighting layer of the surface projection (146, 546) of the 3D digital model (134, 334, 434).
- The method of at least one of claims 1 to 3, further comprising: transferring the omitted view dependent lighting color values to the UV texture coordinate space as a view dependent lighting layer of the surface projection (146, 546) of the 3D digital model (134, 334, 434).
- The method of at least one of claims 1 to 4, wherein before generating the plurality of renders of the 3D digital model (134, 334, 434), the method further comprises: determining from among the plurality of virtual cameras (340a, 340b, 440a, 440b), a respective best virtual camera (340a, 340b, 440a, 440b) for rendering each of the plurality of surface portions of the 3D digital model (134, 334, 434); wherein generating the plurality of renders of the 3D digital model (134, 334, 434) comprises rendering each of the plurality of surface portions using the respective best virtual camera (340a, 340b, 440a, 440b).
- The method of claim 5, wherein the respective best virtual camera (340a, 340b, 440a, 440b) is determined based on at least one of a pixel density of each of the surface portions from the perspective of the respective best virtual camera (340a, 340b, 440a, 440b) or a lens orientation of the respective best virtual camera (340a, 340b, 440a, 440b) relative to an axis perpendicular to each of the surface portions.
- The method of at least one of claims 1 to 6, further comprising: obtaining, by using the plurality of renders, at least one additional arbitrary output variable (AOV) layer describing the 3D digital model (134, 334, 434); and transferring a plurality of AOV values from the at least one additional AOV layer to the UV texture coordinate space.
- The method of claim 7, wherein the plurality of AOV values comprise UV texture coordinates of the 3D digital model (134, 334, 434).
- The method of claim 7 or 8, wherein the at least one additional AOV layer comprises at least one of an ambient occlusion layer or a normal vector layer.
- The method of at least one of claims 1 to 9, wherein the surface projection (146, 546) of the 3D digital model (134, 334, 434) includes one or more surface voids, the method further comprising: inpainting the one or more surface voids.
- The method of claim 10, wherein inpainting each of the one or more surface voids is performed using a partial differential equation (PDE) interpolation technique based on one or more lighting color values at pixels of the surface projection (146, 546) of the 3D digital model (134, 334, 434) at a boundary of each of the one or more surface voids.
- An image processing system (100) comprising: a computing platform (102) having a processing hardware (104), and a system memory (106) storing a software code (110); the processing hardware (104) being adapted to execute the software code (110) to perform the method of any of claims 1-11.
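The camera-placement and best-camera steps recited in claims 1, 5, and 6 can be illustrated with a minimal sketch. All function names, the Fibonacci-sphere placement, and the dot-product scoring below are illustrative assumptions, not details disclosed in the patent; the claims require only that cameras surround the model and that a best camera may be chosen per surface portion based on pixel density or lens orientation relative to the surface normal.

```python
import math

def fibonacci_sphere_cameras(n):
    # Hypothetical helper: distribute n virtual camera positions on a unit
    # sphere surrounding the 3D digital model (claim 1: "surrounding the 3D
    # digital model with a plurality of virtual cameras").
    cams = []
    golden = math.pi * (3.0 - math.sqrt(5.0))
    for i in range(n):
        y = 1.0 - 2.0 * (i + 0.5) / n
        r = math.sqrt(1.0 - y * y)
        theta = golden * i
        cams.append((r * math.cos(theta), y, r * math.sin(theta)))
    return cams

def best_camera(normal, cams):
    # Claim 6 (lens-orientation criterion, simplified): each camera looks
    # toward the origin, so the camera whose position is most aligned with
    # the outward surface normal views that surface portion most head-on.
    return max(cams, key=lambda p: sum(a * b for a, b in zip(normal, p)))
```

A fuller implementation would also weigh the pixel density each camera achieves on the surface portion, the other criterion named in claim 6.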
Description
BACKGROUND

The process for creating and rendering the types of three-dimensional (3D) digital assets used in content production is complicated and very often proprietary to the studio producing the asset. Complex custom pipelines that involve many steps and the participation of many artists are typically needed to generate the final image render. Production tools and shaders are developed in-house. Even entirely proprietary rendering engines, which are functional only within the specific pipeline of a particular studio, may be used to produce the final render of the asset. Moreover, a production pipeline within the same studio typically evolves to meet the needs of each new production project. Therefore, an asset produced using a legacy version of a particular pipeline may not be compatible with that pipeline in its evolved form. KR 101817753B1 discloses a polynomial texture mapping (PTM) generating system for improving the shape of a 3D model, and a PTM generating method using the same, capable of generating PTM that improves the definition and surface shape of a 3D model generated through 3D scanning of a cultural heritage object. The PTM generating system comprises: a 3D model obtaining unit; an information extracting unit; and a PTM information generating unit. The 3D model obtaining unit obtains the 3D model for a specific object. The information extracting unit sets a virtual camera and virtual lamps based on the obtained 3D model and extracts texture information and depth information of the object from the obtained 3D model by using the set virtual camera and virtual lamps. The PTM information generating unit generates PTM information by combining the texture information and the depth information. For the foregoing reasons, achieving interoperability of 3D digital assets among studios, or among different pipeline versions within the same studio, can be a difficult and often labor-intensive process.
Additionally, taking an off-line rendered digital asset and recreating an accurate representation of it in a real-time game engine pipeline presents further difficulties. In many cases the original model, shaders, and renders can only be used for visual reference, and the asset must be entirely recreated in a manual process. Due to its heavy reliance on human participation, this manual re-modeling, re-texturing, and re-shading is both undesirably costly and time consuming. It also risks a result that does not accurately match the original art direction, because simplifications may be required and because some aspects are only generated at render time, such as displacement maps that offset the surface of a mesh, thereby changing its shape, and procedural geometries that are generated with code as opposed to being created externally and loaded, to name merely two examples. Consequently, there is a need in the art for a solution for converting high complexity digital assets produced by a particular studio, or using a particular pipeline, so as to be interoperable with other studios, pipelines, and platforms. The present invention provides a method and a system according to the independent claims to address this need.
Examples and embodiments of the present invention are as follows: An image processing system comprising: a computing platform including a processing hardware, a display, and a system memory storing a software code; the processing hardware configured to execute the software code to: receive a three-dimensional (3D) digital model represented by a mesh having a first mesh element count; surround the 3D digital model with a plurality of virtual cameras oriented toward the 3D digital model; generate, using the plurality of virtual cameras, a plurality of renders of the 3D digital model; generate a UV texture coordinate space for a surface projection of the 3D digital model having a reduced mesh element count with respect to the first mesh element count; transfer, using the plurality of renders, lighting color values for each of a plurality of surface portions of the 3D digital model to the UV texture coordinate space to produce the surface projection of the 3D digital model having the reduced mesh element count; and display the surface projection of the 3D digital model having the reduced mesh element count on the display. According to an embodiment, the image processing system, wherein the lighting color values comprise view independent lighting color values, and wherein to transfer the lighting color values to the UV texture coordinate space, the processing hardware is further configured to execute the software code to: sample, using the plurality of renders, a plurality of red-green-blue (RGB) color values corresponding respectively to each of the plurality of surface portions of the 3D digital model; and omit view dependent lighting color values corresponding to light that is at least one of reflected or refracted from one or more of the plurality of surface portions from the RGB color values corresponding to the one or more of the plurality of surface portions to produce the lighting color values for transferring to the UV texture coordinate space.
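The view-independent transfer in the embodiment above can be sketched as a per-pixel operation, assuming the renderer exposes the reflected and refracted contributions as separate layers (arbitrary output variables, per claims 7 to 9). The function name and the additive beauty = diffuse + specular + refraction decomposition are illustrative assumptions; the patent specifies only that view dependent values are omitted from the sampled RGB values.

```python
def bake_view_independent(beauty_rgb, specular_rgb, refraction_rgb):
    # Per pixel: subtract view dependent lighting (reflected and refracted
    # contributions) from the sampled beauty RGB, leaving view independent
    # color values to transfer into the UV texture coordinate space.
    out = []
    for b, s, r in zip(beauty_rgb, specular_rgb, refraction_rgb):
        out.append(tuple(max(0.0, bc - sc - rc)
                         for bc, sc, rc in zip(b, s, r)))
    return out
```

Under claims 3 and 4, the subtracted view dependent values need not be discarded: they can be written to the UV texture coordinate space as a separate view dependent lighting layer alongside the beauty lighting layer.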