CN-121982184-A - Three-dimensional modeling method, apparatus, electronic device, storage medium, and program product
Abstract
The application discloses a three-dimensional modeling method, apparatus, electronic device, storage medium, and program product, and belongs to the technical field of three-dimensional modeling. The method comprises: obtaining, based on multiple frames of original images corresponding to a target scene, an initial three-dimensional model of the target scene and camera parameters corresponding to each original image; inputting the initial three-dimensional model and the camera parameters into a neural rendering model to obtain target texture data output by the neural rendering model; and processing a low-face-count three-dimensional model corresponding to the initial three-dimensional model based on the target texture data to obtain a target three-dimensional model of the target scene. The method improves image quality without relying on manual secondary creation, saves labor cost, reduces the data volume the neural rendering model must process, and achieves a high-fidelity rendering effect while keeping rendering smooth.
Inventors
- LIN Fei
- HAN Wei
- ZENG Yi
- JIANG Bin
- CHEN Jin
- LIU Fenger
- FAN Mengdi
- YANG Zhenyu
- ZHANG Xinran
- XIAO Li
Assignees
- 北京元客方舟科技有限公司 (Beijing Yuanke Fangzhou Technology Co., Ltd.)
- 中央广播电视总台 (China Media Group)
Dates
- Publication Date: 2026-05-05
- Application Date: 2025-12-09
Claims (11)
- 1. A three-dimensional modeling method, comprising: obtaining, based on multiple frames of original images corresponding to a target scene, an initial three-dimensional model corresponding to the target scene and camera parameters corresponding to each original image; inputting the initial three-dimensional model and the camera parameters into a neural rendering model to obtain target texture data output by the neural rendering model; and processing a low-face-count three-dimensional model corresponding to the initial three-dimensional model based on the target texture data to obtain a target three-dimensional model corresponding to the target scene.
- 2. The three-dimensional modeling method according to claim 1, wherein the obtaining the target texture data output by the neural rendering model comprises: dividing the initial three-dimensional model into a plurality of sub-patches by adopting a meta-representation, and generating latent codes corresponding to the sub-patches; processing the latent codes based on the camera parameters to obtain theoretical rendering colors corresponding to the target texture data; and obtaining the target texture data based on the theoretical rendering colors and the camera parameters.
- 3. The three-dimensional modeling method according to claim 2, wherein the generating latent codes corresponding to each of the sub-patches comprises: generating initial latent codes corresponding to the sub-patches; and correcting each initial latent code by adopting a meta-deformation manifold based on the camera parameters to obtain each latent code.
- 4. The method according to claim 3, wherein the correcting each of the initial latent codes using the meta-deformation manifold based on the camera parameters to obtain each of the latent codes comprises: calculating offset information corresponding to each initial latent code based on coordinate information corresponding to each sub-patch and a camera ray direction, wherein the camera ray direction is determined based on the camera parameters; performing offset processing on each initial latent code based on the corresponding offset information to obtain a meta-deformation embedding; and performing surface mapping processing on the meta-deformation embedding to obtain the latent code.
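The offset-correction procedure in claims 3 and 4 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the two networks (`mlp_offset`, `mlp_surface`) and all tensor shapes are hypothetical stand-ins for the learned meta-deformation manifold and surface-mapping step.

```python
import numpy as np

def correct_latent_codes(initial_codes, patch_coords, cam_origin,
                         mlp_offset, mlp_surface):
    """Sketch of claims 3-4: offset each initial latent code by a deformation
    predicted from patch coordinates and the camera ray direction, then apply
    a surface-mapping network to obtain the final latent codes."""
    # Camera ray direction from the camera center to each sub-patch (unit vectors).
    rays = patch_coords - cam_origin
    rays = rays / np.linalg.norm(rays, axis=-1, keepdims=True)
    # Offset information per code (hypothetical learned network stand-in).
    offsets = mlp_offset(np.concatenate([patch_coords, rays], axis=-1))
    deform_embed = initial_codes + offsets        # meta-deformation embedding
    return mlp_surface(deform_embed)              # surface-mapped latent codes

# Toy stand-ins for the two learned networks.
rng = np.random.default_rng(0)
W_off = rng.standard_normal((6, 16)) * 0.01
mlp_offset = lambda x: x @ W_off
mlp_surface = lambda z: np.tanh(z)

codes = rng.standard_normal((32, 16))             # one code per sub-patch
coords = rng.random((32, 3))                      # sub-patch coordinates
latent = correct_latent_codes(codes, coords, np.zeros(3), mlp_offset, mlp_surface)
```

The `tanh` surface mapping keeps the corrected codes bounded; the real surface-mapping network is not described at this level of detail in the claims.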
- 5. The three-dimensional modeling method according to claim 2, wherein the processing the latent codes based on the camera parameters to obtain the theoretical rendering colors corresponding to the target texture data comprises: decoding the latent codes to obtain geometric attribute information corresponding to the sub-patches, wherein the geometric attribute information comprises a surface normal, a diffuse reflection color, a specular albedo, and a high-dimensional spatial feature; calculating a specular reflectance based on the surface normal, the specular albedo, and the high-dimensional spatial feature; and obtaining the theoretical rendering color based on the diffuse reflection color, the specular reflectance, and the camera parameters.
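A minimal sketch of the shading step in claim 5, assuming a simple additive diffuse-plus-specular model; the specular branch (`spec_mlp`) is a hypothetical stand-in for whatever learned network the patent uses to combine the surface normal, specular albedo, and high-dimensional feature:

```python
import numpy as np

def theoretical_color(normal, diffuse, spec_albedo, features, view_dir, spec_mlp):
    """Sketch of claim 5: a learned specular branch predicts view-dependent
    specular reflectance, which is added to the diffuse reflection color."""
    normal = normal / np.linalg.norm(normal)
    view_dir = view_dir / np.linalg.norm(view_dir)
    # Specular reflectance from normal, specular albedo, and high-dim features.
    spec = spec_albedo * spec_mlp(np.concatenate([normal, features, view_dir]))
    return np.clip(diffuse + spec, 0.0, 1.0)

# Toy sigmoid stand-in for the learned specular network.
spec_mlp = lambda x: 1.0 / (1.0 + np.exp(-x.sum()))

color = theoretical_color(
    normal=np.array([0.0, 0.0, 1.0]),
    diffuse=np.array([0.5, 0.4, 0.3]),
    spec_albedo=0.2,
    features=np.zeros(8),
    view_dir=np.array([0.0, 0.0, 1.0]),
    spec_mlp=spec_mlp,
)
```

The view direction enters through the camera parameters; here it is passed directly for brevity.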
- 6. The three-dimensional modeling method according to claim 2, wherein the obtaining the target texture data based on the theoretical rendering colors and the camera parameters comprises: acquiring at least one triangle of the initial three-dimensional model intersected by the camera ray direction, wherein a theoretical rendering color corresponds to the intersection point of each intersected triangle; performing aggregation processing on the at least one theoretical rendering color to obtain a target rendering color corresponding to the camera ray direction; and obtaining the target texture data based on the target rendering color.
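The aggregation step in claim 6 might look like the following; the weighted-average scheme is an assumption, since the claim only specifies that the theoretical colors at the intersections of a camera ray with the mesh triangles are aggregated into one target color per ray:

```python
import numpy as np

def aggregate_ray_color(hit_colors, hit_weights=None):
    """Sketch of claim 6: combine the theoretical colors of all triangles a
    camera ray intersects into one target rendering color. The weighting
    scheme here is illustrative; the patent does not specify it."""
    hit_colors = np.asarray(hit_colors, dtype=float)
    if hit_weights is None:
        hit_weights = np.ones(len(hit_colors))    # plain average by default
    w = np.asarray(hit_weights, dtype=float)
    w = w / w.sum()                               # normalize the weights
    return (w[:, None] * hit_colors).sum(axis=0)

colors = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]       # two ray-triangle hits
target = aggregate_ray_color(colors, hit_weights=[0.75, 0.25])
```

In practice the weights might come from visibility or depth ordering along the ray; any such choice is outside what the claim states.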
- 7. The three-dimensional modeling method according to any one of claims 1 to 6, wherein the processing the low-face-count three-dimensional model corresponding to the initial three-dimensional model based on the target texture data to obtain the target three-dimensional model corresponding to the target scene comprises: generating a target texture map based on the target texture data; and adding the target texture map to the low-face-count three-dimensional model to obtain the target three-dimensional model.
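Claim 7's texture-baking step can be illustrated as below. The square-tile UV layout is a hypothetical choice; the patent does not specify how the texture data is packed into the target texture map:

```python
import numpy as np

def bake_texture_map(texture_data, resolution=64):
    """Sketch of claim 7: pack per-face texture data into a 2D texture map.
    The square-tile layout is an illustrative assumption."""
    atlas = np.zeros((resolution, resolution, 3))
    tiles = int(np.ceil(np.sqrt(len(texture_data))))  # tiles per side
    tile = resolution // tiles                        # pixels per tile
    for i, rgb in enumerate(texture_data):
        r, c = divmod(i, tiles)
        atlas[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile] = rgb
    return atlas

def texture_low_poly(low_poly_mesh, atlas):
    """Attach the baked map to the decimated mesh to form the target model."""
    return {"mesh": low_poly_mesh, "texture_map": atlas}

data = np.random.rand(9, 3)                           # one RGB per face
model = texture_low_poly({"faces": 9}, bake_texture_map(data))
```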
- 8. A three-dimensional modeling apparatus, comprising: a first processing module, configured to obtain, based on multiple frames of original images corresponding to a target scene, an initial three-dimensional model corresponding to the target scene and camera parameters corresponding to each original image; a second processing module, configured to input the initial three-dimensional model and the camera parameters into a neural rendering model to obtain target texture data output by the neural rendering model; and a third processing module, configured to process a low-face-count three-dimensional model corresponding to the initial three-dimensional model based on the target texture data to obtain a target three-dimensional model corresponding to the target scene.
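The three-module apparatus of claim 8 maps naturally onto a small orchestrating class; the callables below are placeholders for the actual reconstruction, neural rendering, and texturing implementations, none of which are specified by this sketch:

```python
class ThreeDModelingApparatus:
    """Sketch of claim 8: three processing modules wired as one apparatus."""

    def __init__(self, reconstruct, neural_render, texture_low_poly):
        self.first_processing = reconstruct        # images -> model, cameras
        self.second_processing = neural_render     # model, cameras -> textures
        self.third_processing = texture_low_poly   # low-poly + textures -> target

    def run(self, original_images, decimate):
        model, cameras = self.first_processing(original_images)
        textures = self.second_processing(model, cameras)
        return self.third_processing(decimate(model), textures)

# Tiny stand-ins just to show the data flow through the three modules.
apparatus = ThreeDModelingApparatus(
    reconstruct=lambda imgs: ("high_poly_mesh", ["cam"] * len(imgs)),
    neural_render=lambda m, c: ["rgb"] * len(c),
    texture_low_poly=lambda mesh, tex: {"mesh": mesh, "textures": tex},
)
target = apparatus.run(["f0.png", "f1.png"], decimate=lambda m: "low_poly_mesh")
```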
- 9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the three-dimensional modeling method of any one of claims 1-7.
- 10. A non-transitory computer readable storage medium, having stored thereon a computer program, which when executed by a processor implements the three-dimensional modeling method according to any of claims 1-7.
- 11. A computer program product comprising a computer program, characterized in that the computer program, when executed by a processor, implements the three-dimensional modeling method according to any of claims 1-7.
Description
Three-dimensional modeling method, apparatus, electronic device, storage medium, and program product

Technical Field
The present application belongs to the technical field of three-dimensional modeling, and in particular relates to a three-dimensional modeling method, apparatus, electronic device, storage medium, and program product.

Background
Currently, digital twin technology that takes immersive experience as its main service mode usually displays a three-dimensional scene with a real-world look to an audience through real-time rendering. The underlying technology supporting the real-time rendering effect is three-dimensional modeling, and most of the model data needed to construct an immersive scene comes from real-scene three-dimensional modeling. Achieving high-fidelity presentation of the geometry and texture of a real-scene model is therefore a key data foundation for immersive experience. In the related art, models can be produced through CG (computer graphics), but this approach must process a large amount of data to support an immersive experience, so the model volume is large; excessive content causes the scene to stutter when processed by the engine, and improving image quality requires manual secondary creation of the real-scene model, which consumes substantial labor cost.

Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art. Therefore, the invention provides a three-dimensional modeling method, apparatus, electronic device, storage medium, and program product, which improve image quality without relying on manual secondary creation, save labor cost, reduce the data volume the neural rendering model must process, and achieve a high-fidelity rendering effect while keeping rendering smooth.
In a first aspect, the present application provides a three-dimensional modeling method, including: obtaining, based on multiple frames of original images corresponding to a target scene, an initial three-dimensional model corresponding to the target scene and camera parameters corresponding to each original image; inputting the initial three-dimensional model and the camera parameters into a neural rendering model to obtain target texture data output by the neural rendering model; and processing a low-face-count three-dimensional model corresponding to the initial three-dimensional model based on the target texture data to obtain a target three-dimensional model corresponding to the target scene. According to the three-dimensional modeling method provided by the embodiments of the application, high-quality target texture data is generated by inputting the high-face-count initial three-dimensional model into the neural rendering model, and texture mapping is then applied to the low-face-count three-dimensional model corresponding to the initial three-dimensional model based on the target texture data to obtain the target three-dimensional model. Image quality is thus improved without relying on manual secondary creation, labor cost is saved, the data volume the neural rendering model must process is reduced, and a high-fidelity rendering effect is achieved while keeping rendering smooth.
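As a rough end-to-end illustration of the first aspect, the three stages can be wired together as below. All three stages are random or trivial stand-ins (reconstruction, the neural rendering model, and mesh decimation are not implemented here), so this only shows the data flow, not the patent's method:

```python
import numpy as np

def build_initial_model(original_images):
    """Hypothetical stand-in for multi-view reconstruction: returns a
    high-face-count mesh (vertices, faces) and per-image camera parameters."""
    cameras = [{"pose": np.eye(4), "focal": 1000.0} for _ in original_images]
    vertices = np.random.rand(100, 3)
    faces = np.random.randint(0, 100, size=(200, 3))
    return (vertices, faces), cameras

def neural_render_textures(initial_model, cameras):
    """Hypothetical neural rendering stage: mesh + cameras -> texture data."""
    vertices, faces = initial_model
    return np.clip(np.random.rand(len(faces), 3), 0.0, 1.0)  # one RGB per face

def apply_textures_to_low_poly(low_poly_model, texture_data):
    """Attach the texture data to the decimated (low-face-count) mesh."""
    return {"mesh": low_poly_model, "textures": texture_data}

images = [f"frame_{i}.png" for i in range(8)]
model, cams = build_initial_model(images)
textures = neural_render_textures(model, cams)
low_poly = (model[0][:50], model[1][:60])      # stand-in for mesh decimation
target = apply_textures_to_low_poly(low_poly, textures)
```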
According to one embodiment of the present application, the obtaining the target texture data output by the neural rendering model includes: dividing the initial three-dimensional model into a plurality of sub-patches by adopting a meta-representation, and generating latent codes corresponding to the sub-patches; processing the latent codes based on the camera parameters to obtain theoretical rendering colors corresponding to the target texture data; and obtaining the target texture data based on the theoretical rendering colors and the camera parameters. According to one embodiment, the generating latent codes corresponding to the sub-patches includes: generating initial latent codes corresponding to the sub-patches; and correcting each initial latent code by adopting a meta-deformation manifold based on the camera parameters to obtain each latent code. According to one embodiment, the correcting each of the initial latent codes by using the meta-deformation manifold based on the camera parameters to obtain each of the latent codes includes: calculating offset information corresponding to each initial latent code based on coordinate information corresponding to each sub-patch and a camera ray direction, wherein the camera ray direction is determined based on the camera parameters; performing offset proc