CN-122023629-A - Efficient high-fidelity rendering method for large-scale CAE simulation data
Abstract
The invention relates to a high-efficiency, high-fidelity rendering method for large-scale CAE simulation data that achieves efficient watertight rendering through two core stages. First, watertight texture reconstruction: each base mesh patch is parameterized into an axis-aligned square texture domain, texture coordinates are converted to integers to avoid floating-point precision errors, and manual bilinear interpolation ensures that the boundary displacement values of adjacent mesh patches are consistent. Second, adaptive subdivision factor calculation: screen-space pixel error is mapped to world-space geometric error and, combined with each patch's unique identification code as a feature, a three-layer fully connected neural network predicts the optimal subdivision factors; the subdivision factors of edges shared by adjacent patches are averaged and then rounded up. The invention achieves real-time, crack-free rendering of large-scale models, balances visual fidelity and computational efficiency, is highly compatible with standard hardware tessellation pipelines, and can be widely applied in computer graphics, virtual reality, digital content creation, and related fields.
Inventors
- XU JIAMIN
- XU JIANGJIE
- XU GANG
- TANG YUEHUI
Assignees
- Hangzhou Dianzi University (杭州电子科技大学)
Dates
- Publication Date: 20260512
- Application Date: 20260416
Claims (9)
- 1. A high-efficiency, high-fidelity rendering method for large-scale CAE simulation data, characterized by comprising a watertight texture reconstruction stage and an adaptive subdivision factor calculation stage: the watertight texture reconstruction stage simplifies the original high-poly mesh into a low-poly mesh, allocates an independent axis-aligned square texture domain to each quadrilateral patch, converts texture coordinates into integer pixel coordinates, generates a displacement map and a normal map through bilinear interpolation sampling and baking, and ensures that the boundary displacement values of adjacent patches are consistent; the adaptive subdivision factor calculation stage maps screen-space pixel error to world-space geometric error, combines each patch's unique identification code as a feature, predicts subdivision factors through a three-layer fully connected neural network to obtain the edge subdivision factors, and averages the subdivision factors of edges shared by adjacent patches before rounding up; the discrete mesh model is then rendered based on the displacement map, the normal map, and the final subdivision factors, achieving watertight, efficient tessellated rendering of the large-scale discrete model.
- 2. The method for efficient, high-fidelity rendering of large-scale CAE simulation data according to claim 1, wherein the watertight texture reconstruction stage comprises the following steps: preprocessing the high-poly mesh by simplifying the face count, repairing non-manifold structures, generating quadrilateral patches, and allocating an independent square texture domain to each patch; converting texture coordinates into integers and manually implementing sampling and interpolation based on integer pixel indices; and baking and sampling, i.e., computing the mesh position and normal corresponding to each pixel and generating the displacement map and normal map through ray intersection.
- 3. The method for efficient, high-fidelity rendering of large-scale CAE simulation data according to claim 1, wherein the adaptive subdivision factor calculation stage specifically comprises the following steps: calculating the maximum world-space geometric error corresponding to the screen-space pixel error; extracting each patch's feature vector and feeding it into the three-layer fully connected neural network model to obtain an initial subdivision factor; and determining the final subdivision factors for shared edges and internal edges.
- 4. The method for high-efficiency, high-fidelity rendering of large-scale CAE simulation data according to claim 2, wherein preprocessing the high-poly mesh in the watertight texture reconstruction stage comprises: simplifying the mesh face count, repairing non-manifold structures, pairing triangular faces into quadrilaterals, discarding the original UV parameterization, and allocating a non-overlapping rectangular tile to each quadrilateral; the boundary of each quadrilateral is strictly aligned with the four sides of its square texture domain, the shared 3D boundary of adjacent patches corresponds to a side in each patch's respective texture domain, and the sampling rules on both sides are exactly equivalent.
- 5. The method for efficient, high-fidelity rendering of large-scale CAE simulation data according to claim 2, wherein during baking and sampling in the watertight texture reconstruction stage, an axis-aligned pixel bounding box surrounding the quadrilateral patch is computed, each pixel in the bounding box is traversed, the mesh position and normal corresponding to each pixel are computed through bilinear interpolation, and the results are written into the baked data.
- 6. The method for efficient, high-fidelity rendering of large-scale CAE simulation data as recited in claim 3, wherein in the adaptive subdivision factor calculation stage, the preset pixel threshold p is 1 pixel, used to determine the maximum allowable geometric deviation in world space.
- 7. The method for high-efficiency, high-fidelity rendering of large-scale CAE simulation data according to claim 3, wherein the three-layer fully connected neural network model consists of an input layer, a hidden layer, and an output layer; it takes the feature vector as input and outputs a continuous subdivision factor.
- 8. The method for efficient, high-fidelity rendering of large-scale CAE simulation data according to claim 1, wherein the maximum world-space geometric error ε corresponding to the screen-space pixel error is calculated as: ε = 2 · z · tan(FOV/2) · p / H_px; where p is the pixel threshold, FOV is the camera's vertical field of view, H_px is the number of pixels in the vertical direction of the screen, and z is the depth of the patch in camera space.
- 9. The method for efficient, high-fidelity rendering of large-scale CAE simulation data as recited in claim 7, wherein the input feature vector of the three-layer fully connected neural network comprises the unique identification code assigned to the patch and the maximum screen-space pixel geometric error, which are fed into the MLP model.
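The shared-edge rule in claim 1 (average the two patches' subdivision factors, then round up) can be sketched minimally in Python. The function name and signature are illustrative, not taken from the patent; the key point is that both patches compute the same integer for their shared edge, which is what prevents T-junction cracks.

```python
import math

def edge_subdivision_factor(factor_a: float, factor_b: float) -> int:
    """Average the continuous subdivision factors predicted for the two
    patches sharing an edge, then round up, so both sides tessellate the
    shared edge with the same number of segments (no cracks)."""
    return math.ceil((factor_a + factor_b) / 2.0)

# Both patches evaluate this symmetrically, so the shared edge agrees:
# edge_subdivision_factor(3.2, 4.5) == edge_subdivision_factor(4.5, 3.2)
```

Rounding up rather than to the nearest integer errs on the side of more detail, so the averaged edge never under-tessellates relative to the pixel-error budget.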
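The integer-coordinate sampling of claim 2 can be illustrated as follows. This is a sketch under our own assumptions (helper names `uv_to_int_pixel` and `bilinear_int` are hypothetical): normalized UVs are snapped to integer texel indices, and the bilinear weights are carried as fixed-point integers, so two adjacent patches sampling the same boundary texels with the same integer inputs get bit-identical results, avoiding the floating-point round-off that breaks watertightness.

```python
def uv_to_int_pixel(u: float, v: float, size: int):
    # Snap normalized UV to exact integer texel indices so adjacent
    # patches sampling a shared boundary land on the same texel.
    x = min(int(round(u * (size - 1))), size - 1)
    y = min(int(round(v * (size - 1))), size - 1)
    return x, y

def bilinear_int(tex, x, y, fx_q, fy_q, q=256):
    """Manual bilinear interpolation with fixed-point fractional weights
    (numerators fx_q, fy_q over denominator q): identical integer inputs
    always yield identical outputs on both sides of a seam."""
    h, w = len(tex), len(tex[0])
    x1, y1 = min(x + 1, w - 1), min(y + 1, h - 1)
    w00 = (q - fx_q) * (q - fy_q)
    w10 = fx_q * (q - fy_q)
    w01 = (q - fx_q) * fy_q
    w11 = fx_q * fy_q
    s = (tex[y][x] * w00 + tex[y][x1] * w10 +
         tex[y1][x] * w01 + tex[y1][x1] * w11)
    return s / float(q * q)
```

The design choice here mirrors the claim: all seam-sensitive arithmetic happens on integers, and floating point enters only in the final division, which is deterministic for equal integer inputs.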
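The baking traversal of claim 5 can be sketched as below. This is a simplified stand-in (names are our own): it traverses every pixel of an n x n axis-aligned tile and bilinearly interpolates the quad's corner positions to obtain the base-surface point for that pixel; in the actual pipeline that point would then seed a ray intersection against the high-poly mesh to bake displacement and normal values.

```python
def bake_base_positions(c00, c10, c01, c11, n):
    """Traverse an n x n axis-aligned pixel tile; for each pixel center,
    bilinearly interpolate the quad corners c00, c10, c01, c11 (3D
    tuples) to get the base-surface position for that pixel."""
    out = []
    for j in range(n):
        v = (j + 0.5) / n          # pixel-center V coordinate
        row = []
        for i in range(n):
            u = (i + 0.5) / n      # pixel-center U coordinate
            p = tuple(
                (1 - u) * (1 - v) * a + u * (1 - v) * b
                + (1 - u) * v * c + u * v * d
                for a, b, c, d in zip(c00, c10, c01, c11)
            )
            row.append(p)
        out.append(row)
    return out
```

Because the quad boundary is strictly aligned with the four sides of the tile (claim 4), pixel centers on a shared 3D boundary interpolate the same corner data in both adjacent tiles, which is what makes the baked maps consistent across patches.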
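The network shape described in claim 7 (input layer, one hidden layer, output layer, producing a continuous subdivision factor) is a plain multilayer perceptron. A minimal forward-pass sketch follows; the weights are illustrative placeholders, not the trained model from the patent, and ReLU is assumed as the hidden activation since the patent does not specify one.

```python
def mlp_forward(x, w1, b1, w2, b2):
    """Three-layer fully connected network: input -> hidden (ReLU) ->
    scalar output (the continuous subdivision factor).
    w1: hidden x input weight matrix (list of rows), b1: hidden biases,
    w2: output weights over hidden units, b2: output bias."""
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return sum(w * hi for w, hi in zip(w2, h)) + b2
```

At render time the continuous output would be turned into integer tessellation levels by the shared-edge averaging and ceiling rule of claim 1.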
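The screen-to-world error mapping of claims 6 and 8 is the standard perspective-projection relation: a segment of length ε at depth z projects to ε · H_px / (2 z tan(FOV/2)) pixels, so inverting it gives the largest world-space deviation still within the p-pixel budget. A direct sketch (function name is our own):

```python
import math

def max_world_space_error(p_px: float, fov_rad: float,
                          h_px: int, z: float) -> float:
    """World-space length that projects to p_px pixels at camera-space
    depth z, for vertical field of view fov_rad and a screen that is
    h_px pixels tall (perspective projection)."""
    return p_px * 2.0 * z * math.tan(fov_rad / 2.0) / h_px
```

With the p = 1 pixel threshold of claim 6, this grows linearly with depth: distant patches tolerate larger geometric deviation and therefore receive smaller subdivision factors.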
Description
Efficient high-fidelity rendering method for large-scale CAE simulation data

Technical Field

The invention relates to a high-efficiency, high-fidelity rendering method for large-scale CAE (computer-aided engineering) simulation data, suitable for real-time rendering, virtual reality, digital content creation, and other scenarios requiring high-quality three-dimensional model visualization, and belongs to the technical fields of computer graphics and computational geometry.

Background

Discrete meshes have become the mainstream standard for three-dimensional geometric representation and rendering due to their regular structure and broad compatibility, and are widely used in computer graphics, virtual reality (VR), augmented reality (AR), digital content creation (DCC), and related fields. In three-dimensional model rendering, geometric detail is often enhanced through high-resolution meshes or auxiliary texture techniques (e.g., displacement mapping) to improve visual realism, but such methods face a persistent three-way trade-off among accuracy, efficiency, and visual consistency. In real-time rendering scenarios in particular, limited hardware resources make balancing fidelity and performance a long-standing, unresolved core challenge. Existing rendering techniques based on discrete meshes suffer from two unavoidable core defects.

1. UV coordinate seams and texture discontinuities. A discrete mesh maps two-dimensional textures onto the three-dimensional surface through a UV parameterization, but UV segmentation of models with non-zero genus or complex topology necessarily introduces seams.
Seams not only cause visible breaks in material attributes such as color, normal, and roughness, but also cause serious problems when displacement mapping or tessellation is applied: the lack of geometric consistency between adjacent UV islands misaligns vertex positions, producing visible cracks or geometric tearing. The root cause is twofold: traditional parameterization algorithms cannot guarantee that the UV boundaries of adjacent mesh patches have consistent sampling topology along a shared 3D boundary, so the sampled displacement values differ; and normalized floating-point texture coordinates carry precision errors, so tiny computation errors during boundary sampling can lead texture interpolation to select different pixel neighborhoods, yielding inconsistent displacement values and destroying the watertightness of the surface.

2. Difficulty of accurately determining subdivision factors. Tessellation optimizes visual quality and computational performance by dynamically adjusting geometric detail, but accurately determining the subdivision factors (i.e., the number of sub-triangles into which each patch is split) has long been a technical difficulty.
Traditional methods rely on approximate estimates from empirical formulas (e.g., based on the linear distance and viewing-angle change between the object and the camera). Such methods neither fully account for the target object's actual pixel coverage on screen nor its dynamic change with viewpoint, and they ignore the actual geometric error in screen space, so subdivision control is poor: insufficient subdivision loses geometric detail and causes visual distortion, while excessive subdivision wastes hardware resources and lowers the rendering frame rate, so the balance between pixel-level precision and efficient computation cannot be achieved in real-time rendering. In the prior art, texture reconstruction and subdivision-factor selection lack a unified, efficient solution, so the watertightness damage caused by UV seams cannot be avoided and adaptive, precise subdivision control cannot be achieved, severely limiting the application of discrete meshes in high-quality real-time visualization. There is therefore a need for a rendering framework that adaptively senses screen-space error, solves the UV seam problem, and is compatible with general-purpose graphics hardware, breaking through the bottlenecks of the prior art.

Disclosure of Invention

To overcome the defects of existing discrete-mesh rendering techniques, the invention provides a high-efficiency, high-fidelity rendering method for large-scale CAE simulation data that achieves real-time, crack-free rendering of large-scale models without significantly increasing the complexity of the existing rendering pipeline, balances visual fidelity and computational efficiency, is highly compatible with the standard hardware tessellation pipeline, and can be widely applied in computer graphics, virtual reality, digital content creation, and related fields. A high-efficiency high-fidelity rendering method f