CN-122023620-A - Real-time fluid rendering method based on neural network model

CN 122023620 A

Abstract

The invention provides a real-time fluid rendering method based on a neural network model, relating to the intersection of computer graphics and deep learning. The method achieves accurate modeling and efficient rendering of fluid physical properties and motion states by constructing a lightweight fluid feature extraction network, a dynamic fluid state prediction network, and a real-time rendering optimization network. First, raw fluid parameters are normalized and feature-enhanced by a multidimensional fluid data preprocessing module; then the density field, velocity field, and surface tension distribution of the fluid are rapidly predicted by a pre-trained neural network model; finally, by combining an adaptive illumination mapping algorithm with a real-time rendering pipeline, the fluid rendering frame rate is raised above 60 fps while preserving the realism of the rendered result, meeting the requirements of real-time interactive scenes such as virtual reality, game engines, and film and television special effects. The invention overcomes the technical bottlenecks of high physical-simulation complexity and poor real-time performance in traditional fluid rendering methods, while improving the realism of visual effects such as fluid surface detail and light-and-shadow reflection.

Inventors

  • WANG YIDING
  • LI YUANYI
  • YUAN DEKUI
  • SUN JIAN

Assignees

  • Tianjin University (天津大学)

Dates

Publication Date
2026-05-12
Application Date
2025-12-19

Claims (5)

  1. A real-time fluid rendering method based on a neural network model, comprising the steps of: (1) fluid data preprocessing: collecting physical parameters of the fluid, including density, viscosity and surface tension coefficient; environment parameters, including illumination intensity, refractive index and ambient temperature; and initial motion state parameters, including velocity vectors and position coordinates; constructing a multi-dimensional fluid data set, and performing normalization, outlier rejection and feature dimension expansion on the data set to obtain standardized training data; (2) neural network model construction: constructing a composite neural network model consisting of a fluid feature extraction network, a fluid state prediction network and a rendering optimization network, wherein the fluid feature extraction network adopts a lightweight convolutional neural network (CNN) architecture comprising 3 convolutional layers, 2 pooling layers and 1 batch normalization layer, and is used for extracting key feature vectors from the fluid data; (3) dividing the standardized training data into a training set and a validation set at a ratio of 7:3, iteratively training the composite neural network model with an adaptive moment estimation (Adam) optimizer using a joint loss function combining the fluid physical state prediction error and the rendering visual similarity, avoiding overfitting through an early-stopping mechanism, and applying model quantization and pruning to the trained model to reduce the computational cost of inference; (4) in the actual application scene, collecting the current state parameters and environment parameters of the fluid in real time and feeding them into the lightweight-optimized neural network model; the model rapidly outputs the density field, velocity field and surface rendering parameters of the fluid, which are passed into a real-time rendering pipeline; combined with the Phong illumination model and an adaptive antialiasing algorithm, real-time rendering of the fluid image is completed, repeated computation is reduced through an inter-frame buffer multiplexing technique, and the rendering frame rate is kept stable above 60 fps.
  2. The real-time fluid rendering method based on a neural network model according to claim 1, wherein the feature dimension expansion in step (1) specifically comprises: performing gradient calculation on the velocity vector of the fluid to obtain acceleration features; performing spatio-temporal filtering on the density parameter to obtain density change rate features; converting the ambient illumination parameters into HSL color space features; and fusing these with the original parameters to form a multi-dimensional feature matrix.
  3. The real-time fluid rendering method based on a neural network model according to claim 1, wherein the joint loss function in step (3) is obtained by weighted summation of a physical loss term and a visual loss term; the physical loss term measures the difference between the predicted and real fluid physical parameters using the mean square error (MSE), the visual loss term uses a weighted combination of the structural similarity index (SSIM) and the peak signal-to-noise ratio (PSNR), and the calculation formula is Loss = α×MSE + β×(1−SSIM) + γ×(1/PSNR), where α, β, γ are weight coefficients and α+β+γ=1.
  4. The real-time fluid rendering method based on a neural network model according to claim 1, wherein in step (3) the model quantization specifically quantizes the model weight parameters from 32-bit floating point to 16-bit half-precision floating point; model pruning adopts a structured pruning strategy, removing connections and convolution kernels whose weight absolute values are smaller than a threshold, the pruning threshold being determined by performance on the validation set.
  5. The real-time fluid rendering method based on a neural network model according to claim 1, wherein the inter-frame buffer multiplexing technique in step (4) specifically comprises: buffering the fluid surface texture and shadow parameters output by the previous frame's rendering; recomputing only the changed regions during the current frame's rendering while directly reusing the buffered data for unchanged regions; and applying temporal supersampling to improve image detail.
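The feature dimension expansion of claim 2 can be illustrated with a minimal sketch. The helper below is hypothetical (names are not from the patent): it derives acceleration features by first-order finite differences of sampled velocity vectors, and maps a normalized RGB illumination parameter into HSL features via the standard-library `colorsys` module (note that `colorsys` uses HLS ordering, so the components are reordered).

```python
import colorsys

def expand_features(velocities, dt, light_rgb):
    """Illustrative feature-dimension expansion: finite-difference
    acceleration from sampled velocity vectors, plus HSL conversion
    of an RGB illumination parameter. Hypothetical helper, not the
    patent's implementation."""
    # Acceleration features: first-order finite difference of velocity.
    accelerations = []
    for v0, v1 in zip(velocities, velocities[1:]):
        accelerations.append(tuple((b - a) / dt for a, b in zip(v0, v1)))
    # Ambient illumination converted to HSL features.
    # colorsys returns (hue, lightness, saturation); reorder to H, S, L.
    h, l, s = colorsys.rgb_to_hls(*light_rgb)
    return accelerations, (h, s, l)

acc, hsl = expand_features(
    velocities=[(0.0, 1.0), (0.5, 1.5), (1.5, 1.5)],  # 2-D velocity samples
    dt=0.1,
    light_rgb=(0.2, 0.4, 0.6),  # normalized RGB illumination
)
print(acc)  # [(5.0, 5.0), (10.0, 0.0)]
print(hsl)
```

In a full pipeline these derived features would be concatenated with the original parameters into the multi-dimensional feature matrix described in claim 2.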
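The joint loss of claim 3 can be sketched directly from its formula, Loss = α×MSE + β×(1−SSIM) + γ×(1/PSNR) with α+β+γ=1. The version below is a simplified stand-in: SSIM is computed globally over the whole signal rather than averaged over local windows as in the standard definition, and all inputs are plain 1-D lists.

```python
import math

def joint_loss(pred, target, alpha=0.5, beta=0.3, gamma=0.2, data_range=1.0):
    """Sketch of Loss = a*MSE + b*(1-SSIM) + c*(1/PSNR) from claim 3,
    using a global (single-window) SSIM approximation."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    n = len(pred)
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / n
    # PSNR in dB; for a perfect prediction mse == 0 and 1/PSNR -> 0.
    psnr = 10.0 * math.log10(data_range ** 2 / mse) if mse > 0 else float("inf")
    # Global SSIM with the usual stabilizing constants c1, c2.
    mu_p = sum(pred) / n
    mu_t = sum(target) / n
    var_p = sum((p - mu_p) ** 2 for p in pred) / n
    var_t = sum((t - mu_t) ** 2 for t in target) / n
    cov = sum((p - mu_p) * (t - mu_t) for p, t in zip(pred, target)) / n
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2)
    )
    return alpha * mse + beta * (1 - ssim) + gamma * (1.0 / psnr)

loss = joint_loss([0.1, 0.5, 0.9, 0.4], [0.12, 0.48, 0.88, 0.41])
print(loss)  # small positive value for a near-perfect prediction
```

The weights α, β, γ trade physical fidelity against visual fidelity; the patent does not specify their values, only that they sum to 1.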
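The quantization and pruning of claim 4 can be mimicked with the standard library alone. The `struct` format character `'e'` packs a float as IEEE 754 half precision, so a round trip through it shows the precision loss of FP32→FP16 quantization; the pruning helper zeroes individual weights below a threshold, a simplified stand-in for the patent's structured pruning of whole connections and convolution kernels.

```python
import struct

def quantize_fp16(w):
    """Round-trip a float through IEEE 754 half precision using the
    struct 'e' format. Real frameworks would keep the values stored
    in FP16; here we only observe the precision loss."""
    return struct.unpack("e", struct.pack("e", w))[0]

def prune(weights, threshold):
    """Magnitude pruning: zero out weights whose absolute value is
    below the threshold. The patent removes whole connections and
    kernels (structured pruning); element-wise zeroing is a
    simplified illustration."""
    return [0.0 if abs(w) < threshold else w for w in weights]

weights = [0.7001, -0.003, 0.12, -0.0004, 0.55]
quantized = [quantize_fp16(w) for w in weights]
pruned = prune(quantized, threshold=0.01)  # threshold would come from
print(pruned)                              # validation-set performance
```

Per claim 4, the pruning threshold is not fixed in advance but chosen by evaluating candidate thresholds on the validation set.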
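The inter-frame buffer multiplexing of claim 5 amounts to a per-region cache keyed on the region's inputs: unchanged tiles reuse the previous frame's output, and only changed tiles are re-rendered. The sketch below is hypothetical (`render_tile` is a placeholder, and temporal supersampling is omitted).

```python
class FrameCache:
    """Minimal sketch of inter-frame buffer multiplexing: tile results
    from the previous frame are reused whenever that tile's input
    parameters are unchanged; only changed tiles are re-rendered."""

    def __init__(self, render_tile):
        self.render_tile = render_tile  # hypothetical per-tile renderer
        self.params = {}   # tile id -> last seen input parameters
        self.buffer = {}   # tile id -> cached render output

    def frame(self, tile_params):
        recomputed = 0
        for tile_id, params in tile_params.items():
            if self.params.get(tile_id) != params:
                self.buffer[tile_id] = self.render_tile(tile_id, params)
                self.params[tile_id] = params
                recomputed += 1
        return dict(self.buffer), recomputed

cache = FrameCache(render_tile=lambda tid, p: p * 2)  # stand-in "renderer"
out1, n1 = cache.frame({"a": 1, "b": 2, "c": 3})
out2, n2 = cache.frame({"a": 1, "b": 5, "c": 3})  # only tile "b" changed
print(n1, n2)  # 3 1
```

In the patent's setting the cached data are the fluid surface texture and shadow parameters, and the "changed region" test would compare the fluid state in each screen region rather than a dictionary of parameters.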

Description

Real-time fluid rendering method based on neural network model

Technical Field

The invention relates to the technical fields of computer graphics, deep learning and real-time rendering, and in particular to a real-time fluid rendering method based on a neural network model.

Background

Fluid rendering is one of the core technologies in computer graphics; its goal is to simulate and render fluids (e.g., liquids, gases, smoke) with realistic physical properties and visual effects in a computer. Traditional fluid rendering methods are mainly based on solving physical equations, such as the Eulerian and Lagrangian methods, which simulate the motion state of the fluid by numerically solving the Navier-Stokes equations and then generate the fluid image with a rendering algorithm. However, these methods have obvious technical defects. On one hand, the numerical solution of the physical equations has extremely high computational complexity, consumes a large amount of computing resources, and yields a low rendering frame rate, making it difficult to meet the frame-rate requirements of real-time interactive scenes such as games and VR devices (generally above 60 fps). On the other hand, traditional methods simulate visual effects such as fluid surface detail and light-and-shadow reflection inaccurately, easily causing surface blurring, shadow distortion and similar problems that degrade the realism of the rendered result. In recent years, with the development of deep learning, some research has attempted to apply neural network models to fluid rendering, learning the physical laws and rendering characteristics of the fluid through training so as to replace the traditional numerical solution process and improve rendering efficiency.

However, existing neural-network-based fluid rendering methods still have shortcomings: the model structure is complex and the inference cost is high, making real-time rendering difficult; the prediction accuracy of fluid states is insufficient, with physical-state distortion easily occurring in complex motion scenes such as fluid collision, splitting and merging; and the realism of the rendered result remains to be improved, as the handling of details such as fluid surface texture and light-shadow interaction is imperfect. Therefore, there is a need for a neural network fluid rendering method that combines real-time performance with realism and resolves these bottlenecks in the prior art.

Disclosure of Invention

The invention aims to overcome the defects of the prior art. By constructing a lightweight, high-precision composite neural network model and combining data preprocessing with rendering optimization techniques, rendering efficiency is significantly improved while the realism of fluid rendering is preserved, meeting the application requirements of real-time interactive scenes.

The technical scheme adopted by the invention is a real-time fluid rendering method based on a neural network model, comprising the following steps: (1) fluid data preprocessing: collecting physical parameters of the fluid, including density, viscosity and surface tension coefficient; environment parameters, including illumination intensity, refractive index and ambient temperature; and initial motion state parameters, including velocity vectors and position coordinates; constructing a multi-dimensional fluid data set, and performing normalization, outlier rejection and feature dimension expansion on the data set to obtain standardized training data; (2) neural network model construction: constructing a composite neural network model consisting of a fluid feature extraction network, a fluid state prediction network and a rendering optimization network, wherein the fluid feature extraction network adopts a lightweight convolutional neural network (CNN) architecture comprising 3 convolutional layers, 2 pooling layers and 1 batch normalization layer, and is used for extracting key feature vectors from the fluid data; (3) dividing the standardized training data into a training set and a validation set at a ratio of 7:3, iteratively training the composite neural network model with an adaptive moment estimation (Adam) optimizer using a joint loss function combining the fluid physical state prediction error and the rendering visual similarity, avoiding overfitting through an early-stopping mechanism, and applying model quantization and pruning to the trained model to reduce the computational cost of model inference.
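The training procedure of step (3) can be sketched in outline: a 7:3 train/validation split, then training with early stopping on the validation loss. The Adam optimizer and the network itself are omitted here; `val_losses` stands in for one validation-loss value per epoch, and all names are illustrative rather than the patent's implementation.

```python
import random

def train_val_split(data, ratio=0.7, seed=42):
    """7:3 split of the standardized training data, as in step (3)."""
    items = list(data)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * ratio)
    return items[:cut], items[cut:]

def train_with_early_stopping(val_losses, patience=3):
    """Early stopping: halt once the validation loss has failed to
    improve for `patience` consecutive epochs; return the best epoch
    index and its loss. val_losses stands in for a real training loop."""
    best, best_epoch, bad = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, bad = loss, epoch, 0
        else:
            bad += 1
            if bad >= patience:
                break
    return best_epoch, best

train, val = train_val_split(range(10))
print(len(train), len(val))  # 7 3
best_epoch, best_loss = train_with_early_stopping(
    [0.9, 0.6, 0.5, 0.52, 0.55, 0.56, 0.6]
)
print(best_epoch, best_loss)  # (2, 0.5): stops after 3 non-improving epochs
```

After training, the quantization and pruning described in step (3) and claim 4 would be applied to the weights of the best checkpoint before deployment.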