
CN-122023636-A - High-quality soft shadow rendering method


Abstract

The invention relates to the technical field of real-time rendering in computer graphics and discloses a high-quality soft shadow rendering method. The method first obtains the world-space position and normal of a fragment and applies a small geometric shift along the normal to eliminate self-shadowing. It then transforms the normal into the light source's view space to provide a directional reference and computes a reference sampling coordinate. A set of random sampling offset vectors is generated, and each offset vector is adaptively corrected according to the normal direction so that all samples point toward the outside of the object. Finally, the corrected offsets are combined with the reference coordinate into the final sampling points, and a smooth soft shadow value is obtained through multiple samples and averaging. By introducing the scene's geometric normal information into the sampling process and dynamically correcting the offset direction of each sampling point accordingly, invalid samples pointing toward the back of the object are essentially avoided.
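The adaptive direction correction summarized above (flipping any sampling offset whose dot product with the shadow-map-space normal is negative) can be sketched as follows. This is a minimal illustration in Python rather than shader code, and the function names are my own; `reflect` mirrors the GLSL built-in of the same name.

```python
def reflect(v, n):
    """Reflect a 2D vector v about the plane with unit normal n: v - 2*(v.n)*n
    (the same formula as the GLSL reflect() built-in)."""
    d = v[0] * n[0] + v[1] * n[1]
    return (v[0] - 2.0 * d * n[0], v[1] - 2.0 * d * n[1])

def correct_offset(offset, normal_2d):
    """Adaptive direction correction: if the offset points 'into' the surface
    (negative dot product with the 2D device-space normal), flip it to the
    normal's forward side; otherwise keep it unchanged."""
    d = offset[0] * normal_2d[0] + offset[1] * normal_2d[1]
    if d < 0.0:
        return reflect(offset, normal_2d)
    return offset
```

With the normal pointing along +Y, an offset of `(0.5, -0.5)` is flipped to `(0.5, 0.5)`, while `(0.3, 0.4)` already points outward and passes through unchanged.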

Inventors

  • YAN CHEN
  • Ning Huanxiang

Assignees

  • 上海迅图数码科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-01-30

Claims (10)

  1. A high-quality soft shadow rendering method, comprising the steps of: step one, obtaining basic geometric information, namely obtaining the accurate world-space position and orientation of the fragment to be shaded; step two, performing a geometric micro-shift, namely offsetting the acquired world-space position slightly along its normal direction; step three, converting the normal into shadow-map space, namely transforming the world-space normal into the light source's view space; step four, calculating the reference shadow-map sampling coordinate, namely converting the micro-shifted fragment position into texture-coordinate space under the light source's view to serve as the center point of soft shadow sampling; step five, generating random sampling offsets, namely generating a group of uniformly distributed, randomized two-dimensional offset vectors within the unit disc; step six, performing adaptive correction of the offset directions, namely screening and correcting all sampling offset vectors according to the device-space normal direction; step seven, synthesizing the final sampling texture coordinates, namely superposing the corrected offsets onto the reference texture coordinate to form the set of coordinates actually used to sample the shadow map; and step eight, calculating the averaged soft shadow value, namely obtaining the final sampling coordinates from the corrected sampling offset vectors, computing a smooth soft shadow value through multiple samples, and finally compositing the pixel color with the resulting soft shadow.
  2. The method of claim 1, wherein obtaining the world-space position and normal in step one means obtaining, for each pixel or fragment in the rendering pipeline, its three-dimensional space coordinates and the direction vector perpendicular to the object surface at that point, by vertex-shader interpolation or screen-space reconstruction techniques.
  3. The method of claim 1, wherein the micro-shift along the normal in step two moves the world-space position P along its normalized world-space normal N by a small amount δ to obtain a new position P′, with the calculation formula: P′ = P + δ·N; in the formula, δ represents the offset magnitude, a small distance in world units.
  4. The method of claim 1, wherein transforming the world-space normal into the device-space normal direction in step three is performed in two sub-steps: S2.1, first, the world-space normal N is transformed by the linear part of the shadow-map view-projection matrix, i.e. its 3×3 block M₃ₓ₃, to obtain the direction n in shadow-map device space, with the calculation formula: n = M₃ₓ₃·N, preparing for the subsequent normalization; S2.2, the XY components of the transformed n are then normalized to obtain the device-space normal direction n_uv = normalize(n.xy), aligning the normal with the two-dimensional sampling plane of the shadow map.
  5. The method of claim 1, wherein step four calculates the reference shadow texture coordinates as follows: S3.1, first, the micro-shifted world-space position P′ obtained in step two is transformed into light-source view space through the shadow-map view matrix V, namely: P_view = V·P′; in the formula, P_view represents the light-source view-space position obtained after the transformation and V represents the shadow-map view matrix; S3.2, this is then converted into light-source NDC through the projection matrix, namely: P_ndc = Proj·P_view; in the formula, P_ndc represents the transformed light-source normalized device coordinates, with each component in the range [-1, 1], an intermediate carrier connecting three-dimensional space with the two-dimensional texture coordinates, and Proj is the shadow-map projection matrix, which projects the three-dimensional light-source view-space coordinates onto a two-dimensional plane; S3.3, finally, the NDC coordinates are converted into texture coordinates, with the calculation formulas: uv = P_ndc.xy × 0.5 + 0.5; d = P_ndc.z × 0.5 + 0.5; in the formulas, P_ndc.xy represents the two-dimensional component of the light-source NDC, mapped by a linear transformation from the [-1, 1] range to the [0, 1] range of texture coordinates, and P_ndc.z represents the depth component of the light-source NDC, giving the depth d used for comparison in subsequent sampling.
  6. The method of claim 1, wherein step five generates a Poisson-distributed sample set with random rotation: S4.1, N two-dimensional offset vectors o_i are pre-computed or generated in real time within the unit circle according to the Poisson-disc rule, with the calculation formula: o_i = (r·cos θ, r·sin θ); in the formula, r ∈ [0, 1] represents the random radius and θ ∈ [0, 2π) represents a uniform angle; S4.2, the sequence has low-discrepancy characteristics within the unit circle, and a random rotation angle φ ∈ [0, 2π) is generated independently for each pixel to construct a 2×2 orthogonal rotation matrix R(φ); S4.3, the matrix is applied to each offset vector, realizing an arbitrary deflection within the two-dimensional plane, with the mathematical expression: o′_i = R(φ)·o_i; in the formula, φ represents the random rotation angle and o′_i represents the Poisson offset vector after random rotation.
  7. The method of claim 1, wherein in step six offsets pointing the wrong way are flipped based on the normal direction: for each randomly rotated offset vector o′_i, its dot product with the device-space normal direction n_uv is calculated, namely: d_i = o′_i · n_uv; when the dot product d_i is below 0, indicating that the offset direction points approximately toward the inside of the object, the offset is flipped to the normal's forward side using the reflection equation: o″_i = reflect(o′_i, n_uv); when the dot product is greater than 0, the original offset is kept unchanged: o″_i = o′_i; and when d_i = 0, likewise o″_i = o′_i.
  8. The method of claim 1, wherein the superposition of offsets in step seven generates a set of sampling points by multiplying each corrected offset vector o″_i by a scaling factor s controlling the sampling radius and then adding it to the reference texture coordinate uv, obtaining the final set of sampled texture coordinates uv_i, namely: uv_i = uv + s·o″_i; in the formula, i represents the index of the sampling point and uv represents the reference texture coordinate of the current pixel in the shadow map.
  9. The method of claim 1, wherein step eight performs the multi-sampling and mean-value calculation: S6.1, for each final sampling coordinate uv_i, the depth value z_i at the corresponding position is read from the shadow map and compared with the depth d of the current fragment, a very small depth offset being added, to judge whether the sampling point is in shadow, outputting a binary result v_i; S6.2, after the binary judgment of all sampling points is finished, the mean of all results is computed to realize the soft-shadow smoothing effect: the binary results of the n sampling points are summed and divided by the total number of sampling points n to obtain a continuous shadow coefficient S between [0, 1], which is the final smooth soft shadow value, with the mathematical expression: S = (1/n)·Σᵢ vᵢ; in the formula, n represents the total number of sampling points; the closer the computed S is to 1, the more fully the fragment is exposed to light and the lighter the shadow, and the closer it is to 0, the more severely the fragment is occluded and the darker the shadow.
  10. The method of claim 9, wherein the binary result v_i output in step S6.1 has the mathematical expression: v_i = 1 if d ≤ z_i + ε, otherwise v_i = 0; in the formula, z_i represents the shadow-map depth value corresponding to the sampling point and ε represents the very small depth offset.
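The sample-generation steps in claim 6 can be sketched in Python. The claim's literal formula (a random radius r ∈ [0, 1] and a uniform angle) is reproduced here; note that a genuine Poisson-disc sequence would additionally enforce a minimum spacing between points, which this simplified stand-in omits. All names are illustrative.

```python
import math
import random

def disc_offsets(n, rng):
    """Claim 6, S4.1: n two-dimensional offset vectors inside the unit circle,
    each built from a random radius r in [0, 1] and a uniform angle theta in
    [0, 2*pi), i.e. o_i = (r*cos(theta), r*sin(theta)).  (The minimum-spacing
    rejection test of a true Poisson disc is omitted in this sketch.)"""
    pts = []
    for _ in range(n):
        r = rng.random()
        theta = rng.random() * 2.0 * math.pi
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def rotate(offsets, phi):
    """Claim 6, S4.2/S4.3: apply the per-pixel 2x2 orthogonal rotation R(phi)
    to every offset vector, o'_i = R(phi) * o_i."""
    c, s = math.cos(phi), math.sin(phi)
    return [(c * x - s * y, s * x + c * y) for (x, y) in offsets]
```

Because R(φ) is orthogonal, the rotated offsets keep their lengths, so the sample set stays inside the unit disc while its orientation is decorrelated between neighboring pixels.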

Description

High-quality soft shadow rendering method

Technical Field

The invention relates to the technical field of real-time rendering in computer graphics, in particular to a high-quality soft shadow rendering method.

Background

In computer graphics, shadows are key elements that enhance the realism of a scene and convey the relative spatial positions of objects. Shadow-map algorithms have become the most widely used technique for generating shadows due to their versatility and efficiency. To simulate the soft shadows cast by an area light source, techniques such as percentage-closer filtering (PCF) or variance shadow maps (VSM) are often used to post-process the shadow map. However, one of the core practical challenges of these techniques is the setting of the "shadow bias" parameter, which is used to prevent "shadow acne" (self-shadowing errors) caused by limited depth precision. The conventional practice is to use a fixed or global bias, but this simple strategy breaks down under complex illumination and geometry. It is particularly prone to banding artifacts: when the light source strikes the object surface at a glancing angle, a fixed bias is insufficient to overcome the discontinuities in the depth comparison, producing unnatural, band-like jaggies and noise at the shadow edges that severely damage visual quality. To avoid these banding artifacts at glancing angles, developers typically increase the global bias; however, this over-offsets the shadow when the light source illuminates the surface head-on, causing an unrealistic separation between the shadow and its caster (the "Peter Pan" phenomenon), where the object appears to float in the air.
The workflow is also cumbersome: artists and programmers must manually adjust or pre-compute many different bias parameters for the different objects and illumination angles in a scene. The process is extremely tedious and unintuitive, and cannot guarantee optimal results under all view angles and lighting conditions, seriously affecting development efficiency. Accordingly, there is a strong need in the art for a shadow rendering scheme that adapts to changes in illumination angle and automatically avoids the above-mentioned visual flaws.

Disclosure of Invention

(I) Technical problems solved

Aiming at the defects of the prior art, the invention provides a high-quality soft shadow rendering method with strong adaptability and high rendering quality that requires no manual parameter tuning. It solves both the banding and shadow-detachment artifacts that the traditional fixed-bias method inevitably produces under complex illumination angles and the tedious parameter-tuning workflow those artifacts cause.
(II) Technical scheme

To achieve the above purpose, the invention provides a high-quality soft shadow rendering method comprising the following steps: step one, obtaining basic geometric information, namely obtaining the accurate world-space position and orientation of the fragment to be shaded; step two, performing a geometric micro-shift, namely offsetting the acquired world-space position slightly along its normal direction; step three, converting the normal into shadow-map space, namely transforming the world-space normal into the light source's view space; step four, calculating the reference shadow-map sampling coordinate, namely converting the micro-shifted fragment position into texture-coordinate space under the light source's view to serve as the center point of soft shadow sampling; step five, generating random sampling offsets, namely generating a group of uniformly distributed, randomized two-dimensional offset vectors within the unit disc; step six, performing adaptive correction of the offset directions, namely screening and correcting all sampling offset vectors according to the device-space normal direction; step seven, synthesizing the final sampling texture coordinates, namely superposing the corrected offsets onto the reference texture coordinate to form the set of coordinates actually used to sample the shadow map; and step eight, calculating the averaged soft shadow value, namely obtaining the final sampling coordinates from the corrected sampling offset vectors, computing a smooth soft shadow value through multiple samples, and finally compositing the pixel color with the resulting soft shadow.
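Steps seven and eight of the scheme (scaling the corrected offsets, superposing them onto the reference coordinate, performing a biased depth comparison per sample, and averaging) can be illustrated with a short Python sketch. The `shadow_depth` callback stands in for the shadow-map lookup, and all names and parameters here are assumptions for illustration, not the patent's own notation.

```python
def soft_shadow(shadow_depth, base_uv, frag_depth, offsets, radius, bias):
    """Steps seven and eight: for each corrected offset o''_i, sample the
    shadow map at uv_i = uv + radius * o''_i, compare the fragment depth
    against the stored depth plus a small bias (v_i = 1 when lit, 0 when
    shadowed), and return the mean shadow coefficient S in [0, 1]."""
    lit = 0
    for (ox, oy) in offsets:
        u = base_uv[0] + radius * ox
        v = base_uv[1] + radius * oy
        # v_i = 1 when the fragment is not deeper than the stored depth + bias
        if frag_depth <= shadow_depth(u, v) + bias:
            lit += 1
    return lit / len(offsets)
```

With a lookup that always returns a deeper stored depth the coefficient is 1.0 (fully lit); with one that always returns a nearer depth it is 0.0 (fully shadowed); partial occlusion across the sample set yields the intermediate values that produce the soft penumbra.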
Preferably, in step one the world-space position and normal are acquired, namely, the three-dimensional space coordinates of each pixel or fragment in the rendering pipeline are acquired through vertex shader interpolation or screen space reconstruction technol