
EP-4116934-B1 - SYSTEMS AND METHODS FOR RENDERING A VIRTUAL ENVIRONMENT USING LIGHT PROBES

EP4116934B1

Inventors

  • MESTER, Yuping Zhang
  • COWLES, Jeremy Weston
  • TATARCHUK, Natalya

Dates

Publication Date
2026-05-13
Application Date
2022-07-07

Claims (15)

  1. A system comprising: one or more computer processors (710, 712, 714); one or more computer memories (730, 732, 734, 736); a set of instructions (716) incorporated into the one or more computer memories (730), the set of instructions configuring the one or more computer processors (710) to perform operations, the operations comprising: accessing (106) a noisy lighting data representation in a data structure associated with a light probe (304) in a set of light probes in an environment, the noisy lighting data representation including lighting data below a configurable amount of light information density and/or accuracy; providing (108) the noisy lighting data representation as an input to a neural network, the neural network trained to output an estimate of a denoised lighting data representation based on the input, the denoised lighting data representation including lighting data above a configurable amount of light information density and/or accuracy; and replacing (110) the noisy lighting data representation in the data structure with the estimated denoised lighting data representation.
  2. The system of claim 1, the operations further comprising one or more of the following: providing depth estimations between the light probe (304) and a nearest surface of an object (302) in the environment as an additional input to the trained neural network, and wherein the trained neural network is further trained to output the estimate of the denoised lighting data representation based on the additional input; and performing the training of the neural network to output the estimate, the performing of the training including providing a plurality of different noisy inputs for a high accuracy target, the high accuracy target based on a ground truth lighting data representation.
  3. The system of claim 2, wherein the noisy inputs are associated with a plurality of probe configurations, each probe configuration including a different number of probes or a different distribution of the probes; and/or wherein the noisy lighting data representation is projected into the data structure from an additional data structure for ease of computation during rendering of the light probe.
  4. The system of claim 3, wherein the data structure is a compute buffer.
  5. The system of claim 3, wherein the projecting of the noisy lighting data representation into the data structure from the additional data structure includes projecting the traced paths or validity and distance estimation values of a noisy coefficient onto a spherical harmonic space or a spherical gaussian space.
  6. A computer-readable storage medium or an electrical or electro-magnetic signal storing a set of instructions that, when executed by one or more computer processors, cause the one or more computer processors to perform operations, the operations comprising: accessing a noisy lighting data representation in a data structure associated with a light probe in a set of light probes in an environment, the noisy lighting data representation including lighting data below a configurable amount of light information density and/or accuracy; providing the noisy lighting data representation as an input to a neural network, the neural network trained to output an estimate of a denoised lighting data representation based on the input, the denoised lighting data representation including lighting data above a configurable amount of light information density and/or accuracy; and replacing the noisy lighting data representation in the data structure with the estimated denoised lighting data representation.
  7. The computer-readable storage medium or electrical or electro-magnetic signal of claim 6, the operations further comprising one or more of the following: providing depth estimations between the light probe and a nearest surface of an object in the environment as an additional input to the trained neural network, and wherein the trained neural network is further trained to output the estimate of the denoised lighting data representation based on the additional input; and performing the training of the neural network to output the estimate, the performing of the training including providing a plurality of different noisy inputs for a high accuracy target, the high accuracy target based on a ground truth lighting data representation.
  8. The computer-readable storage medium or electrical or electro-magnetic signal of claim 7, wherein the noisy inputs are associated with a plurality of probe configurations, each probe configuration including a different number of probes or a different distribution of the probes; and/or wherein the noisy lighting data representation is projected into the data structure from an additional data structure for ease of computation during rendering of the light probe.
  9. The computer-readable storage medium or electrical or electro-magnetic signal of claim 8, wherein the data structure is a compute buffer.
  10. The computer-readable storage medium or electrical or electro-magnetic signal of claim 8, wherein the projecting of the noisy lighting data representation into the data structure from the additional data structure includes projecting the traced paths or validity and distance estimation values of a noisy coefficient onto a spherical harmonic space or a spherical gaussian space.
  11. A method comprising: accessing a noisy lighting data representation in a data structure associated with a light probe in a set of light probes in an environment, the noisy lighting data representation including lighting data below a configurable amount of light information density and/or accuracy; providing the noisy lighting data representation as an input to a neural network, the neural network trained to output an estimate of a denoised lighting data representation based on the input, the denoised lighting data representation including lighting data above a configurable amount of light information density and/or accuracy; and replacing the noisy lighting data representation in the data structure with the estimated denoised lighting data representation.
  12. The method of claim 11, further comprising one or more of the following: providing depth estimations between the light probe and a nearest surface of an object in the environment as an additional input to the trained neural network, and wherein the trained neural network is further trained to output the estimate of the denoised lighting data representation based on the additional input; and performing the training of the neural network to output the estimate, the performing of the training including providing a plurality of different noisy inputs for a high accuracy target, the high accuracy target based on a ground truth lighting data representation.
  13. The method of claim 12, wherein the noisy inputs are associated with a plurality of probe configurations, each probe configuration including a different number of probes or a different distribution of the probes.
  14. The method of claim 11, wherein the noisy lighting data representation is projected into the data structure from an additional data structure for ease of computation during rendering of the light probe.
  15. The method of claim 14, wherein the data structure is a compute buffer.
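The claimed operations can be summarized as a probe-by-probe loop: read the noisy lighting representation from the compute buffer, feed it (optionally together with a depth estimate to the nearest surface, per claims 2, 7, and 12) to a trained denoising network, and write the estimate back in place. The sketch below is illustrative only: the names `denoise_probe_volume`, `identity_net`, and the 9-coefficient (second-order) spherical-harmonic layout per colour channel are assumptions for the example, not details taken from the patent.

```python
import numpy as np

SH_COEFFS = 9   # assumed 2nd-order spherical-harmonic basis
CHANNELS = 3    # RGB

def denoise_probe_volume(probe_buffer, depth_estimates, denoise_net):
    """Replace each probe's noisy SH coefficients in the buffer with
    the network's denoised estimate (claims 1, 6, 11). Depth to the
    nearest surface is passed as an additional input (claims 2, 7, 12)."""
    for i, noisy_sh in enumerate(probe_buffer):
        features = np.concatenate([noisy_sh.ravel(), depth_estimates[i]])
        denoised = denoise_net(features).reshape(SH_COEFFS, CHANNELS)
        probe_buffer[i] = denoised  # in-place replacement in the compute buffer
    return probe_buffer

# Stand-in for a trained network: returns the SH portion unchanged.
def identity_net(features):
    return features[:SH_COEFFS * CHANNELS]

probes = np.random.rand(4, SH_COEFFS, CHANNELS)  # 4 probes of noisy data
depths = np.random.rand(4, 1)                    # one depth estimate per probe
out = denoise_probe_volume(probes.copy(), depths, identity_net)
```

In a real renderer the per-probe loop would run batched on the GPU, and `denoise_net` would be the trained model of claim 2, fitted against ground-truth targets from many noisy probe configurations.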

Description

TECHNICAL FIELD

The subject matter disclosed herein generally relates to the technical field of computer graphics systems, and in one specific example, to computer graphics systems and methods for rendering graphics using neural networks and light probes.

BACKGROUND OF THE INVENTION

Placing light probes in a virtual scene allows for the capture and use of light information passing through the probes within an empty space in the scene. At a subsequent time, the captured light information may be used by a rendering pipeline to improve a rendering of the scene. For example, based on the scene including a moving object, the captured light information stored in the light probes may be used to determine an approximation of light bouncing around the scene based on a position of the moving object. However, while light probes are beneficial for global illumination, existing methods are expensive; for example, they may require the pre-computation and storage of millions of light ray paths to reach a converged result. In "Sensor-realistic Synthetic Data Engine for Multiframe High Dynamic Range Photography", Hu Jinhan et al. describe a method for generating synthetic images for training neural networks that are used to enhance images captured by mobile devices.

SUMMARY

The invention is a system, a method, and a computer-readable storage medium or electrical or electro-magnetic signal as defined in the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

Features and advantages of example embodiments of the present disclosure will become apparent from the following detailed description, taken in combination with the appended drawings, in which:

Fig. 1 is a flowchart of a method for creating denoised coefficients in a set of light probes, in accordance with one embodiment;
Fig. 2 is a flowchart of a method for standardizing a spherical harmonic space, in accordance with one embodiment;
Fig. 3 is an illustration of an object in a scene surrounded by a set of light probes, in accordance with one embodiment;
Fig. 4 is an illustration of an example image rendered from noisy input lighting data, in accordance with one embodiment;
Fig. 5 is an illustration of an example image rendered from denoised output lighting data, in accordance with one embodiment;
Fig. 6 is a block diagram illustrating an example software architecture, which may be used in conjunction with various hardware architectures described herein; and
Fig. 7 is a block diagram illustrating components of a machine, according to some example embodiments, configured to read instructions from a machine-readable medium (e.g., a machine-readable storage medium) and perform any one or more of the methodologies discussed herein.

DETAILED DESCRIPTION

The description that follows describes example systems, methods, techniques, instruction sequences, and computing machine program products that comprise illustrative embodiments of the disclosure, individually or in combination. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide an understanding of various embodiments of the inventive subject matter. It will be evident, however, to those skilled in the art, that various embodiments of the inventive subject matter may be practiced without these specific details.

The term 'content' used throughout the description herein should be understood to include all forms of media content items, including images, videos, audio, text, 3D models (e.g., including textures, materials, meshes, and more), animations, vector graphics, and the like. The term 'game' used throughout the description herein should be understood to include video games and applications that execute and present video games on a device, and applications that execute and present simulations on a device.
The term 'game' should also be understood to include programming code (either source code or executable binary code) which is used to create and execute the game on a device. The term 'environment' used throughout the description herein should be understood to include 2D digital environments (e.g., 2D video game environments, 2D simulation environments, 2D content creation environments, and the like), 3D digital environments (e.g., 3D game environments, 3D simulation environments, 3D content creation environments, virtual reality environments, and the like), and augmented reality environments that include both a digital (e.g., virtual) component and a real-world component. The term 'digital object' used throughout the description herein should be understood to include any object of digital nature, digital structure, or digital element within an environment. A digital object can represent (e.g., in a corresponding data structure) almost anything within the environment, including 3D models (e.g., characters, weapons, scene elements (e.g., buildings, trees, cars, treasures, and the like)) with 3D model textures, backgrounds (e.g., terrain, sky, and the like),