US-12626481-B2 - Light field rendering

US 12626481 B2

Abstract

A method for light field rendering includes: obtaining an n-dimensional mixture model, with n a natural number equal to or larger than 4, of a light field. The model is made of kernels wherein each kernel represents light information and is expressed by parameter values; mathematically reducing the n-dimensional mixture model into a 2-dimensional mixture model of an image given a certain point of view, wherein the 2-dimensional model is also made of kernels; rendering a view in a pixel domain from the 2-dimensional model made of kernels.

Inventors

  • Martijn COURTEAUX
  • Glenn Van Wallendael
  • Peter Lambert

Assignees

  • UNIVERSITEIT GENT
  • IMEC VZW

Dates

Publication Date
2026-05-12
Application Date
2021-05-05
Priority Date
2020-05-06

Claims (15)

  1. A method for light field rendering, the method comprising: obtaining an n-dimensional mixture model, with n a natural number equal to or larger than 4, of a light field, wherein the model is made of kernels, wherein each kernel represents light information and is expressed by parameter values; mathematically reducing the n-dimensional mixture model into a 2-dimensional mixture model of an image, the 2-dimensional mixture model comprising a mapping of the n-dimensional mixture model onto a 2-dimensional surface associated with a selected viewpoint, wherein the 2-dimensional model is also made of kernels; and rendering a view in a pixel domain from the 2-dimensional model made of kernels.
  2. The method according to claim 1, wherein rendering the view in the pixel domain is done pixel-wise.
  3. The method according to claim 1, wherein rendering the view in the pixel domain is done kernel-wise.
  4. The method according to claim 1, wherein kernel parameters describe orientation, and/or location, and/or depth, and/or color information, and/or size.
  5. The method according to claim 1, the method comprising editing the light field by adjusting one or more kernel parameters of at least one kernel.
  6. The method according to claim 1, the method comprising selecting kernels based on their kernel parameters and/or derivatives of the kernel parameters.
  7. The method according to claim 6, wherein editing the light field is done on the selected kernels.
  8. The method according to claim 6, wherein rendering a view in the pixel domain from the 2-dimensional model made of kernels is done as a function of the selected kernels.
  9. A non-transitory computer-readable memory comprising a computer program product which, when implemented on a processing unit, performs the method of claim 1.
  10. A device for light field rendering, the device comprising: an interface for obtaining an n-dimensional mixture model, with n a natural number equal to or larger than 4, of a light field, wherein the model is made of kernels, wherein each kernel represents light information and is expressed by parameter values; a processing unit configured for mathematically reducing the n-dimensional model into a 2-dimensional mixture model, the 2-dimensional mixture model comprising a mapping of the n-dimensional mixture model onto a 2-dimensional surface associated with a selected viewpoint, wherein the 2-dimensional model is also made of kernels; and a processing unit configured for rendering a view in the pixel domain from the 2-dimensional model made of kernels.
  11. The device according to claim 10, wherein the processing unit for rendering is configured for pixel-wise rendering of the view in the pixel domain.
  12. The device according to claim 10, wherein the processing unit for rendering is configured for kernel-wise rendering of the view in the pixel domain.
  13. The device according to claim 10, wherein a processing unit is configured for editing the light field by adjusting one or more kernel parameters of at least one kernel.
  14. The device according to claim 13, wherein a processing unit is configured for selecting kernels based on their kernel parameters and/or derivatives of the kernel parameters.
  15. The device according to claim 14, wherein the processing unit which is configured for editing the light field is configured for editing the light field on the selected kernels.

Description

FIELD OF THE INVENTION

The invention relates to the field of light field rendering. More specifically, it relates to light field rendering based on kernel-based light field models.

BACKGROUND OF THE INVENTION

A light field (LF) is a mathematical concept in the form of a vector field that represents the color of incident light rays. It thus characterizes the light information in a region. Light field rendering is the process of rendering views from an arbitrary vantage point based on camera-captured data. The rendering consists of positioning a virtual camera and virtually capturing these light rays as if it were taking a picture from that position, as shown in FIG. 1. Using the pipeline shown in FIG. 2, these views can be rendered with added functionality. Granular depth maps 22 can be constructed from the raw LF data 21 by estimation on pixel data 11 and can be used for granular depth-based filtering, e.g. removing background or changing pixel values based on their depth. Editing 12 of the raw LF data can be done on a view-per-view basis. Light field rendering 23 can be done using the depth map 22 and the raw LF data 21, so that a rendered view 24 is obtained. Virtual apertures can be simulated, combined with changing the virtual focal plane, in order to achieve depth-of-field effects at render time (A. Isaksen, L. McMillan, and S. J. Gortler, "Dynamically reparameterized light fields," in Proceedings of the 27th annual conference on Computer graphics and interactive techniques—SIGGRAPH '00, 2000, pp. 297-306). Working on discrete data poses several problems: hole filling and interpolation are necessary, all relevant views must be kept in memory, and ghosting artefacts arise. In practice, light fields can be represented by an extremely dense set of 2-D images, e.g. subaperture images or views. Light field rendering then consists of merging subsets of pixels from several captured views. These pixels correspond to light rays under a certain angle.
Additionally, kernel-based methods have been proposed that create a continuous statistical model of the light field data, e.g. Steered Mixture-of-Experts (R. Verhack, T. Sikora, G. Van Wallendael, and P. Lambert, "Steered Mixture-of-Experts for Light Field Images and Video: Representation and Coding," IEEE Transactions on Multimedia, Volume 22, Issue 3, March 2020). Such models provide a continuous representation, which implicitly solves interpolation, hole filling and ghosting artefacts during rendering. The kernels are seen as multidimensional image atoms and as coherent bundles of light, typically corresponding to a part of an entity in a scene. The parameters of the model are the number of kernels and the parameters for each kernel. The kernels have a shape which spans along all coordinate dimensions in which the light field is represented, and describe the color information in this region of the coordinate domain. This color information per kernel takes the form of a function which maps the coordinate space onto the color space. This alleviates some of the problems of working with discrete sample data, e.g. interpolation between multiple views. Furthermore, continuous depth information is implicitly embedded in the model. Therefore, the light field rendering pipeline can depend solely on such a kernel-based light field model. The light field rendering step nevertheless remains relatively computationally heavy and memory demanding, as the light field rendering algorithms operate on pixel data and thus have to reconstruct the pixel data of the original views. Moreover, when working with the raw LF data, i.e. in the pixel domain, it must be known which pixels belong together. Furthermore, after an edit, the model needs to be rebuilt. Editing light fields is cumbersome, as results need to be consistent over all possible views. There is therefore a need for methods for light field rendering which are less computationally heavy and/or which require less memory.
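To make the kernel-based idea concrete, the following minimal sketch evaluates a 2-D steered-mixture model at one pixel coordinate: each kernel is a 2-D Gaussian whose normalized responsibility gates a per-kernel color expert. This is an illustrative simplification (constant color per kernel, whereas Steered Mixture-of-Experts typically uses a coordinate-dependent color function), not the patent's exact formulation; all function and variable names are hypothetical.

```python
import numpy as np

def render_pixel(x, weights, means, covs, colors):
    """Evaluate a 2-D kernel mixture at pixel coordinate x.

    weights : (K,)      mixing weights
    means   : (K, 2)    kernel centers in image coordinates
    covs    : (K, 2, 2) kernel covariance matrices (steering)
    colors  : (K, 3)    color expert per kernel (constant, for brevity)
    """
    K = len(weights)
    resp = np.empty(K)
    for k in range(K):
        d = x - means[k]
        inv = np.linalg.inv(covs[k])
        norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(covs[k])))
        # Unnormalized responsibility: weighted Gaussian density at x
        resp[k] = weights[k] * norm * np.exp(-0.5 * d @ inv @ d)
    resp /= resp.sum()      # soft gating over kernels
    return resp @ colors    # gated blend of the color experts
```

A full image is rendered by evaluating this per pixel (pixel-wise rendering); kernel-wise rendering would instead rasterize each kernel's footprint and accumulate.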
SUMMARY OF THE INVENTION

It is an object of embodiments of the present invention to provide a good method and device for light field rendering. The above objective is accomplished by a method and device according to the present invention. In a first aspect, embodiments of the present invention relate to a method for light field rendering. The method comprises: obtaining an n-dimensional mixture model of a light field, with n a natural number equal to or larger than 4, wherein the model is made of kernels, wherein each kernel represents light information and is expressed by parameter values; mathematically reducing the n-dimensional mixture model into a 2-dimensional mixture model of an image given a certain point of view, wherein the 2-dimensional model is also made of kernels; and rendering a view in a pixel domain from the 2-dimensional model made of kernels. It is an advantage of embodiments of the present invention that rendering is simplified by calculating a 2-dimensional model of an image given a certain point of view. T
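One plausible way to reduce a 4-D kernel to a 2-D kernel for a given viewpoint is standard Gaussian conditioning: fix the camera-plane coordinates (s, t) at the chosen viewpoint and condition the kernel's Gaussian on them, re-weighting by the marginal density at that viewpoint. The sketch below assumes a (s, t, u, v) dimension ordering and is an illustrative interpretation under that assumption, not the patent's exact reduction.

```python
import numpy as np

def reduce_kernel(w, mu, cov, view):
    """Condition one 4-D light-field kernel on a viewpoint (s0, t0),
    yielding a 2-D kernel over the image-plane coordinates (u, v).

    Assumed layout (hypothetical): dims 0-1 are the camera plane (s, t),
    dims 2-3 the image plane (u, v).
    """
    mu_st, mu_uv = mu[:2], mu[2:]
    S_st = cov[:2, :2]
    S_uv = cov[2:, 2:]
    S_uvst = cov[2:, :2]            # cross-covariance (u,v) vs (s,t)
    inv_st = np.linalg.inv(S_st)
    d = view - mu_st
    # Gaussian conditioning: 2-D mean and covariance given (s,t)=view
    mu2 = mu_uv + S_uvst @ inv_st @ d
    S2 = S_uv - S_uvst @ inv_st @ S_uvst.T
    # Re-weight by the kernel's marginal density at the viewpoint
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(S_st)))
    w2 = w * norm * np.exp(-0.5 * d @ inv_st @ d)
    return w2, mu2, S2
```

Applying this to every 4-D kernel produces the 2-dimensional mixture model from which the view can then be rendered in the pixel domain; kernels whose reduced weight is negligible can be skipped, which is where the computational saving arises.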