KR-20260067946-A - DEVICE AND RENDERING METHOD INCLUDING LIGHT SOURCE RESTORATION
Abstract
The present disclosure provides an electronic device comprising a camera, a display, a memory, and at least one processor including a processing circuit. The memory stores instructions that, when executed individually or collectively by the at least one processor, cause the electronic device to: acquire, using the camera, a plurality of two-dimensional images captured at a plurality of viewpoints of a first scene including a first light source; generate a color set including some of the plurality of colors included in the plurality of two-dimensional images; predict the structure and radiance of the 3D space of the first scene through volume rendering of the plurality of two-dimensional images; select a first color from the color set as a candidate light source color; predict, through the volume rendering, the weights of the color set and the intrinsic color of an object at each pixel of the 3D space; predict the scene components of the 3D space through physically based rendering of the plurality of two-dimensional images; identify the first color as the color of the first light source if the error between the pixel values of the first scene resulting from the physically based rendering and the plurality of two-dimensional images is within a first threshold; select another candidate light source color from the color set if the error is greater than the first threshold; and, when the physically based rendering is completed, generate an image of a first viewpoint in the 3D space.
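The candidate-selection loop described in the abstract can be sketched as follows. This is a minimal illustration only: `palette`, `render_error`, and `threshold` are hypothetical names standing in for the color set, the physically-based re-rendering-and-comparison step, and the first threshold; none of them are taken from the disclosure.

```python
def restore_light_source(palette, render_error, threshold):
    """Try each candidate light-source color from the color set until the
    physically based rendering error falls within the threshold.

    palette      -- iterable of candidate light-source colors (the color set)
    render_error -- callback that runs physically based rendering with the
                    candidate color and returns the pixel-value error against
                    the captured 2D images (stand-in for the PBR step)
    threshold    -- the first threshold on the rendering error
    """
    for candidate in palette:
        if render_error(candidate) <= threshold:
            # error within threshold: identified as the first light source's color
            return candidate
    return None  # no candidate met the threshold
```

In the disclosure the re-selection happens inside the rendering pipeline rather than as a plain Python loop, but the control flow is the same: render, compare against the captured images, and either accept the candidate or pick the next one.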
Inventors
- 이보현
- 구준서
- 김건희
- 김현수
- 정진서
- 최인호
Assignees
- 삼성전자주식회사
- 서울대학교산학협력단
Dates
- Publication Date: 2026-05-13
- Application Date: 2025-01-21
- Priority Date: 2024-11-06
Claims (20)
- An electronic device comprising: a camera; a display; a memory; and at least one processor including a processing circuit, wherein the memory stores instructions that, when executed individually or collectively by the at least one processor, cause the electronic device to: acquire, using the camera, a plurality of two-dimensional images captured at a plurality of viewpoints of a first scene including a first light source; generate a color set including some of the plurality of colors included in the plurality of two-dimensional images; predict the structure of the 3D space and the radiance of the first scene through volume rendering of the plurality of two-dimensional images; select a first color from the color set as a candidate light source color; predict, through the volume rendering, the weights of the color set and the intrinsic color of an object at each pixel of the 3D space; predict scene components of the 3D space through physically based rendering of the plurality of two-dimensional images; identify the first color as the color of the first light source if the error between the pixel values of the first scene resulting from the physically based rendering and the plurality of two-dimensional images is within a first threshold; re-select a candidate light source color from the color set if the error is greater than the first threshold; and generate an image of a first viewpoint in the 3D space when the physically based rendering is completed.
- The electronic device of claim 1, wherein the memory further stores instructions that, when executed individually or collectively by the at least one processor, cause the color set to include colors corresponding to the vertices of a three-dimensional polyhedron (convex hull) formed by connecting the outermost boundary points, in a color space composed of RGB values, among the points corresponding to the plurality of colors included in the plurality of two-dimensional images.
- The electronic device of claim 1, wherein the memory further stores instructions that, when executed individually or collectively by the at least one processor, cause, among the plurality of colors included in the color set, the weights for colors other than the color designated as the candidate light source color to be initialized and updated as discrete values, and the weight for the color designated as the candidate light source color to be initialized and updated as a continuous value.
- The electronic device of claim 1, wherein the memory further stores instructions that, when executed individually or collectively by the at least one processor, cause the weights for colors other than the color designated as the candidate light source color to be updated, while the weights of the color set are predicted through the volume rendering, by selecting the closest value among the predetermined discrete values.
- The electronic device of claim 1, wherein the memory further stores instructions that, when executed individually or collectively by the at least one processor, cause the volume rendering for predicting the weights of the color set and the intrinsic color of the object to be repeated until the difference between the result of the volume rendering and the plurality of two-dimensional images falls within a predetermined error.
- The electronic device of claim 1, wherein the memory further stores instructions that, when executed individually or collectively by the at least one processor, cause the colors and weights of the color set and the intrinsic color of the object to be reflected in the color designated as the candidate light source color while the intrinsic color of the object is predicted through the volume rendering.
- The electronic device of claim 1, wherein the memory further stores instructions that, when executed individually or collectively by the at least one processor, cause the scene components of the 3D space to include the degree of luminescence of an object, the material of the object, and the light of the surrounding environment.
- The electronic device of claim 1, wherein the memory further stores instructions that, when executed individually or collectively by the at least one processor, cause the physically based rendering to predict the scene components of the 3D space using a surface property prediction model and an albedo prediction model of the 3D space.
- The electronic device of claim 1, wherein the memory further stores instructions that, when executed individually or collectively by the at least one processor, cause an emphasis effect to be displayed on the first light source of the image of the first viewpoint and the image to be output through the display.
- The electronic device of claim 1, wherein the memory further stores instructions that, when executed individually or collectively by the at least one processor, cause the electronic device to receive a touch or drag user input on the screen displaying the image of the first viewpoint, and to output a rendered screen for a changed viewpoint based on the touch or drag user input.
- The electronic device of claim 1, wherein the memory further stores instructions that, when executed individually or collectively by the at least one processor, cause the electronic device to receive a user input regarding the first light source included in the image of the first viewpoint output through the display, to reflect the light source properties changed by the user input through the physically based rendering, and to output the resulting physically based rendering image through the display, wherein the user input changes at least one of the color, intensity, position, size, or shape of the light source.
- A method of an electronic device, the method comprising: acquiring a plurality of two-dimensional images captured at a plurality of viewpoints of a first scene including a first light source; generating a color set including some of the plurality of colors included in the plurality of two-dimensional images; predicting the structure of the 3D space and the radiance of the first scene through volume rendering of the plurality of two-dimensional images; selecting a first color from the color set as a candidate light source color; predicting, through the volume rendering, the weights of the color set and the intrinsic color of an object at each pixel of the 3D space; predicting scene components of the 3D space through physically based rendering of the plurality of two-dimensional images; identifying the first color as the color of the first light source when the error between the pixel values of the first scene resulting from the physically based rendering and the plurality of two-dimensional images is within a first threshold; returning to the selecting of a candidate light source color from the color set when the error is greater than the first threshold; and generating an image of a first viewpoint in the 3D space when the physically based rendering is completed.
- The method of claim 12, wherein generating the color set including some of the plurality of colors included in the plurality of two-dimensional images comprises generating a color set including colors corresponding to the vertices of a three-dimensional polyhedron (convex hull) formed by connecting the outermost boundary points, in a color space composed of RGB values, among the points corresponding to the plurality of colors included in the plurality of two-dimensional images.
- The method of claim 12, wherein predicting the weights of the color set and the intrinsic color of the object at each pixel of the 3D space through the volume rendering comprises: initializing and updating, among the plurality of colors included in the color set, the weights for colors other than the color designated as the candidate light source color as discrete values; and initializing and updating the weight for the color designated as the candidate light source color as a continuous value.
- The method of claim 12, wherein predicting the weights of the color set and the intrinsic color of the object at each pixel of the 3D space through the volume rendering comprises updating the weight for a color other than the color designated as the candidate light source color by selecting the closest value among the predetermined discrete values.
- The method of claim 12, wherein predicting the weights of the color set and the intrinsic color of the object at each pixel of the 3D space through the volume rendering comprises reflecting the colors and weights of the color set and the intrinsic color of the object in the color designated as the candidate light source color.
- The method of claim 12, wherein the scene components of the 3D space include the degree of luminescence of an object, a material, and the light of the surrounding environment.
- The method of claim 12, wherein the physically based rendering predicts the scene components of the 3D space using a surface property prediction model and an albedo prediction model of the 3D space.
- The method of claim 12, further comprising: displaying an emphasis effect on the first light source of the image of the first viewpoint and outputting the image through a display; receiving a touch or drag user input on the screen displaying the image of the first viewpoint; and outputting a rendered screen for a changed viewpoint based on the touch or drag user input.
- A non-transitory computer-readable storage medium storing instructions that, when executed by one or more processors, cause the one or more processors to perform: acquiring a plurality of two-dimensional images captured at a plurality of viewpoints of a first scene including a first light source; generating a color set including some of the plurality of colors included in the plurality of two-dimensional images; predicting the structure of the 3D space and the radiance of the first scene through volume rendering of the plurality of two-dimensional images; selecting a first color from the color set as a candidate light source color; predicting, through the volume rendering, the weights of the color set and the intrinsic color of an object at each pixel of the 3D space; predicting scene components of the 3D space through physically based rendering of the plurality of two-dimensional images; identifying the first color as the color of the first light source when the error between the pixel values of the first scene resulting from the physically based rendering and the plurality of two-dimensional images is within a first threshold; returning to the selecting of a candidate light source color from the color set when the error is greater than the first threshold; and generating an image of a first viewpoint in the 3D space when the physically based rendering is completed.
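Claims 2 and 13 build the color set from the vertices of the convex hull of the image colors in RGB space. The sketch below illustrates that geometry in a simplified way, by probing for extreme points along sampled directions; every extreme point found this way is a true hull vertex. The function name and direction sampling are illustrative assumptions, not part of the disclosure.

```python
from itertools import product

def hull_palette(colors, directions=None):
    """Approximate the convex-hull vertices of an RGB point cloud by taking,
    for each probe direction, the color with the largest dot product.
    Interior colors can never win a probe, so only hull vertices survive."""
    if directions is None:
        # probe toward the 8 corners of RGB space: all (+-1, +-1, +-1) vectors
        directions = list(product((-1.0, 1.0), repeat=3))
    palette = []
    for d in directions:
        extreme = max(colors, key=lambda c: c[0]*d[0] + c[1]*d[1] + c[2]*d[2])
        if extreme not in palette:
            palette.append(extreme)
    return palette
```

In practice an exact 3D convex-hull routine (e.g., `scipy.spatial.ConvexHull` over the N×3 array of pixel colors, whose `vertices` attribute gives the vertex indices) would return the full vertex set; the direction-probing version above only illustrates why the outermost colors of the cloud form the palette.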
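Claims 1 and 12 predict the structure and radiance of the 3D space through volume rendering. A minimal sketch of the standard NeRF-style quadrature along one camera ray is shown below; the per-sample densities, colors, and step sizes are inputs, and all names are illustrative, since the disclosure does not spell out this exact formulation.

```python
import math

def composite_ray(densities, colors, deltas):
    """NeRF-style volume rendering along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i = prod_{j<i} exp(-sigma_j * delta_j) is the transmittance."""
    T = 1.0                   # light not yet absorbed by earlier samples
    pixel = [0.0, 0.0, 0.0]   # composited RGB radiance
    for sigma, color, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)   # opacity of this sample
        weight = T * alpha                       # contribution of this sample
        pixel = [p + weight * c for p, c in zip(pixel, color)]
        T *= 1.0 - alpha                         # attenuate for later samples
    return pixel
```

Fitting the densities and colors so that rays composited this way reproduce the captured multi-view images is what recovers the scene's 3D structure and radiance.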
Description
Device and rendering method including light source restoration

Various embodiments of the present disclosure relate to a rendering method and apparatus including light source restoration.

Rendering is the process of converting 3D models or scenes into 2D images or animations. This process plays a crucial role in computer graphics and is used in both real-time applications (e.g., games) and non-real-time applications such as movies and animations. Rendering primarily involves methods for realistically representing a scene by calculating various elements, such as light sources, camera angles, and material properties. Rendering techniques include, for example, rasterization, ray tracing, and path tracing; each method can be selected based on a balance between speed and quality. For instance, while ray tracing enables realistic lighting effects, its high computational cost long limited its use in real-time environments; recent hardware advancements, however, have led to its increasing use in games.

In addition, neural-network-based rendering technology has recently received significant attention. This technology focuses on reproducing scenes more efficiently and realistically by utilizing deep learning. A representative example is Neural Radiance Fields (NeRF). NeRF reconstructs a 3D scene from only a few photographs and allows the viewpoint to be moved freely. NeRF provides high-quality results with less data and computation than conventional rendering methods and is being applied in various fields such as film, virtual reality (VR), and digital twins.

The information described above may be provided as related art for the purpose of aiding understanding of the present disclosure. No claim or determination is made as to whether any of the foregoing may be applied as prior art related to the present disclosure.

FIG. 1 is a block diagram of an electronic device in a network environment according to various embodiments. FIG. 2 is an example of 3D rendering for a single-light-source multi-view input image of an electronic device according to one embodiment. FIG. 3 is a flowchart illustrating a rendering operation including the restoration of a light source of an electronic device according to one embodiment. FIG. 4 is a flowchart illustrating the operation of an electronic device generating a color set according to one embodiment of the present disclosure. FIG. 5 is a flowchart illustrating the operation of an electronic device restoring a light source using a color set according to one embodiment of the present disclosure. FIGS. 6A and 6B are flowcharts illustrating detailed operations of an electronic device restoring a light source according to one embodiment of the present disclosure. FIG. 7 is a flowchart illustrating the operation of an electronic device modifying a light source of a rendered screen according to user input, according to one embodiment of the present disclosure. FIGS. 8A and 8B are examples of lighting editing screens according to one embodiment of the present disclosure.

Hereinafter, embodiments of the present disclosure are described in detail with reference to the drawings so that those skilled in the art can easily practice them. However, the present disclosure may be embodied in various different forms and is not limited to the embodiments described herein. In relation to the description of the drawings, the same or similar reference numerals may be used for identical or similar components. Furthermore, in the drawings and related descriptions, descriptions of well-known functions and configurations may be omitted for clarity and brevity. An embodiment of the present disclosure will be described below with reference to the attached drawings.

FIG. 1 is a block diagram of an electronic device in a network environment according to various embodiments. Referring to FIG. 1, in a network environment (100), an electronic device (101) may communicate with an electronic device (102) through a first network (198) (e.g., a short-range wireless communication network), or with at least one of an electronic device (104) or a server (108) through a second network (199) (e.g., a long-range wireless communication network). According to one embodiment, the electronic device (101) may communicate with the electronic device (104) through the server (108). According to one embodiment, the electronic device (101) may include a processor (120), memory (130), input module (150), sound output module (155), display module (160), audio module (170), sensor module (176), interface (177), connection terminal (178), haptic module (179), camera module (180), power management module (188), battery (189), communication module (190), subscriber identification module (196), or antenna module (197). In some embodiments, at least one of these components (e.g., connection terminal (