KR-20260065399-A - ELECTRONIC DEVICE AND METHOD FOR PROVIDING 3D IMAGE
Abstract
An electronic device comprises a display configured to provide a multi-view image including a plurality of different viewpoint images, a sensor, a memory storing instructions, and at least one processor including processing circuitry. When the instructions are executed individually or collectively by the at least one processor, the electronic device identifies a user's viewing position based on sensing data acquired through the sensor, identifies a viewpoint image corresponding to the viewing position among the plurality of different viewpoint images, obtains an output image by adjusting the brightness of the remaining viewpoint images excluding some viewpoint images including the identified viewpoint image, and displays the output image on the display.
Inventors
- 이화선
- 권영준
- 김규민
- 김범수
- 박재언
- 신민재
- 심휘준
- 정구철
- 조현준
Assignees
- 삼성전자주식회사
Dates
- Publication Date: 2026-05-08
- Application Date: 2024-11-01
Claims (20)
- An electronic device comprising: a display configured to provide a multi-view image including a plurality of different viewpoint images; a sensor; a memory storing instructions; and at least one processor including processing circuitry, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to: identify a user's viewing position based on sensing data acquired through the sensor; identify a viewpoint image corresponding to the user's viewing position among the plurality of different viewpoint images; obtain an output image by adjusting a brightness of remaining viewpoint images excluding some viewpoint images including the identified viewpoint image; and display the output image on the display.
- The electronic device of claim 1, wherein the multi-view image is an image in which a plurality of images of different viewpoints are sequentially and repeatedly arranged, and wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to adjust the brightness of the remaining viewpoint images excluding the identified viewpoint image.
- The electronic device of claim 2, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to obtain the output image by adjusting the brightness of the remaining viewpoint images such that the brightness of the remaining viewpoint images, excluding the identified viewpoint image, is reduced.
- The electronic device of claim 3, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to obtain the output image by adjusting the brightness of the remaining viewpoint images such that the brightness gradually decreases in proportion to a distance from the identified viewpoint image.
- The electronic device of claim 3, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to obtain the output image by adjusting the brightness of the remaining viewpoint images such that the brightness gradually decreases based on a weight that decreases according to the distance from the identified viewpoint image, wherein a magnitude of the decrease in the weight increases in proportion to the distance from the identified viewpoint image.
- The electronic device of claim 1, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to: convert the remaining viewpoint images from RGB color space images into YUV color space images; adjust a Y value of each YUV color space image while maintaining its U and V values; re-convert the YUV color space images with the adjusted Y value into RGB color space images; and obtain the output image based on the re-converted RGB color space images.
- The electronic device of claim 1, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to obtain the output image by blurring the remaining viewpoint images, excluding some viewpoint images including the identified viewpoint image, while adjusting their brightness.
- The electronic device of claim 1, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to, when a private mode is selected, obtain the output image by adjusting the brightness of the remaining viewpoint images excluding some viewpoint images including the identified viewpoint image.
- The electronic device of claim 8, wherein the instructions, when executed individually or collectively by the at least one processor, cause the electronic device to: provide a UI for setting a viewable viewing angle; and, in the private mode, identify the remaining viewpoint images, excluding the identified viewpoint image, based on the viewable viewing angle set through the UI.
- The electronic device of claim 1, wherein the display comprises: a display panel that displays the multi-view image including the plurality of different viewpoint images; and a viewing area separation unit disposed on a front of the display panel and providing an optical view corresponding to a different viewpoint for each viewing area.
- A control method of an electronic device comprising a display configured to provide a multi-view image including a plurality of different viewpoint images, the method comprising: identifying a user's viewing position based on sensing data acquired through a sensor; identifying a viewpoint image corresponding to the user's viewing position among the plurality of different viewpoint images; obtaining an output image by adjusting a brightness of remaining viewpoint images excluding some viewpoint images including the identified viewpoint image; and displaying the output image on the display.
- The control method of claim 11, wherein the multi-view image is an image in which a plurality of images of different viewpoints are sequentially and repeatedly arranged, and wherein the obtaining of the output image comprises adjusting the brightness of the remaining viewpoint images, excluding the identified viewpoint image, and displaying them on the display.
- The control method of claim 12, wherein the obtaining of the output image comprises obtaining the output image by adjusting the brightness of the remaining viewpoint images such that the brightness of the remaining viewpoint images, excluding the identified viewpoint image, is reduced.
- The control method of claim 13, wherein the obtaining of the output image comprises obtaining the output image by adjusting the brightness of the remaining viewpoint images such that the brightness gradually decreases in proportion to a distance from the identified viewpoint image.
- The control method of claim 13, wherein the obtaining of the output image comprises obtaining the output image by adjusting the brightness of the remaining viewpoint images such that the brightness gradually decreases based on a weight that decreases according to the distance from the identified viewpoint image, wherein a magnitude of the decrease in the weight increases in proportion to the distance from the identified viewpoint image.
- The control method of claim 11, wherein the obtaining of the output image comprises: converting the remaining viewpoint images from RGB color space images into YUV color space images; adjusting a Y value of each YUV color space image while maintaining its U and V values, and re-converting the YUV color space images with the adjusted Y value into RGB color space images; and obtaining the output image based on the re-converted RGB color space images.
- The control method of claim 11, wherein the obtaining of the output image comprises obtaining the output image by blurring the remaining viewpoint images, excluding some viewpoint images including the identified viewpoint image, while adjusting their brightness.
- The control method of claim 11, wherein the obtaining of the output image comprises, when a private mode is selected, obtaining the output image by adjusting the brightness of the remaining viewpoint images excluding some viewpoint images including the identified viewpoint image.
- The control method of claim 18, further comprising: providing a UI for setting a viewable viewing angle; and identifying, in the private mode, the remaining viewpoint images, excluding the identified viewpoint image, based on the viewable viewing angle set through the UI.
- A non-transitory computer-readable medium storing computer instructions that, when executed by a processor of an electronic device comprising a display configured to provide a multi-view image including a plurality of different viewpoint images, cause the electronic device to perform operations comprising: identifying a user's viewing position based on sensing data acquired through a sensor; identifying a viewpoint image corresponding to the user's viewing position among the plurality of different viewpoint images; obtaining an output image by adjusting a brightness of remaining viewpoint images excluding some viewpoint images including the identified viewpoint image; and displaying the output image on the display.
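The distance-based attenuation recited in claims 4-5 and 14-15 can be sketched in code. This is only an illustrative reading of the claims: the function names, the `falloff` parameter, and the specific falloff curve (a per-step weight decrease that grows linearly with distance, so the cumulative decrease is quadratic) are assumptions, not details taken from the disclosure.

```python
def attenuation_weights(num_views, identified, falloff=0.1):
    """Brightness weight per viewpoint image.

    The identified viewpoint keeps full brightness (weight 1.0).
    For each remaining viewpoint at distance d, the per-step weight
    decrease grows in proportion to the distance, so the cumulative
    decrease is falloff * (1 + 2 + ... + d) = falloff * d * (d + 1) / 2.
    """
    weights = []
    for view in range(num_views):
        d = abs(view - identified)
        decrease = falloff * d * (d + 1) / 2  # sum of growing steps
        weights.append(max(0.0, 1.0 - decrease))
    return weights


def apply_weights(view_images, weights):
    """Dim each viewpoint image by its weight (images represented as
    flat lists of pixel intensities in [0, 1] for this sketch)."""
    return [[pixel * w for pixel in img]
            for img, w in zip(view_images, weights)]
```

With five viewpoints and viewpoint 2 identified, the weights are symmetric around viewpoint 2 and drop faster the farther a viewpoint lies from it, matching the claim language that the magnitude of the weight decrease grows with distance.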
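The luminance-only adjustment in claims 6 and 16 (convert RGB to YUV, scale Y while keeping U and V, then convert back) can likewise be sketched. The BT.601 conversion matrix below is an assumed choice; the disclosure does not specify which RGB/YUV conversion variant is used.

```python
import numpy as np

# BT.601 RGB -> YUV matrix (an assumption; other variants exist).
RGB2YUV = np.array([[ 0.299,    0.587,    0.114  ],
                    [-0.14713, -0.28886,  0.436  ],
                    [ 0.615,   -0.51499, -0.10001]])
YUV2RGB = np.linalg.inv(RGB2YUV)


def dim_luminance(rgb, scale):
    """Scale only the Y (luma) channel of an RGB image.

    rgb: float array in [0, 1], shape (H, W, 3).
    U and V (chrominance) are kept unchanged, so perceived color
    is preserved while brightness is reduced.
    """
    yuv = rgb @ RGB2YUV.T
    yuv[..., 0] *= scale       # adjust Y, keep U and V
    out = yuv @ YUV2RGB.T      # re-convert to RGB
    return np.clip(out, 0.0, 1.0)
```

For a neutral gray input, U and V are approximately zero, so halving Y simply halves every channel; for colored inputs the hue is preserved while the luminance drops.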
Description
ELECTRONIC DEVICE AND METHOD FOR PROVIDING 3D IMAGE One or more embodiments of the present disclosure relate to electronic devices, for example, to an electronic device that renders and provides a 3D image and a method for providing a 3D image. The purpose of display technology is to convey natural three-dimensional space, or virtual spatial information unfolding before the eyes, to humans more accurately and realistically. To create natural and immersive images that make viewers feel as if they are actually at the depicted location, 3D stereoscopic video technology is being developed in various directions. 3D display technology includes glasses-based and glasses-free types, and encompasses stereoscopic 3D displays (e.g., glasses-based, two-viewpoint), multi-viewpoint displays, and holographic 3D display technologies. Because of the discomfort caused by wearing 3D glasses, glasses-free technology is being researched. Conventional stereoscopic 3D displays contain image information for two viewpoints for viewing stereoscopic images. Due to limitations in the field of view, the viewpoint changes discontinuously, leading to problems such as unnatural motion parallax and focus-convergence mismatch, which cause eye strain and dizziness. A viewpoint image refers to the image seen when a user looks at a 3D display in actual physical space. To address these issues, various methods of securing a wider field of view are being researched. From a hardware perspective, multi-viewpoint methods are being researched that can provide a more natural 3D effect by increasing the number of viewpoints. From a software perspective, rendering methods are being researched that track the user's position and provide an optimal 3D stereoscopic image by rearranging pixels appropriately for that position. The information described above may be provided as related art for the purpose of aiding understanding of the present disclosure.
No claim or determination is made as to whether any of the foregoing may be applied as prior art related to the present disclosure. The above and other aspects and features of specific embodiments of the present disclosure will become more apparent from the following description taken together with the accompanying drawings. FIG. 1 is a drawing for explaining the operation of an electronic device according to one or more embodiments. FIG. 2 is a block diagram of an electronic device according to one embodiment. FIG. 3 is a drawing for explaining an example of implementing a display according to one embodiment. FIG. 4 is a flowchart illustrating a method for providing a 3D image according to one embodiment. FIG. 5 is a drawing for explaining an example of a method for providing a 3D image according to one embodiment. FIG. 6 is a drawing for explaining an example operating method of an electronic device for providing a 3D image according to one embodiment. FIG. 7 is a drawing for explaining a method of providing an optical view according to one embodiment. FIGS. 8A to 8C are drawings for explaining a method for adjusting the brightness of an optical view according to one embodiment. The terms used in the embodiments of this disclosure have been selected as widely used general terms where possible, taking into account their functions within this disclosure; however, these terms may vary depending on the intent of those skilled in the art, precedents, the emergence of new technologies, and the like. In specific cases, terms have been selected at the applicant's discretion, and in such cases their meanings are described in detail in the description of the disclosure. Therefore, the terms used in this disclosure should be defined based on their meanings and the overall content of this disclosure, rather than merely their names.
In this specification, expressions such as "have," "may have," "include," or "may include" indicate the presence of the corresponding features (e.g., numerical values, functions, operations, or components such as parts) and do not exclude the presence of additional features. The expression "at least one of A or/and B" should be understood as representing "A," "B," or "A and B." Expressions such as "first" or "second" used in this specification may modify various components regardless of order and/or importance, and are used only to distinguish one component from another without limiting those components. Where it is stated that a component (e.g., a first component) is "(operatively or communicatively) coupled with/to" or "connected to" another component (e.g., a second component), it should be understood that the component may be directly connected to the other component or connected through yet another component (e.g., a third component). A singular expression includes the plural expression unless the context clearly indicates otherwise. In this application, terms su