EP-4742653-A1 - ELECTRONIC DEVICE AND CONTROL METHOD THEREFOR

EP 4742653 A1

Abstract

Disclosed is an electronic apparatus including: a display; at least one camera; a memory storing instructions; and at least one processor, comprising processing circuitry, wherein at least one processor is configured to execute the instructions, and to cause the electronic apparatus to: display a three-dimensional (3D) image including an object positioned in a 3D virtual space through the display, identify movement information corresponding to at least one of a user head or eyes in a 3D space where a user is positioned based on a captured image acquired by the camera, identify position movement information corresponding to the 3D virtual space based on the movement information, and control the display to display the object by changing the display position and depth of the object within the 3D virtual space included in the 3D image based on the position movement information.

Inventors

  • YOON, Daewon
  • CHO, Seungki

Assignees

  • Samsung Electronics Co., Ltd.

Dates

Publication Date
20260513
Application Date
20241205

Claims (15)

  1. An electronic apparatus comprising: a display; at least one camera; a memory storing instructions; and at least one processor, comprising processing circuitry; wherein at least one processor, individually and/or collectively, is configured to execute the instructions, and to cause the electronic apparatus to: display a three-dimensional (3D) image including an object positioned in a 3D virtual space through the display, identify movement information corresponding to at least one of a user head or eyes in a 3D space where a user is positioned based on a captured image acquired by the camera, identify position movement information corresponding to the 3D virtual space based on the movement information, and control the display to display the object by changing the display position and depth of the object within the 3D virtual space included in the 3D image based on the position movement information.
  2. The apparatus as claimed in claim 1, wherein at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to: identify first movement distance information and first movement direction information corresponding to the position movement information, and control the display to display the object by changing the display position and depth of the object in the 3D virtual space based on the first movement distance information and the first movement direction information.
  3. The apparatus as claimed in claim 2, wherein at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to: identify second movement distance information corresponding to a difference between a first position and a second position and second movement direction information from the first position to the second position based on a position of the at least one of the user head or eyes in the captured image acquired by the camera being changed from the first position to the second position, and identify the first movement distance information and the first movement direction information based on the second movement distance information and the second movement direction information.
  4. The apparatus as claimed in claim 3, wherein at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to identify the first movement distance information and the first movement direction information by scaling the second movement distance information and the second movement direction information to correspond to the 3D virtual space.
  5. The apparatus as claimed in claim 3, wherein at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to: identify the second movement distance information based on three-axis coordinate values corresponding to the first position and three-axis coordinate values corresponding to the second position, and identify the second movement direction information based on three-axis angular velocity values from the first position to the second position.
  6. The apparatus as claimed in claim 5, wherein at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to: acquire a vector value based on the three-axis coordinate values and the three-axis angular velocity values, and identify the second movement distance information and the second movement direction information corresponding to a relative movement of the at least one of the user head or eyes based on the acquired vector value.
  7. The apparatus as claimed in claim 5, wherein three-axis coordinate values corresponding to a first position and three-axis coordinate values corresponding to a second position each include x, y, and z coordinates in an XYZ space, and three-axis angular velocity values include roll, pitch, and yaw values.
  8. The apparatus as claimed in claim 1, wherein the 3D image includes a plurality of objects positioned in the 3D virtual space, and at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to: identify a moving target object among the plurality of objects based on a current position of the at least one of the user head or eyes, or identify the moving target object among the plurality of objects based on a user selection command.
  9. The apparatus as claimed in claim 1, wherein the camera includes a plurality of cameras spaced apart from each other by a specified distance, and at least one processor, individually and/or collectively, is configured to cause the electronic apparatus to: identify disparity information based on first and second captured images acquired by the plurality of cameras, and identify the movement information corresponding to the at least one of the user head or eyes in the 3D space where the user is positioned based on the disparity information.
  10. The apparatus as claimed in claim 1, wherein the display includes a light field display (LFD).
  11. A method of controlling an electronic apparatus, the method comprising: displaying a three-dimensional (3D) image including an object positioned in a 3D virtual space; identifying movement information corresponding to at least one of a user head or eyes in a 3D space where a user is positioned based on a captured image acquired by a camera; identifying position movement information corresponding to the 3D virtual space based on the movement information; and displaying the object by changing the display position and depth of the object within the 3D virtual space included in the 3D image based on the position movement information.
  12. The method as claimed in claim 11, wherein the displaying of the object by changing the display position and depth of the object includes: identifying first movement distance information and first movement direction information corresponding to the position movement information, and displaying the object by changing the display position and depth of the object in the 3D virtual space based on the first movement distance information and the first movement direction information.
  13. The method as claimed in claim 12, wherein in the identifying of the position movement information corresponding to the 3D virtual space, second movement distance information corresponding to a difference between a first position and a second position is identified and second movement direction information from the first position to the second position is identified based on a position of the at least one of the user head or eyes in the captured image acquired by the camera being changed from the first position to the second position, and in the displaying of the object by changing the display position and depth of the object, the first movement distance information and the first movement direction information are identified based on the second movement distance information and the second movement direction information.
  14. The method as claimed in claim 13, wherein the displaying of the object by changing the display position and depth of the object further includes identifying the first movement distance information and the first movement direction information by scaling the second movement distance information and the second movement direction information to correspond to the 3D virtual space.
  15. A non-transitory computer-readable medium which stores a computer instruction for causing an electronic apparatus to perform an operation when executed by at least one processor, comprising processing circuitry, individually and/or collectively, of the electronic apparatus, wherein the operation includes: displaying a three-dimensional (3D) image including an object positioned in a 3D virtual space, identifying movement information corresponding to at least one of a user head or eyes in a 3D space where a user is positioned based on a captured image acquired by a camera, identifying position movement information corresponding to the 3D virtual space based on the movement information, and displaying the object by changing the display position and depth of the object within the 3D virtual space included in the 3D image based on the position movement information.
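The control method recited in claims 11 through 14 can be illustrated as a simple tracking-to-rendering step: the head displacement observed in the captured image (the "second" movement information) is scaled into the 3D virtual space (the "first" movement information) and applied to the object's display position and depth. This is a minimal sketch, not the patent's implementation; the `Object3D` class, the `REAL_TO_VIRTUAL_SCALE` factor, and all function names are assumptions introduced for illustration.

```python
from dataclasses import dataclass

# Hypothetical factor mapping real-space displacement (e.g. metres)
# to virtual-space units; the patent only requires that scaling occurs.
REAL_TO_VIRTUAL_SCALE = 0.5

@dataclass
class Object3D:
    x: float
    y: float
    z: float  # z encodes the object's depth in the 3D virtual space

def update_object_position(obj: Object3D,
                           old_head: tuple,
                           new_head: tuple) -> Object3D:
    """Move the object in the virtual space based on head movement
    tracked between two captured frames (claims 11-14)."""
    # Second movement information: head displacement in real space.
    dx, dy, dz = (n - o for n, o in zip(new_head, old_head))
    # First movement information: the same displacement scaled to the
    # 3D virtual space (claim 14), applied to position and depth.
    s = REAL_TO_VIRTUAL_SCALE
    return Object3D(obj.x + s * dx, obj.y + s * dy, obj.z + s * dz)

obj = Object3D(0.0, 0.0, 1.0)
moved = update_object_position(obj, (0.0, 0.0, 0.0), (0.2, 0.0, -0.4))
print(moved)  # Object3D(x=0.1, y=0.0, z=0.8)
```

The sign and magnitude of the scale factor would in practice be chosen per display (e.g. a light field display per claim 10) so that the object's apparent motion matches the viewer's parallax expectation.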

Description

[Technical Field]
The present disclosure relates to an electronic apparatus and a control method thereof, and for example, to an electronic apparatus that provides a three-dimensional (3D) image, and a control method thereof.

[Background Art]
Various types of electronic devices have been developed and supplied in accordance with the development of electronic technology. For example, display devices used in various places such as homes, offices, and public places have been continuously developed in recent years. Stereoscopy refers to three-dimensional (3D) display technology. A 3D display, which is currently being commercialized, is mainly implemented using a binocular parallax method. The binocular parallax method may provide a three-dimensional effect on a single screen such as a television (TV) or theater screen. The binocular parallax method may be classified into a glasses method (stereoscopy) that uses an auxiliary device such as glasses and a glasses-free method (autostereoscopy). Continuous research has recently been conducted on the commercialization of a glasses-free light field display and a glasses-free 3D display using eye-tracking.
[Disclosure] [Technical Solution]
According to example embodiments of the present disclosure, provided is an electronic apparatus including: a display; at least one camera; a memory storing instructions; and at least one processor, comprising processing circuitry, wherein at least one processor, individually and/or collectively, is configured to execute the instructions, and to cause the electronic apparatus to: display a three-dimensional (3D) image including an object positioned in a 3D virtual space through the display, identify movement information corresponding to at least one of a user head or eyes in a 3D space where a user is positioned based on a captured image acquired by the camera, identify position movement information corresponding to the 3D virtual space based on the movement information, and control the display to display the object by changing the display position and depth of the object within the 3D virtual space included in the 3D image based on the position movement information.

At least one processor, individually and/or collectively, may be configured to cause the electronic apparatus to: identify first movement distance information and first movement direction information corresponding to the position movement information, and control the display to display the object by changing the display position and depth of the object in the 3D virtual space based on the first movement distance information and the first movement direction information.

At least one processor, individually and/or collectively, may be configured to cause the electronic apparatus to: identify second movement distance information corresponding to a difference between a first position and a second position and second movement direction information from the first position to the second position based on a position of the at least one of the user head or eyes in the captured image acquired by the camera being changed from the first position to the second position, and identify the first movement distance information and the first movement direction information based on the second movement distance information and the second movement direction information.

At least one processor, individually and/or collectively, may be configured to cause the electronic apparatus to: identify the first movement distance information and the first movement direction information by scaling the second movement distance information and the second movement direction information to correspond to the 3D virtual space.

At least one processor, individually and/or collectively, may be configured to cause the electronic apparatus to: identify the second movement distance information based on three-axis coordinate values corresponding to the first position and three-axis coordinate values corresponding to the second position, and identify the second movement direction information based on three-axis angular velocity values from the first position to the second position.

At least one processor, individually and/or collectively, may be configured to cause the electronic apparatus to: acquire a vector value based on the three-axis coordinate values and the three-axis angular velocity values, and identify the second movement distance information and the second movement direction information corresponding to a relative movement of the at least one of the user head or eyes based on the acquired vector value.
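The derivation of the second movement information described above (and in claims 5 through 7) can be sketched numerically: the movement distance follows from the two sets of three-axis (x, y, z) coordinates, and the movement direction from the displacement vector, combined with the head's three-axis angular values (roll, pitch, yaw). This is an illustrative sketch under assumed conventions (Euclidean distance, unit direction vector); the function name and return layout are not taken from the patent.

```python
import math

def movement_from_tracking(p1, p2, roll_pitch_yaw):
    """Derive second movement information from two tracked head
    positions p1 and p2 (three-axis coordinates in an XYZ space)
    and the head's roll/pitch/yaw values during the move."""
    # Displacement vector between the first and second positions.
    dx, dy, dz = (b - a for a, b in zip(p1, p2))
    # Second movement distance: Euclidean norm of the displacement.
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Unit direction vector of the translation (zero if no movement).
    if distance:
        direction = (dx / distance, dy / distance, dz / distance)
    else:
        direction = (0.0, 0.0, 0.0)
    roll, pitch, yaw = roll_pitch_yaw
    return distance, direction, (roll, pitch, yaw)

d, v, rpy = movement_from_tracking((0.0, 0.0, 0.0),
                                   (3.0, 0.0, 4.0),
                                   (0.0, 0.1, 0.0))
print(d)  # 5.0
```

In the patent's scheme, the distance and direction obtained this way would then be scaled to the 3D virtual space to yield the first movement information used to reposition the object.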
Three-axis coordinate values corresponding to a first position and three-axis coordinate values corresponding to a second position may each include x, y, and z coordinates in an XYZ space, and three-axis angular velocity values may include roll, pitch, and yaw values. The 3D image may include a plurality of objects positioned in the 3D virtual space, and at least one processor, individually and/or collectively, may