KR-20260066017-A - Head mounted display apparatus and operating method for the same

KR 20260066017 A

Abstract

A head-mounted display device is disclosed, comprising: an eye-tracking sensor for acquiring eye-tracking information of both eyes of a user; a depth sensor for acquiring depth information of one or more objects; and a processor for acquiring information about a gaze point based on the acquired eye-tracking information and determining measurement parameters of the depth sensor based on the information about the gaze point.
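The abstract's core idea — deriving the depth sensor's measurement parameters from the gaze point — can be illustrated with a minimal sketch. The function name, window size, and clamping logic below are hypothetical illustrations, not taken from the patent: they show one plausible way a 2D gaze point could be mapped to a measurement window kept inside the sensor frame.

```python
def roi_from_gaze(gaze_x, gaze_y, frame_w, frame_h, roi_w=64, roi_h=64):
    """Place a roi_w x roi_h measurement window around the gaze point,
    clamped so it stays fully inside the sensor frame (illustrative)."""
    left = min(max(gaze_x - roi_w // 2, 0), frame_w - roi_w)
    top = min(max(gaze_y - roi_h // 2, 0), frame_h - roi_h)
    return left, top, roi_w, roi_h

# A gaze point near the frame edge still yields an in-bounds window.
print(roi_from_gaze(630, 10, 640, 480))  # (576, 0, 64, 64)
```

A sensor driver could then restrict illumination or readout to this window rather than the full frame.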

Inventors

  • Bon-gon Koo (구본곤)
  • Jae-woo Ko (고재우)
  • Won-woo Lee (이원우)

Assignees

  • Samsung Electronics Co., Ltd. (삼성전자주식회사)

Dates

Publication Date
2026-05-12
Application Date
2026-04-21
Priority Date
2019-04-11

Claims (11)

  1. An electronic device comprising: an eye-tracking sensor; a depth sensor including a stereo camera and configured to acquire depth information using a stereo-image (SI) method; and at least one processor, wherein the at least one processor: acquires an image of at least one object in the real world using the stereo camera; obtains, using the eye-tracking sensor, the gaze directions of the user's left and right eyes; obtains two-dimensional (2D) position information of a gaze point based on the respective gaze directions of the left and right eyes; determines a preset area of the image as a region of interest (ROI) based on the 2D position information of the gaze point; and obtains depth information of at least one object within the region of interest using the preset area.
  2. The electronic device of claim 1, wherein the depth sensor includes a first camera and a second camera, and the at least one processor: obtains a first image of the at least one object using the first camera and a second image of the at least one object using the second camera; determines, as a first region of interest, a preset area corresponding to the 2D position information of the gaze point at a first point on the first image; and determines, as a second region of interest, a preset area corresponding to the 2D position information of the gaze point at a second point on the second image.
  3. The electronic device of claim 2, wherein the at least one processor obtains a difference image by computing the difference between the image of the first region of interest and the image of the second region of interest, and obtains depth information of at least one object within the region of interest based on the difference image.
  4. The electronic device of claim 2, wherein the at least one processor obtains an enlarged image of the region of interest by magnifying the first region of interest and the second region of interest using a zoom function, and obtains depth information of at least one object within the region of interest based on the enlarged image.
  5. The electronic device of claim 1, further comprising a display unit for displaying at least one object in the real world, wherein the at least one processor controls the display unit to display at least one virtual object on the region of interest based on the depth information of at least one object within the region of interest, and the virtual object is displayed so as to overlap the real world seen through the display unit.
  6. A method of operating an electronic device, the method comprising: acquiring an image of at least one object in the real world using a stereo camera included in a stereo-image (SI) type depth sensor; obtaining the gaze directions of the user's left and right eyes using an eye-tracking sensor of the electronic device; obtaining two-dimensional (2D) position information of a gaze point based on the obtained gaze directions of the left and right eyes; determining a preset area of the image as a region of interest based on the 2D position information of the gaze point; and obtaining depth information of at least one object using the preset area.
  7. The method of claim 6, wherein the stereo-image (SI) type depth sensor includes a first camera and a second camera; acquiring the image comprises acquiring a first image of the at least one object using the first camera and a second image of the at least one object using the second camera; and determining the preset area as the region of interest comprises determining, as a first region of interest, a preset area corresponding to a first point that corresponds to the 2D position information of the gaze point within the first image, and determining, as a second region of interest, a preset area corresponding to a second point that corresponds to the 2D position information of the gaze point within the second image.
  8. The method of claim 7, wherein obtaining depth information of at least one object within the region of interest comprises: obtaining a difference image by computing the difference between the image of the first region of interest and the image of the second region of interest; and obtaining depth information of at least one object within the region of interest based on the obtained difference image.
  9. The method of claim 8, wherein obtaining depth information of at least one object within the region of interest comprises: obtaining an enlarged image of the region of interest by magnifying the first region of interest and the second region of interest using a zoom function; and obtaining depth information of at least one object within the region of interest based on the enlarged image.
  10. The method of claim 6, further comprising displaying at least one virtual object on the region of interest based on the depth information of at least one object within the region of interest, wherein the virtual object is displayed so as to overlap the real world seen through a display unit of the electronic device.
  11. A computer-readable recording medium storing a program for executing, on a computer, the method of operation of any one of claims 6 through 10.
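The pipeline claimed above (left/right gaze directions → gaze point → ROI → stereo depth) can be sketched in simplified form. This is an illustration under stated assumptions, not the patent's implementation: the gaze rays are intersected in a 2D x-z plane, and depth within the ROI comes from the textbook stereo relation Z = f·B/d. All function names and numbers are illustrative.

```python
def gaze_point_2d(left_eye, left_dir, right_eye, right_dir):
    """Intersect the two gaze rays in the x-z plane (illustrative sketch).
    Solves left_eye + t*left_dir == right_eye + s*right_dir via Cramer's rule."""
    (ax, az), (dx1, dz1) = left_eye, left_dir
    (bx, bz), (dx2, dz2) = right_eye, right_dir
    det = -dx1 * dz2 + dx2 * dz1
    if abs(det) < 1e-12:
        raise ValueError("gaze rays are parallel: no convergence point")
    t = (-(bx - ax) * dz2 + dx2 * (bz - az)) / det
    return ax + t * dx1, az + t * dz1

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Textbook stereo-image depth: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Eyes 6 cm apart, gaze rays converging about 0.5 m ahead:
print(gaze_point_2d((-0.03, 0.0), (0.03, 0.5), (0.03, 0.0), (-0.03, 0.5)))
# A feature in the ROI with 84 px disparity (f = 700 px, 6 cm stereo baseline):
print(depth_from_disparity(700, 0.06, 84))
```

In a real device the difference (disparity) between the first and second ROI images would be found by correspondence matching rather than given directly, but the geometric relationship is the same.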

Description

Head-mounted display apparatus and operating method for the same

Various embodiments relate to a head-mounted display device, and a method of operating the same, that determines a gaze point in real space and acquires depth information using parameters optimized for that gaze point.

The real space we live in is described by three-dimensional coordinates. Humans perceive a three-dimensional space with a sense of depth by combining the visual information seen by both eyes. Photos and videos taken with ordinary digital devices, however, rely on technology that projects three-dimensional coordinates onto two dimensions, and therefore contain no information about the depth of the space. To convey this sense of space, 3D camera and display products are emerging that use two cameras together to capture and present images with a sense of depth.

To express spatial depth, depth information about the real world must be sensed. Conventional depth sensing has covered the entire range of space measurable by the depth sensor, without considering the user's region of interest. In particular, depth sensors that operate by projecting light must drive IR LEDs to project light (for example, infrared light) across the entire space, which increases power consumption. Acquiring depth information for the entire space also increases the computational load, which raises power consumption further. As the power consumption of depth sensors increases, it becomes difficult to integrate them into small devices. In addition, conventional depth sensing methods suffer from low depth-sensing accuracy because of the weaknesses inherent in each type of depth sensor.

FIG. 1 is a drawing showing an electronic device according to one embodiment. FIGS. 2 to 3D are drawings referenced to explain a method by which an electronic device according to one embodiment tracks a user's gaze. FIGS. 4A and 4B are drawings illustrating a method by which an electronic device according to one embodiment acquires depth information about a gaze point. FIGS. 5A and 5B are drawings for explaining a method in which an electronic device according to one embodiment determines measurement parameters of a depth sensor based on a user's gaze point. FIGS. 6 to 7B are drawings referenced to explain how an electronic device according to one embodiment determines measurement parameters when the depth sensor is of the time-of-flight (TOF) type. FIGS. 8 and 9 are drawings referenced to explain how an electronic device according to one embodiment determines measurement parameters when the depth sensor uses the structured light (SL) method. FIG. 10 is a drawing referenced to explain how an electronic device according to one embodiment determines measurement parameters when the depth sensor is of the stereo image (SI) type. FIG. 11 is a diagram illustrating a method by which an electronic device according to one embodiment displays a virtual object. FIG. 12 is a flowchart illustrating a method of operating an electronic device according to one embodiment. FIG. 13 is a diagram illustrating a method by which an electronic device according to one embodiment acquires depth information. FIG. 14 is a flowchart illustrating a method by which an electronic device according to one embodiment acquires depth information. FIG. 15 is a diagram showing an example in which an electronic device according to one embodiment repeatedly performs the depth information acquisition operations of FIG. 14. FIG. 16 is a diagram illustrating an example in which an electronic device according to one embodiment provides a virtual object in an augmented reality (AR) manner. FIG. 17 is a diagram showing an example of an electronic device according to one embodiment recognizing a human face using depth information. FIG. 18 is a block diagram showing the configuration of an electronic device according to one embodiment. FIG. 19 is a block diagram showing the configuration of an electronic device according to another embodiment. FIGS. 20 and 21 are drawings for explaining a method in which an electronic device according to one embodiment automatically adjusts focus. FIG. 22 is a diagram illustrating a method by which an electronic device according to one embodiment performs gaze-based spatial modeling.

The terms used in this specification are briefly explained below, and the invention is then described in detail. The terms used in this invention were selected from currently widely used general terms in view of their functions within the invention; however, these terms may vary depending on the intent of those skilled in the art, legal precedent, the emergence of new technologies, and so on. In certain cases, terms have been arbitrarily chosen by the applicant, and in such cases their meanings are described in detail in the relevant description.
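The power and compute argument in the background can be made concrete with a back-of-the-envelope count. Assuming a naive block-matching stereo search (the block size, resolution, and disparity range below are illustrative choices, not values from the patent), restricting matching to a small region of interest around the gaze point cuts the work by the ratio of the pixel areas:

```python
def stereo_matching_ops(width, height, max_disparity, block=9):
    """Rough operation count for naive block matching: every pixel is
    compared at max_disparity candidate offsets over a block x block window."""
    return width * height * max_disparity * block * block

full_frame = stereo_matching_ops(640, 480, 64)  # depth over the whole space
gaze_roi = stereo_matching_ops(64, 64, 64)      # depth only around the gaze point
print(full_frame // gaze_roi)  # 75 -> the full frame costs 75x the 64x64 ROI
```

The same ratio argument applies to illumination: an IR projector that only needs to cover the gaze region can emit proportionally less light than one covering the sensor's full field of view.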