KR-20260068120-A - DETERMINING GAZE DIRECTION TO GENERATE AUGMENTED REALITY CONTENT

Abstract

The subject technology determines, using an eyewear device, a gaze direction within a field of view of a user. The subject technology generates an anchor point within the field of view based at least in part on the determined gaze direction. The subject technology identifies a surface corresponding to a horizon plane within the field of view. The subject technology determines a distance from the identified surface to the anchor point. The subject technology generates augmented reality content based at least in part on the determined distance. The subject technology renders the generated augmented reality content within the field of view for display by the eyewear device.
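The pipeline summarized in the abstract (cast the gaze ray, choose an anchor point from scene geometry, and measure its distance from a detected horizon plane) can be sketched as follows. This is a minimal illustration, assuming a z-up coordinate frame and a horizontal plane at height `plane_height`; all function and variable names are hypothetical and not taken from the patent.

```python
import numpy as np

def anchor_from_gaze(pupil_pos, gaze_dir, point_cloud, plane_height):
    """Sketch of the abstract's pipeline: cast the gaze ray into the scene,
    pick the feature point closest to the ray as the anchor, then measure
    the anchor's height above an assumed horizon plane at z = plane_height."""
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    # Project every feature point onto the gaze ray; keep points in front
    # of the user (positive distance along the ray).
    offsets = point_cloud - pupil_pos
    t = offsets @ gaze_dir
    valid = t > 0
    # Perpendicular distance of each candidate point from the gaze ray.
    perp = np.linalg.norm(offsets[valid] - np.outer(t[valid], gaze_dir), axis=1)
    anchor = point_cloud[valid][np.argmin(perp)]
    # Signed distance from the horizon plane to the anchor point.
    distance = anchor[2] - plane_height
    return anchor, distance
```

A usage example: with the pupil at the origin and the gaze along +x, the anchor is the cloud point lying closest to that ray, and the returned distance is its height above the plane.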

Inventors

  • Goodrich, Kyle

Assignees

  • Snap Inc.

Dates

Publication Date
2026-05-13
Application Date
2021-12-28
Priority Date
2020-12-31

Claims (20)

  1. A method comprising: determining, using an eyewear device, a gaze direction within a field of view of a user; generating an anchor point within the field of view based at least in part on the determined gaze direction; identifying a surface corresponding to a horizon plane within the field of view, wherein identifying the surface corresponding to the horizon plane is based on a surface detection process, the surface detection process comprising performing a hit test on a point cloud to determine a first surface plane within the field of view, wherein the hit test determines an intersection with at least one feature point, from among feature points in the point cloud, corresponding to the first surface plane, and wherein performing the hit test comprises generating a three-dimensional line that includes a starting position corresponding to a position of a pupil or iris of the user and extends along the gaze direction to a particular feature point corresponding to the first surface plane within the field of view; determining a distance from the identified surface to the anchor point; generating augmented reality content based at least in part on the determined distance; and rendering the generated augmented reality content within the field of view for display by the eyewear device.
  2. The method of claim 1, wherein generating the anchor point comprises selecting a point within the field of view by performing a ray projection operation based at least in part on the gaze direction, the selected point corresponding to the anchor point.
  3. The method of claim 2, wherein the ray projection operation comprises: determining a position of a pupil or iris of the user; projecting a ray from the position of the pupil or iris in a direction toward the field of view, the field of view comprising a set of pixels; determining at least one pixel from the field of view that intersects the ray; and selecting the at least one pixel as the anchor point.
  4. The method of claim 1, wherein determining the gaze direction is based on a head orientation of the user and a relative position of the pupil or iris, as determined using the eyewear device.
  5. The method of claim 1, further comprising generating a second anchor point within the identified surface.
  6. The method of claim 5, wherein determining the distance from the identified surface to the anchor point comprises: determining a first distance between the second anchor point and the anchor point within the field of view, the second anchor point being at a position below the anchor point; and selecting a particular position along the first distance between the second anchor point and the anchor point.
  7. The method of claim 6, wherein rendering the generated augmented reality content within the field of view for display by the eyewear device comprises: generating a first three-dimensional object; and rendering the first three-dimensional object at the particular position.
  8. The method of claim 7, further comprising: generating a second three-dimensional object; and rendering the second three-dimensional object at a second position above or below the first three-dimensional object.
  9. The method of claim 8, wherein the first three-dimensional object is a different type of object than the second three-dimensional object.
  10. The method of claim 8, wherein the first three-dimensional object is a same type of object as the second three-dimensional object.
  11. A system comprising: a processor; and a memory including instructions that, when executed by the processor, cause the processor to perform operations comprising: determining, using an eyewear device, a gaze direction within a field of view of a user; generating an anchor point within the field of view based at least in part on the determined gaze direction; identifying a surface corresponding to a horizon plane within the field of view, wherein identifying the surface corresponding to the horizon plane is based on a surface detection process, the surface detection process comprising performing a hit test on a point cloud to determine a first surface plane within the field of view, wherein the hit test determines an intersection with at least one feature point, from among feature points in the point cloud, corresponding to the first surface plane, and wherein performing the hit test comprises generating a three-dimensional line that includes a starting position corresponding to a position of a pupil or iris of the user and extends along the gaze direction to a particular feature point corresponding to the first surface plane within the field of view; determining a distance from the identified surface to the anchor point; generating augmented reality content based at least in part on the determined distance; and rendering the generated augmented reality content within the field of view for display by the eyewear device.
  12. The system of claim 11, wherein generating the anchor point comprises selecting a point within the field of view by performing a ray projection operation based at least in part on the gaze direction, the selected point corresponding to the anchor point.
  13. The system of claim 12, wherein the ray projection operation comprises: determining a position of a pupil or iris of the user; projecting a ray from the position of the pupil or iris in a direction toward the field of view, the field of view comprising a set of pixels; determining at least one pixel from the field of view that intersects the ray; and selecting the at least one pixel as the anchor point.
  14. The system of claim 11, wherein determining the gaze direction is based on a head orientation of the user and a relative position of the pupil or iris, as determined using the eyewear device.
  15. The system of claim 11, wherein the operations further comprise generating a second anchor point within the identified surface.
  16. The system of claim 15, wherein determining the distance from the identified surface to the anchor point comprises: determining a first distance between the second anchor point and the anchor point within the field of view, the second anchor point being at a position below the anchor point; and selecting a particular position along the first distance between the second anchor point and the anchor point.
  17. The system of claim 16, wherein rendering the generated augmented reality content within the field of view for display by the eyewear device comprises: generating a first three-dimensional object; and rendering the first three-dimensional object at the particular position.
  18. The system of claim 17, wherein the operations further comprise: generating a second three-dimensional object; and rendering the second three-dimensional object at a second position above or below the first three-dimensional object.
  19. The system of claim 18, wherein the first three-dimensional object is a different type of object than the second three-dimensional object.
  20. A non-transitory computer-readable medium comprising instructions that, when executed by a computing device, cause the computing device to perform operations comprising: determining, using an eyewear device, a gaze direction within a field of view of a user; generating an anchor point within the field of view based at least in part on the determined gaze direction; identifying a surface corresponding to a horizon plane within the field of view, wherein identifying the surface corresponding to the horizon plane is based on a surface detection process, the surface detection process comprising performing a hit test on a point cloud to determine a first surface plane within the field of view, wherein the hit test determines an intersection with at least one feature point, from among feature points in the point cloud, corresponding to the first surface plane, and wherein performing the hit test comprises generating a three-dimensional line that includes a starting position corresponding to a position of a pupil or iris of the user and extends along the gaze direction to a particular feature point corresponding to the first surface plane within the field of view; determining a distance from the identified surface to the anchor point; generating augmented reality content based at least in part on the determined distance; and rendering the generated augmented reality content within the field of view for display by the eyewear device.
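The hit test recited in claims 1, 11, and 20 (a three-dimensional line that starts at the pupil position and extends along the gaze direction to a feature point of the point cloud) can be sketched as a ray-versus-point-cloud proximity query. This is an illustrative assumption, not the patent's implementation: the `tolerance` threshold and all names are hypothetical, and a real surface detection process would typically go on to fit the first surface plane to the intersected points.

```python
import numpy as np

def hit_test(origin, direction, points, tolerance=0.05):
    """Sketch of the claimed hit test: build a 3-D ray from the pupil
    position along the gaze direction and return the nearest feature
    point of the cloud that the ray passes within `tolerance` of.
    Returns None when no feature point is intersected."""
    direction = direction / np.linalg.norm(direction)
    offsets = points - origin
    t = offsets @ direction  # signed distance of each point along the ray
    # Perpendicular distance of each point from the ray.
    perp = np.linalg.norm(offsets - np.outer(t, direction), axis=1)
    hits = np.where((t > 0) & (perp < tolerance))[0]
    if hits.size == 0:
        return None
    # Of the intersected points, return the one closest to the origin.
    return points[hits[np.argmin(t[hits])]]
```

For example, with the ray pointing along +z from the origin, a feature point at (0.01, 0, 1) lies within the default tolerance and is nearer than one at (0, 0, 2), so it is returned as the intersection.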

Description

Determining Gaze Direction to Generate Augmented Reality Content

This application claims the benefit of priority of U.S. provisional patent application No. 63/133,143, filed December 31, 2020, which is hereby incorporated by reference herein in its entirety for all purposes.

With the increased use of digital images, the availability of portable computing devices, the availability of increased capacity in digital storage media, and the increased bandwidth and accessibility of network connections, digital images have become a part of the daily lives of an increasing number of people. Some electronics-enabled eyewear devices, such as so-called smart glasses, allow users to interact with virtual content while engaging in certain activities. Users wear these eyewear devices and can view a real-world environment through them while interacting with virtual content that is displayed by the devices.

To facilitate identification of the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced. FIG. 1 is a diagrammatic representation of a networked environment in which the present disclosure may be deployed, in accordance with some example embodiments. FIG. 2 is a diagrammatic representation of a messaging client application, in accordance with some example embodiments. FIG. 3 is a diagrammatic representation of a data structure as maintained in a database, in accordance with some example embodiments. FIG. 4 is a diagrammatic representation of a message, in accordance with some example embodiments. FIG. 5 shows a front perspective view of an eyewear device in the form of a pair of smart glasses that include an eyewear system, in accordance with one example embodiment. FIG. 6 is a schematic diagram illustrating a structure of the message annotations, as described in FIG. 4, including additional information corresponding to a given message, in accordance with some embodiments. FIG. 7 is a block diagram illustrating various modules of an eyewear system, in accordance with certain example embodiments. FIGS. 8A and 8B illustrate examples of tracking a gaze direction to perform an operation or operations by the eyewear system, in accordance with implementations of the subject technology. FIG. 9 illustrates examples of AR content generated within the user's field of view based on the user's determined gaze direction while using the eyewear device. FIG. 10 is a flowchart illustrating a method, in accordance with certain example embodiments. FIG. 11 is a block diagram illustrating a software architecture in which the present disclosure may be implemented, in accordance with some example embodiments. FIG. 12 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed, causing the machine to perform any one or more of the methodologies discussed herein, in accordance with some example embodiments.

Users with a range of interests from various locations can capture digital images of various objects and make the captured images available to others via networks such as the Internet. To enhance users' experiences with digital images and provide various features, it can be difficult and computationally intensive to enable computing devices to perform image processing operations on various objects and/or features captured in a wide range of changing conditions (e.g., changes in image scales, noise, lighting, motion, or geometric distortion).

Augmented reality technology aims to bridge a gap between virtual environments and a real-world environment by providing an enhanced real-world environment that is augmented with electronic information. As a result, the electronic information appears to be part of the real-world environment as perceived by a user. In an example, augmented reality technology further provides a user interface to interact with the electronic information that is overlaid in the enhanced real-world environment.
Augmented reality (AR) systems enable real and virtual environments to be combined in varying degrees to facilitate real-time interaction by users. As described herein, such AR systems can therefore include various possible combinations of real and virtual environments, ranging from augmented reality that is closer to the real environment to virtual environments that primarily contain virtual elements (e.g., lacking real elements). In this manner, a real environment can be connected with a virtual environment by the AR system. A user immersed in an AR environment can navigate through that environment, and the AR system can track the user's viewpoint to provide a visualization based on how the user is situated within the environment. Augmented reality (AR) experiences may be provided in a messaging client application (or messaging system) as described in embodiments herein. Embodiments of the subject technology described herein enable various operations involving AR content to capture and modify such content with a given electronic device, such a