
US-12626455-B2 - Depth aware content rendering

US 12626455 B2

Abstract

An eXtended Reality (XR) system provides methodologies for clipping virtual content displayed to a user. The XR system generates an XR user interface with virtual content using an XR user interface model. The XR system generates clipped virtual content from the virtual content by clipping virtual content that is located outside of the user's stereoscopic field of view and provides the XR user interface containing the clipped virtual content to the user.

Inventors

  • Russell Douglas Patton
  • James Powderly

Assignees

  • Snap Inc.

Dates

Publication Date
2026-05-12
Application Date
2023-10-10

Claims (20)

  1. A computer-implemented method comprising: generating, by at least one processor, a 3D XR user interface comprising virtual content; determining, by the at least one processor, using 3D coordinate data of the virtual content, one or more portions of the virtual content located outside of a stereoscopic field of view of a user; determining, by the at least one processor, a near clipping plane at a first specified distance from a head-wearable apparatus worn by the user and a far clipping plane at a second specified distance from the head-wearable apparatus; performing, by the at least one processor, a conic projection from the near clipping plane to the far clipping plane, the conic projection having an opening angle using a start angle and an end angle of the stereoscopic field of view; generating, by the at least one processor, a clipping mask using the conic projection; clipping, by the at least one processor, from the XR user interface, the one or more portions of the virtual content located outside of the stereoscopic field of view of the user using the clipping mask; and providing, by the at least one processor, the XR user interface to the user.
  2. The computer-implemented method of claim 1, wherein clipping the portion of virtual content comprises: generating a clipping mask based on the stereoscopic field of view of the user; and clipping the virtual content not located within the stereoscopic field of view of the user using the clipping mask.
  3. The computer-implemented method of claim 2, wherein the clipping mask is a clipping volume.
  4. The computer-implemented method of claim 1, wherein clipping the portion of the virtual content comprises generating a monoscopic view of the virtual content in a left monoscopic field of view by clipping the virtual content from a right virtual content rendered image of the virtual content.
  5. The computer-implemented method of claim 1, wherein clipping the portion of the virtual content comprises generating a monoscopic view of the virtual content in a right monoscopic field of view by clipping the virtual content from a left virtual content rendered image of the virtual content.
  6. The computer-implemented method of claim 1, wherein clipping the portion of the virtual content comprises clipping the virtual content from a left virtual content rendered image of the virtual content and a right virtual content rendered image of the virtual content.
  7. The computer-implemented method of claim 1, wherein the XR user interface is provided to the user using a head-wearable apparatus.
  8. A machine comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the machine to perform operations comprising: generating a 3D XR user interface comprising virtual content; determining, using 3D coordinate data of the virtual content, one or more portions of the virtual content located outside of a stereoscopic field of view of a user; determining a near clipping plane at a first specified distance from a head-wearable apparatus worn by the user and a far clipping plane at a second specified distance from the head-wearable apparatus; performing a conic projection from the near clipping plane to the far clipping plane, the conic projection having an opening angle using a start angle and an end angle of the stereoscopic field of view; generating a clipping mask using the conic projection; clipping, from the XR user interface, the one or more portions of the virtual content located outside of the stereoscopic field of view of the user using the clipping mask; and providing the XR user interface to the user.
  9. The machine of claim 8, wherein clipping the portion of virtual content comprises: generating a clipping mask based on the stereoscopic field of view of the user; and clipping the virtual content not located within the stereoscopic field of view of the user using the clipping mask.
  10. The machine of claim 9, wherein the clipping mask is a clipping volume.
  11. The machine of claim 8, wherein clipping the portion of the virtual content comprises generating a monoscopic view of the virtual content in a left monoscopic field of view by clipping the virtual content from a right virtual content rendered image of the virtual content.
  12. The machine of claim 8, wherein clipping the portion of the virtual content comprises generating a monoscopic view of the virtual content in a right monoscopic field of view by clipping the virtual content from a left virtual content rendered image of the virtual content.
  13. The machine of claim 8, wherein clipping the portion of the virtual content comprises clipping the virtual content from a left virtual content rendered image of the virtual content and a right virtual content rendered image of the virtual content.
  14. The machine of claim 8, wherein the machine comprises a head-wearable apparatus.
  15. A non-transitory machine-readable storage medium including instructions that, when executed by a machine, cause the machine to perform operations comprising: generating, by at least one processor, a 3D XR user interface comprising virtual content; determining, by the at least one processor, using 3D coordinate data of the virtual content, one or more portions of the virtual content located outside of a stereoscopic field of view of a user; determining a near clipping plane at a first specified distance from a head-wearable apparatus worn by the user and a far clipping plane at a second specified distance from the head-wearable apparatus; performing a conic projection from the near clipping plane to the far clipping plane, the conic projection having an opening angle using a start angle and an end angle of the stereoscopic field of view; generating a clipping mask using the conic projection; clipping, by the at least one processor, from the XR user interface, the one or more portions of the virtual content located outside of the stereoscopic field of view of the user using the clipping mask; and providing, by the at least one processor, the XR user interface to the user.
  16. The non-transitory machine-readable storage medium of claim 15, wherein clipping the portion of virtual content comprises: generating a clipping mask based on the stereoscopic field of view of the user; and clipping the virtual content not located within the stereoscopic field of view of the user using the clipping mask.
  17. The non-transitory machine-readable storage medium of claim 15, wherein clipping the portion of the virtual content comprises generating a monoscopic view of the virtual content in a left monoscopic field of view by clipping the virtual content from a right virtual content rendered image of the virtual content.
  18. The non-transitory machine-readable storage medium of claim 15, wherein clipping the portion of the virtual content comprises generating a monoscopic view of the virtual content in a right monoscopic field of view by clipping the virtual content from a left virtual content rendered image of the virtual content.
  19. The non-transitory machine-readable storage medium of claim 15, wherein clipping the portion of the virtual content comprises clipping the virtual content from a left virtual content rendered image of the virtual content and a right virtual content rendered image of the virtual content.
  20. The non-transitory machine-readable storage medium of claim 15, wherein the XR user interface is provided to the user using a head-wearable apparatus.
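The pipeline recited in claim 1 — a near and far clipping plane, a conic projection whose opening angle is derived from the start and end angles of the stereoscopic field of view, and a clipping mask built from that projection — can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the names (`ConicClipVolume`, `clip_content`) and the numeric plane distances and angles are assumptions chosen for illustration, and a real XR renderer would typically perform such clipping per-primitive on the GPU rather than per-vertex on the CPU.

```python
import math
from dataclasses import dataclass

@dataclass
class ConicClipVolume:
    """Hypothetical clipping volume: a cone bounded by near and far planes.

    Coordinates are in headset space, with +z pointing forward from the
    head-wearable apparatus.
    """
    near: float        # near clipping plane distance from the headset
    far: float         # far clipping plane distance from the headset
    half_angle: float  # half of the conic projection's opening angle (radians)

    def contains(self, x: float, y: float, z: float) -> bool:
        """True if the point lies between the near and far planes and
        within the cone defined by the opening angle."""
        if not (self.near <= z <= self.far):
            return False
        # Radial distance from the view axis at depth z must stay inside
        # the cone's cross-section at that depth.
        return math.hypot(x, y) <= z * math.tan(self.half_angle)

def clip_content(vertices, volume):
    """Keep only content inside the clipping volume; portions outside the
    stereoscopic field of view are discarded before display."""
    return [v for v in vertices if volume.contains(*v)]

# Opening angle built from illustrative start and end angles of the
# stereoscopic field of view.
start_angle, end_angle = math.radians(-30), math.radians(30)
volume = ConicClipVolume(near=0.1, far=100.0,
                         half_angle=(end_angle - start_angle) / 2)

content = [(0.0, 0.0, 5.0),   # straight ahead: inside the cone
           (10.0, 0.0, 5.0),  # far off-axis: outside the cone
           (0.0, 0.0, 0.05)]  # closer than the near plane: clipped
print(clip_content(content, volume))  # -> [(0.0, 0.0, 5.0)]
```

The point-in-cone test doubles as the clipping mask: in a rasterizing renderer the same inequality would be evaluated per fragment (or baked into a stencil/depth mask) rather than per vertex as shown here.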

Description

TECHNICAL FIELD

The present disclosure relates generally to user interfaces and more particularly to user interfaces used for augmented or virtual reality.

BACKGROUND

A head-wearable apparatus may be implemented with a transparent or semi-transparent display through which a user of the head-wearable apparatus can view the surrounding environment. Such head-wearable apparatuses enable a user to see through the transparent or semi-transparent display to view the surrounding environment, and to also see virtual content (e.g., a rendering of a two-dimensional (2D) or three-dimensional (3D) graphic model, images, video, text, and so forth) that is generated for display to appear as a part of, and/or overlaid upon, the surrounding environment. This is typically referred to as "augmented reality" or "AR." A head-wearable apparatus may additionally completely occlude a user's visual field and display a virtual environment through which a user may move or be moved. This is typically referred to as "virtual reality" or "VR." In a hybrid form, a view of the surrounding environment is captured using cameras, and that view is then displayed to the user, along with augmentation, on displays that occlude the user's eyes. As used herein, the term eXtended Reality (XR) refers to augmented reality, virtual reality, and any hybrid of these technologies unless the context indicates otherwise. A user of the head-wearable apparatus may access and use a computer software application to perform various tasks or engage in an entertaining activity. To use the computer software application, the user interacts with a user interface provided by the head-wearable apparatus.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

In the drawings, which are not necessarily drawn to scale, like numerals may describe similar components in different views.
To easily identify the discussion of any particular element or act, the most significant digit or digits in a reference number refer to the figure number in which that element is first introduced. Some non-limiting examples are illustrated in the figures of the accompanying drawings, in which:

FIG. 1 is an illustration of a viewer viewing a real-world scene, in accordance with some examples.

FIG. 2A is a perspective view of a head-worn device, in accordance with some examples.

FIG. 2B illustrates a further view of the head-worn device of FIG. 2A, in accordance with some examples.

FIG. 3 is a diagrammatic representation of a machine in the form of a computer system within which a set of instructions may be executed to cause the machine to perform any one or more of the methodologies discussed herein, in accordance with some examples.

FIG. 4A illustrates a collaboration diagram of components of an XR system, in accordance with some examples.

FIG. 4B illustrates a depth aware content rendering method, in accordance with some examples.

FIG. 4C illustrates a display of virtual content and associated perceived physical objects or surfaces, in accordance with some examples.

FIG. 4D illustrates an XR system providing a display of virtual content to a user where a portion of the virtual content is in a stereoscopic field of view of a user and a portion of the virtual content is outside of the stereoscopic field of view of the user, in accordance with some examples.

FIG. 4E illustrates an XR system providing a display of virtual content to a user where virtual content outside of a stereoscopic field of view of a user has been clipped, in accordance with some examples.

FIG. 5 illustrates a system of an XR system having a head-wearable apparatus, in accordance with some examples.

FIG. 6 is a diagrammatic representation of a networked environment in which an XR system may be deployed, in accordance with some examples.

FIG. 7 is a diagrammatic representation of a data structure as maintained in a database, in accordance with some examples.

FIG. 8 is a diagrammatic representation of a messaging system that has both client-side and server-side functionality, in accordance with some examples.

FIG. 9 is a block diagram showing a software architecture, in accordance with some examples.

DETAILED DESCRIPTION

FIG. 1 is an illustration of a viewer viewing a real-world scene. Virtual content comprising virtual objects or virtual surfaces, such as virtual content 114, in the real-world scene appears to the viewer 102 as having depth when the virtual content 114 is located within the viewer's stereoscopic field of view 104. The stereoscopic field of view 104 is the intersection of the right eye field of view and the left eye field of view of a viewer 102. Virtual content that does not fall within the stereoscopic field of view 104, such as virtual content 110 and virtual content 116, may not appear as 3D objects to the viewer 102. In addition, virtual content, such as virtual content 112, may fall on an edge between the stereoscopic field of view 104 and a monoscopic field of view, such as right monoscopic field of