US-20260127830-A1 - SYSTEMS AND METHODS FOR PRESENTING PERSPECTIVE VIEWS OF AUGMENTED REALITY VIRTUAL OBJECT


Abstract

Examples of the disclosure describe systems and methods for sharing perspective views of virtual content. In an example method, a virtual object is presented, via a display, to a first user. A first perspective view of the virtual object is determined, wherein the first perspective view is based on a position of the virtual object and a position of the first user. The virtual object is presented, via a display, to a second user, wherein the virtual object is presented to the second user according to the first perspective view. A second perspective view of the virtual object is determined, wherein the second perspective view is based on an input from the first user. The virtual object is presented, via a display, to the second user, wherein presenting the virtual object to the second user comprises presenting a transition from the first perspective view to the second perspective view.
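The abstract's "transition from the first perspective view to the second perspective view" does not specify how the transition is produced. A minimal sketch, assuming simple linear interpolation between two view positions (the function name, coordinates, and interpolation scheme are illustrative assumptions, not taken from the disclosure):

```python
def interpolate_views(first_view, second_view, t):
    """Blend two view positions at parameter t in [0, 1].

    A hypothetical stand-in for the disclosed 'transition': the disclosure
    does not name an interpolation method, so plain linear blending is
    assumed here for illustration.
    """
    return tuple(a + (b - a) * t for a, b in zip(first_view, second_view))

# Five frames stepping from the first view to the second view
frames = [interpolate_views((0.0, 0.0, 2.0), (2.0, 0.0, 0.0), i / 4) for i in range(5)]
```

A renderer on the second user's device could present each intermediate frame in turn, so the second user sees the view glide from the first perspective to the second rather than jump.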

Inventors

  • Marc Alan McCall

Assignees

  • MAGIC LEAP, INC.

Dates

Publication Date
2026-05-07
Application Date
2025-12-30

Claims (16)

  1. A method using one or more processors to perform steps comprising: presenting a virtual object via a display of a first wearable device; determining a first view of the virtual object, wherein the first view is based on a first vector representing a first position and direction of the virtual object relative to the first wearable device and a first size representing a first distance between the first wearable device and the virtual object; presenting the virtual object via a display of a second wearable device according to the first view; receiving an input indicating a change of the first wearable device to a second position and direction and a second distance, different from the first position and direction and the first distance; determining a second view of the virtual object, wherein the second view is based on a second vector representing the second position and direction of the virtual object relative to the first wearable device at the first size; and presenting the virtual object via the display of the second wearable device according to the second view.
  2. The method of claim 1, wherein the first view of the virtual object is determined via a server.
  3. The method of claim 1, wherein the first view of the virtual object is determined via one or more sensors of the first wearable device.
  4. The method of claim 1, wherein the first view of the virtual object is determined based on an input to the first wearable device.
  5. The method of claim 1, wherein the first view and the second view each comprise a perspective view.
  6. The method of claim 1, wherein one or more of the first wearable device and the second wearable device comprises a wearable head mounted device.
  7. The method of claim 1, wherein presenting the virtual object via the display of the second wearable device according to the second view comprises presenting a transition from the first view to the second view.
  8. The method of claim 1, wherein the second vector is different from the first vector.
  9. A system comprising: one or more processors in communication with a first wearable device and further in communication with a second wearable device, the one or more processors configured to perform a method comprising: presenting a virtual object via a display of a first wearable device; determining a first view of the virtual object, wherein the first view is based on a first vector representing a first position and direction of the virtual object relative to the first wearable device and a first size representing a first distance between the first wearable device and the virtual object; presenting the virtual object via a display of a second wearable device according to the first view; receiving an input indicating a change of the first wearable device to a second position and direction and a second distance, different from the first position and direction and the first distance; determining a second view of the virtual object, wherein the second view is based on a second vector representing the second position and direction of the virtual object relative to the first wearable device at the first size; and presenting the virtual object via the display of the second wearable device according to the second view.
  10. The system of claim 9, wherein the first view of the virtual object is determined via a server.
  11. The system of claim 9, wherein the first view of the virtual object is determined via one or more sensors of the first wearable device.
  12. The system of claim 9, wherein the first view of the virtual object is determined based on an input to the first wearable device.
  13. The system of claim 9, wherein the first view and the second view each comprise a perspective view.
  14. The system of claim 9, wherein one or more of the first wearable device and the second wearable device comprises a wearable head mounted device.
  15. The system of claim 9, wherein presenting the virtual object via the display of the second wearable device according to the second view comprises presenting a transition from the first view to the second view.
  16. The system of claim 9, wherein the second vector is different from the first vector.
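Claims 1 and 9 recite the same core method: a "view" pairs a vector (the object's position and direction relative to the first wearable device) with a size derived from the device-object distance, and the second view combines a new vector with the first size. A minimal sketch of that structure (the coordinates, the inverse-distance size rule, and all names are illustrative assumptions, not from the claims):

```python
from dataclasses import dataclass
import math


@dataclass
class View:
    """A shared view: direction vector to the object plus an apparent size."""
    vector: tuple  # position/direction of the object relative to the device
    size: float    # derived from the device-object distance


def compute_view(device_pos, object_pos):
    """Determine a view from device and object positions (claimed 'first view')."""
    v = tuple(o - d for o, d in zip(object_pos, device_pos))
    distance = math.sqrt(sum(c * c for c in v))
    # Assumed rule for illustration: apparent size shrinks with distance
    return View(vector=v, size=1.0 / distance)


def second_view(first_view, new_device_pos, object_pos):
    """Per the claims: a new vector from the moved device, at the FIRST size."""
    v = tuple(o - d for o, d in zip(object_pos, new_device_pos))
    return View(vector=v, size=first_view.size)


first = compute_view((0.0, 0.0, 0.0), (0.0, 0.0, 2.0))
second = second_view(first, (2.0, 0.0, 0.0), (0.0, 0.0, 2.0))
assert second.size == first.size  # apparent size preserved across views
```

Both views would then be sent to the second wearable device for presentation; keeping the first size while updating the vector is what distinguishes the second view in the claim language.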

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 18/431,729, filed Feb. 2, 2024. U.S. patent application Ser. No. 18/431,729 is a continuation of U.S. patent application Ser. No. 18/152,035, filed Jan. 9, 2023. U.S. patent application Ser. No. 18/152,035 is a continuation of U.S. patent application Ser. No. 17/374,738, filed Jul. 13, 2021. U.S. patent application Ser. No. 17/374,738 is a continuation of U.S. patent application Ser. No. 16/582,880, filed Sep. 25, 2019. U.S. patent application Ser. No. 16/582,880 is a non-provisional application of U.S. Provisional Application No. 62/736,432, filed Sep. 25, 2018. This application claims priority to each of U.S. patent application Ser. No. 18/431,729, U.S. patent application Ser. No. 18/152,035, U.S. patent application Ser. No. 17/374,738, U.S. patent application Ser. No. 16/582,880, and U.S. Provisional Application No. 62/736,432, each of which is additionally incorporated herein by reference.

TECHNICAL FIELD

This disclosure relates in general to systems and methods for sharing and presenting visual signals, and in particular to systems and methods for sharing and presenting visual signals corresponding to content in a mixed reality environment.

BACKGROUND

Modern computing and display technologies have facilitated the development of systems for so-called “virtual reality” or “augmented reality” experiences, wherein digitally reproduced images, or portions thereof, are presented to a user in a manner in which they seem to be, or may be perceived as, real. A virtual reality, or “VR,” scenario typically involves presentation of digital or virtual image information without transparency to other actual real-world visual input; an augmented reality, or “AR,” scenario typically involves presentation of digital or virtual image information as an augmentation to visualization of the actual world around the user. For example, referring to FIG.
1, an augmented reality scene (4) is depicted wherein a user of an AR technology sees a real-world park-like setting (6) featuring people, trees, and buildings in the background, and a concrete platform (1120). In addition to these items, the user of the AR technology also perceives that he “sees” a robot statue (1110) standing upon the real-world platform (1120), and a cartoon-like avatar character (2) flying by which seems to be a personification of a bumble bee, even though these elements (2, 1110) do not exist in the real world. Correct placement of this virtual imagery in the real world for life-like augmented reality (or “mixed reality”) requires a series of intercoupled coordinate frameworks.

The human visual perception system is very complex, and producing a VR or AR technology that facilitates a comfortable, natural-feeling, rich presentation of virtual image elements amongst other virtual or real-world imagery elements is challenging. For instance, head-worn AR displays (or helmet-mounted displays, or smart glasses) typically are at least loosely coupled to a user's head, and thus move when the user's head moves. Display components, such as eyepieces for a head-mounted display, may be positioned asymmetrically relative to a user's eyes. For example, a binocular system may place one eyepiece closer to, or farther from, a given eye (e.g., as compared to the complementary eyepiece and eye). In a monocular system, the monolithic eyepiece may be aligned at an angle, such that the left or right eye is not positioned similarly to the other eye. The motion of the user's head, and other changes to the user's position, further complicate this variation in fit.
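The "series of intercoupled coordinate frameworks" mentioned above can be illustrated with a single world-to-device transform: given a head pose, a world-fixed virtual object is re-expressed in the device's frame every frame, so it appears to stay put as the head moves. A simplified sketch restricted to yaw-only rotation (the pose convention and all names are assumptions for illustration; a real system would use a full 6-DOF pose):

```python
import math


def world_to_device(object_world, head_pos, head_yaw):
    """Re-express a world-fixed point in the head (device) frame.

    Simplified to yaw rotation about the vertical (y) axis; an actual AR
    system would use a full rotation matrix or quaternion for the head pose.
    """
    # Translate into the head frame...
    dx = object_world[0] - head_pos[0]
    dy = object_world[1] - head_pos[1]
    dz = object_world[2] - head_pos[2]
    # ...then undo the head's yaw rotation
    c, s = math.cos(head_yaw), math.sin(head_yaw)
    return (c * dx + s * dz, dy, -s * dx + c * dz)


# With zero yaw the transform is a pure translation
assert world_to_device((2.0, 0.0, 0.0), (1.0, 0.0, 0.0), 0.0) == (1.0, 0.0, 0.0)
```

Because the object's world coordinates never change, only the head pose does, re-running this transform per frame keeps the rendered object anchored in the real scene rather than glued to the display.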
As an example, if a user wearing a head-worn display views a virtual representation of a three-dimensional (3D) object on the display and walks around the area where the 3D object appears, that 3D object may be re-rendered for each viewpoint, giving the user the perception that he or she is walking around an object that occupies real space. If the head-worn display is used to present multiple objects within a virtual space (for instance, a rich virtual world), measurements of head pose (i.e., the location and orientation of the user's head) can be used to re-render the scene to match the user's dynamically changing head location and orientation and provide an increased sense of immersion in the virtual space. In AR systems, detection or calculation of head pose enables the display system to render virtual objects such that they appear to occupy a space in the real world in a manner that makes sense to the user. In some augmented reality technology, such as Google Glass®, virtual content is displayed in a fixed position. In such examples, the virtual content and the device share a common coordinate frame, as any motion of the device will similarly change the position of the virtual content. In some augmented reality or mixed reality systems, a series of coordinate frames ensures the virtual content appears fixed to