
US-20260127832-A1 - Method and Device for Dynamic Determination of Presentation and Transitional Regions

US 20260127832 A1

Abstract

In one implementation, a method for dynamically determining the view distance of virtual content presented within a generated presentation region is disclosed. The method includes obtaining a first dimension associated with a physical environment; and detecting a request to cause presentation of virtual content. In response to detecting the request, the method also includes obtaining a second dimension associated with the virtual content; determining a view distance based at least in part on the first and second dimensions; and generating a presentation region for the virtual content based at least in part on the view distance. The method further includes presenting the virtual content within the presentation region.
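The abstract's core steps (obtain room dimension, obtain content dimension, derive a view distance, generate a region) can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the "preferred distance" heuristic, the `personal_radius` default, and all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Dimensions:
    width: float   # meters
    height: float  # meters
    depth: float   # meters

def determine_view_distance(env: Dimensions, content: Dimensions,
                            personal_radius: float = 0.5) -> tuple[float, bool]:
    """Derive a view distance from the physical and content dimensions.

    Hypothetical rule: content is best viewed from roughly twice its
    largest face dimension. If the room (minus a personal radius around
    the user) cannot provide that distance, the presentation region must
    supply a virtual distance beyond the room's physical bounds.
    """
    preferred = 2.0 * max(content.width, content.height)
    physical_limit = env.depth - personal_radius
    needs_virtual_distance = preferred > physical_limit
    return preferred, needs_virtual_distance

def generate_presentation_region(view_distance: float,
                                 content: Dimensions) -> dict:
    # Region sized to frame the content at the chosen view distance;
    # the 20% margin is an arbitrary illustrative choice.
    return {
        "distance": view_distance,
        "width": content.width * 1.2,
        "height": content.height * 1.2,
    }
```

For example, a 4 m-wide virtual screen in a 2 m-deep cabin would yield a view distance far greater than the room permits, so the region would present the content at an artificially greater virtual distance (cf. claim 6).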

Inventors

  • Benjamin H. Boesel
  • David H. Huang
  • Jonathan Perron
  • Shih-Sang Chiu

Assignees

  • APPLE INC.

Dates

Publication Date
2026-05-07
Application Date
2026-01-02

Claims (20)

  1. A method comprising: at a computing system including non-transitory memory and one or more processors, wherein the computing system is communicatively coupled to a display device and one or more input devices: obtaining a first dimension associated with a physical environment; detecting, via the one or more input devices, a request to cause presentation of virtual content; in response to detecting the request: obtaining a second dimension associated with the virtual content; determining a view distance based at least in part on the first and second dimensions; and generating a presentation region for the virtual content based at least in part on the view distance; and presenting, via the display device, the virtual content within the presentation region.
  2. The method of claim 1, wherein the first dimension comprises at least one of a width, height, depth, or volumetric dimension of the physical environment.
  3. The method of claim 1, wherein the second dimension comprises at least one of a width, height, depth, or preferred viewing size of the virtual content.
  4. The method of claim 1, wherein the request to cause presentation of the virtual content corresponds to one of a voice command, a gestural command, or a selection from a user interface (UI) menu.
  5. The method of claim 1, wherein determining the view distance includes comparing the first dimension and the second dimension.
  6. The method of claim 1, wherein the view distance is greater than a physical viewing distance permitted by the physical environment.
  7. The method of claim 1, wherein generating the presentation region includes positioning the presentation region at a virtual distance offset from a physical surface of the physical environment.
  8. The method of claim 1, wherein presenting the virtual content within the presentation region includes a phase-in animation for the virtual content by: initially presenting the virtual content without presenting the presentation region; and after a predefined time period, presenting the virtual content within the presentation region, wherein the presentation region is overlaid on the view of the physical environment.
  9. The method of claim 1, wherein the presentation region is generated based on a personal radius in addition to the first and second dimensions.
  10. The method of claim 1, further comprising: modifying the virtual content based at least in part on the first dimension, the second dimension, or the determined view distance.
  11. The method of claim 10, wherein the virtual content is larger than can be displayed within the physical environment without the presentation region.
  12. The method of claim 1, further comprising: detecting a change from a first camera pose to a second camera pose relative to the presentation region; and in response to detecting the change from the first camera pose to the second camera pose, updating the virtual content presented within the presentation region based on the second camera pose.
  13. The method of claim 12, further comprising: in response to detecting the change from the first camera pose to the second camera pose, updating an apparent view distance or scale of the virtual content based on the second camera pose.
  14. The method of claim 1, further comprising: detecting a change from a first camera pose to a second camera pose relative to the presentation region; and in response to detecting the change from the first camera pose to the second camera pose relative to the presentation region, presenting second virtual content within the presentation region.
  15. The method of claim 1, wherein the presentation region is overlaid on a view of the physical environment.
  16. The method of claim 1, wherein the presentation region is generated such that the virtual content appears at a scale corresponding to the determined view distance.
  17. The method of claim 1, wherein the view distance is determined further based on a personal radius associated with a user.
  18. The method of claim 1, wherein obtaining the first dimension includes obtaining the first dimension using one or more depth sensors, image sensors, or simultaneous localization and mapping data.
  19. A device comprising: one or more processors; a non-transitory memory; an interface for communicating with a display device and one or more input devices; and one or more programs stored in the non-transitory memory, which, when executed by the one or more processors, cause the device to: obtain a first dimension associated with a physical environment; detect, via the one or more input devices, a request to cause presentation of virtual content; in response to detecting the request: obtain a second dimension associated with the virtual content; determine a view distance based at least in part on the first and second dimensions; and generate a presentation region for the virtual content based at least in part on the view distance; and present, via the display device, the virtual content within the presentation region.
  20. A non-transitory memory storing one or more programs, which, when executed by one or more processors of a device with an interface for communicating with a display device and one or more input devices, cause the device to: obtain a first dimension associated with a physical environment; detect, via the one or more input devices, a request to cause presentation of virtual content; in response to detecting the request: obtain a second dimension associated with the virtual content; determine a view distance based at least in part on the first and second dimensions; and generate a presentation region for the virtual content based at least in part on the view distance; and present, via the display device, the virtual content within the presentation region.
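Claims 12, 13, and 16 describe updating the content's apparent view distance or scale when the camera pose changes. A minimal sketch of that rescaling, assuming a simple pinhole model in which angular size is proportional to size over distance (all names and the model itself are illustrative assumptions, not the claimed implementation):

```python
import math

def apparent_view_distance(camera_pos: tuple, region_pos: tuple) -> float:
    """Euclidean distance from the camera to the presentation region."""
    return math.dist(camera_pos, region_pos)

def update_content_scale(base_scale: float, determined_distance: float,
                         camera_pos: tuple, region_pos: tuple) -> float:
    """Rescale content after a camera-pose change (cf. claims 12-13).

    Scales the content so it subtends the same visual angle it would
    at the determined view distance, even when the physical camera is
    much closer to the region than that distance allows.
    """
    actual = apparent_view_distance(camera_pos, region_pos)
    if actual <= 0.0:
        return base_scale
    # Angular size ~ size / distance, so scale linearly with the ratio
    # of actual to determined distance to hold the angle constant.
    return base_scale * (actual / determined_distance)
```

For instance, a camera 2 m from a region whose determined view distance is 8 m would scale the content to a quarter of its base size, preserving its apparent proportions.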

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. Non-Provisional patent application Ser. No. 18/123,478, filed on Mar. 20, 2023, which is a continuation of Intl. Patent App. No. PCT/US2021/043293, filed on Jul. 27, 2021, which claims priority to U.S. Provisional Patent App. No. 63/080,923, filed on Sep. 21, 2020, all of which are hereby incorporated by reference in their entireties.

TECHNICAL FIELD

The present disclosure generally relates to content delivery and, in particular, to systems, methods, and devices for dynamically determining presentation and transitional regions for content delivery.

BACKGROUND

In some instances, it may be difficult to present extended reality (XR) content at the proper proportions due to a lack of view distance while in tight quarters such as an airplane or an automobile. However, simply overlaying the XR content in a "portal" that allows for an artificially greater view distance may be jarring to the user due to visual discontinuities therebetween.

BRIEF DESCRIPTION OF THE DRAWINGS

So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.

FIG. 1 is a block diagram of an example operating architecture in accordance with some implementations.
FIG. 2 is a block diagram of an example controller in accordance with some implementations.
FIG. 3 is a block diagram of an example electronic device in accordance with some implementations.
FIG. 4A is a block diagram of an example content delivery architecture in accordance with some implementations.
FIG. 4B illustrates example data structures for first and second sets of characteristics in accordance with some implementations.
FIGS. 5A-5E illustrate a sequence of instances for a dynamic content delivery scenario in accordance with some implementations.
FIG. 6 illustrates an instance for another content delivery scenario in accordance with some implementations.
FIG. 7 is a flowchart representation of a method of dynamically determining presentation and transitional regions for content delivery in accordance with some implementations.

In accordance with common practice, the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method, or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.

SUMMARY

Various implementations disclosed herein include devices, systems, and methods for dynamically determining presentation and transitional regions for content delivery. According to some implementations, the method is performed at a computing system including non-transitory memory and one or more processors, wherein the computing system is communicatively coupled to a display device and one or more input devices. The method includes obtaining a first set of characteristics associated with a physical environment; and detecting, via the one or more input devices, a request to cause presentation of virtual content (sometimes also referred to herein as "XR content" or "graphical content"). In response to detecting the request, the method also includes obtaining a second set of characteristics associated with the virtual content; generating a presentation region for the virtual content based at least in part on the first and second sets of characteristics; and generating a transitional region provided to at least partially surround the presentation region based at least in part on the first and second sets of characteristics. The method further includes concurrently presenting, via the display device, the virtual content within the presentation region and the transitional region at least partially surrounding the presentation region.

In accordance with some implementations, an electronic device includes one or more displays, one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions, which, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes: one or more displays, one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein. In accordance with some implementations, a co
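The Summary's transitional region, generated to at least partially surround the presentation region and soften the visual discontinuity the Background describes, might be sketched as follows. The `feather` margin and linear opacity falloff are illustrative assumptions, not details taken from the disclosure.

```python
def generate_transitional_region(presentation_region: dict,
                                 feather: float = 0.3) -> dict:
    """Build a region that surrounds the presentation region and blends
    it into the passthrough view of the physical environment.

    The transitional region extends a fixed margin beyond the
    presentation region on each side; opacity ramps from fully opaque
    at the inner edge to fully transparent at the outer edge so the
    virtual content fades into the room rather than ending abruptly.
    """
    return {
        "inner": presentation_region,
        "outer": {
            "width": presentation_region["width"] + 2 * feather,
            "height": presentation_region["height"] + 2 * feather,
        },
        # Linear falloff from 1.0 (inner edge) to 0.0 (outer edge).
        "opacity_falloff": "linear",
    }
```

The presentation region and its transitional region would then be presented concurrently, with the transitional region overlaid on the view of the physical environment around the content.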