JP-2026514320-A - How to manage duplicate windows and apply visual effects

JP2026514320A

Abstract

In some embodiments, the computer system modifies the visual prominence of individual virtual objects in response to detecting a threshold overlap between a first virtual object and a second virtual object. In some embodiments, the computer system modifies the visual prominence of individual virtual objects based on a change in the spatial location of the first virtual object relative to the second virtual object. In some embodiments, the computer system applies visual effects to physical objects, virtual environments, and/or representations of the physical environment. In some embodiments, the computer system modifies the visual prominence of virtual objects relative to a three-dimensional environment based on the display of different types of overlapping objects in the three-dimensional environment. In some embodiments, the computer system modifies the opacity level of the first virtual object overlapping the second virtual object in response to the movement of the first virtual object.
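
The overlap-threshold behavior summarized above can be sketched in a few lines. The following is an illustrative reconstruction, not code from the patent; all names (`Rect`, `overlap_fraction`, `visual_prominence`) and the specific threshold and prominence values are assumptions made for the example.

```python
from dataclasses import dataclass


@dataclass
class Rect:
    """Screen-space projection of a virtual object from the user's viewpoint."""
    x: float
    y: float
    w: float
    h: float


def overlap_fraction(front: Rect, back: Rect) -> float:
    """Fraction of `back` covered by `front`, as seen from the viewpoint."""
    dx = min(front.x + front.w, back.x + back.w) - max(front.x, back.x)
    dy = min(front.y + front.h, back.y + back.h) - max(front.y, back.y)
    if dx <= 0 or dy <= 0:
        return 0.0  # projections do not intersect
    return (dx * dy) / (back.w * back.h)


def visual_prominence(front: Rect, back: Rect, threshold: float = 0.25,
                      first: float = 1.0, second: float = 0.4) -> float:
    """Dim the occluded object once the overlap reaches the threshold amount."""
    return second if overlap_fraction(front, back) >= threshold else first
```

For example, a window whose projection is one quarter covered by an overlapping window meets an assumed 0.25 threshold and is displayed with the reduced (second) prominence, e.g. as a lower opacity.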

Inventors

  • Sorrentino, William A., III
  • Colombatovich, Katherine W.
  • Pleck, Matthew G.
  • Boesel, Benjamin H.
  • Ravasz, Jonathan
  • Krivoruchko, Evgenii
  • Pastrana Vicente, Israel
  • Schmuck, Brandon K.
  • Hylak, Benjamin
  • McKenzie, Christopher D.
  • Lemay, Stephen O.
  • Taylor, Zoey C.
  • Estany Rodriguez, Miquel
  • Owen, James J.
  • Dessero, James M.
  • Allen, Jeffrey S.

Assignees

  • Apple Inc.

Dates

Publication Date
2026-05-11
Application Date
2024-06-04
Priority Date
2023-06-04

Claims (20)

  1. A method comprising: at a computer system in communication with a display generation component and one or more input devices: displaying, via the display generation component, a plurality of virtual objects, including a first virtual object and a second virtual object, in a three-dimensional environment in a first spatial relationship relative to a current viewpoint of a user of the computer system, wherein displaying the plurality of virtual objects in the first spatial relationship includes displaying the first virtual object and the second virtual object without overlap relative to the user's current viewpoint, the first virtual object and the second virtual object being displayed with a first visual prominence relative to the three-dimensional environment; while displaying the first virtual object and the second virtual object in the first spatial relationship, detecting, via the one or more input devices, a first input corresponding to a request to change the spatial relationship between the first virtual object and the second virtual object, relative to the user's current viewpoint, from the first spatial relationship to a second spatial relationship different from the first spatial relationship; and in response to detecting the first input: in accordance with a determination that at least a portion of the first virtual object overlaps the second virtual object by a threshold amount from the user's current viewpoint, displaying, via the display generation component, individual portions of individual virtual objects of the plurality of virtual objects with a second visual prominence, less than the first visual prominence, relative to the three-dimensional environment; and in accordance with a determination that the first virtual object does not overlap the second virtual object by the threshold amount from the user's current viewpoint, displaying, via the display generation component, the individual portions of the individual virtual objects with the first visual prominence relative to the three-dimensional environment.
  2. The method of claim 1, wherein, in accordance with a determination that the first input includes attention directed to the first virtual object, the individual virtual object of the plurality of virtual objects is the second virtual object, the method further comprising: after detecting the first input, detecting a second input corresponding to attention directed to the second virtual object; and in response to detecting the second input, in accordance with a determination that at least a portion of the first virtual object overlaps the second virtual object by the threshold amount from the user's current viewpoint: displaying the individual portion of the second virtual object with the first visual prominence relative to the three-dimensional environment; and displaying an individual portion of the first virtual object with the second visual prominence relative to the three-dimensional environment.
  3. The method of claim 1 or 2, further comprising: after detecting the first input, while displaying the first virtual object with the first visual prominence, detecting a second input corresponding to attention directed to the second virtual object; and in response to detecting the second input, in accordance with a determination that at least a portion of the first virtual object overlaps the second virtual object by the threshold amount from the user's current viewpoint: displaying an individual portion of the second virtual object with the first visual prominence relative to the three-dimensional environment; and displaying an individual portion of the first virtual object with the second visual prominence relative to the three-dimensional environment.
  4. The method of claim 3, further comprising: after detecting the second input, while displaying the individual portion of the second virtual object with the first visual prominence, detecting a third input corresponding to attention directed to a third virtual object of the plurality of virtual objects in the three-dimensional environment; and in response to detecting the third input, in accordance with a determination that at least a portion of the third virtual object overlaps the second virtual object by the threshold amount from the user's current viewpoint: displaying the individual portion of the second virtual object with the second visual prominence relative to the three-dimensional environment; and maintaining display of the individual portion of the first virtual object with the second visual prominence relative to the three-dimensional environment.
  5. The method of claim 4, further comprising, in response to detecting the third input, in accordance with a determination that the third virtual object does not overlap the second virtual object by the threshold amount from the user's current viewpoint: maintaining display of the individual portion of the second virtual object with the first visual prominence relative to the three-dimensional environment; and maintaining display of the individual portion of the first virtual object with the second visual prominence relative to the three-dimensional environment.
  6. The method of claim 3, further comprising, in response to detecting the second input, in accordance with a determination that at least a portion of the first virtual object overlaps the second virtual object by the threshold amount from the user's current viewpoint and at least a portion of the second virtual object overlaps a third virtual object of the plurality of virtual objects by the threshold amount from the user's current viewpoint: displaying the individual portion of the second virtual object with the first visual prominence relative to the three-dimensional environment; displaying the individual portion of the first virtual object with the second visual prominence relative to the three-dimensional environment; and displaying an individual portion of the third virtual object with the second visual prominence relative to the three-dimensional environment.
  7. The method of any one of claims 1 to 6, wherein, while the plurality of virtual objects are displayed in the three-dimensional environment, an input element associated with the individual virtual object is displayed in the three-dimensional environment, the method further comprising, in response to detecting the first input: in accordance with a determination that at least a portion of the first virtual object overlaps the second virtual object by the threshold amount from the user's current viewpoint, displaying the input element with a third visual prominence, less than the first visual prominence, relative to the three-dimensional environment; and in accordance with a determination that the first virtual object does not overlap the second virtual object by the threshold amount from the user's current viewpoint, displaying the input element with a fourth visual prominence, greater than the second visual prominence, relative to the three-dimensional environment.
  8. The method of claim 7, further comprising: after detecting the first input, detecting a second input corresponding to a request to display an input element associated with a third virtual object of the plurality of virtual objects in the three-dimensional environment; and in response to detecting the second input: ceasing to display the input element associated with the individual virtual object in the three-dimensional environment; and displaying the input element associated with the third virtual object in the three-dimensional environment.
  9. The method of any one of claims 1 to 8, wherein the individual portion of the individual virtual object of the plurality of virtual objects is an individual portion of the second virtual object, the method further comprising: after detecting the first input, detecting a second input corresponding to attention directed to a location in the three-dimensional environment that corresponds to empty space in the three-dimensional environment; and in response to detecting the second input, in accordance with a determination that at least a portion of the first virtual object overlaps the second virtual object by the threshold amount from the user's current viewpoint: displaying the individual portion of the second virtual object with the first visual prominence relative to the three-dimensional environment; and displaying an individual portion of the first virtual object with the second visual prominence relative to the three-dimensional environment.
  10. The method of any one of claims 1 to 9, further comprising, in response to detecting the first input, moving the individual virtual object from a first location in the three-dimensional environment to a second location in the three-dimensional environment, wherein the movement of the individual virtual object causes the at least a portion of the first virtual object to overlap the second virtual object.
  11. The method of any one of claims 1 to 9, wherein detecting the first input includes detecting movement of the user's current viewpoint from a first viewpoint of the three-dimensional environment to a second viewpoint of the three-dimensional environment, and the movement of the user's current viewpoint relative to the three-dimensional environment causes the at least a portion of the first virtual object to overlap the second virtual object from the user's current viewpoint.
  12. The method of any one of claims 1 to 11, wherein: in accordance with a determination that a difference between a distance between the first virtual object and the user's current viewpoint and a distance between the second virtual object and the user's current viewpoint is a first distance, the threshold amount is a first threshold amount; and in accordance with a determination that the difference between the distance between the first virtual object and the user's current viewpoint and the distance between the second virtual object and the user's current viewpoint is a second distance different from the first distance, the threshold amount is a second threshold amount different from the first threshold amount.
  13. The method of claim 12, wherein: in accordance with the first distance being greater than the second distance, the first threshold amount is greater than the second threshold amount; and in accordance with the second distance being greater than the first distance, the second threshold amount is greater than the first threshold amount.
  14. The method of any one of claims 1 to 13, wherein displaying the individual portion of the individual virtual object of the plurality of virtual objects with the first visual prominence relative to the three-dimensional environment includes displaying the individual portion of the individual virtual object with a first value of a first visual characteristic, and displaying the individual portion of the individual virtual object with the second visual prominence relative to the three-dimensional environment includes displaying the individual portion of the individual virtual object with a second value of the first visual characteristic that is less than the first value.
  15. The method of any one of claims 1 to 14, wherein displaying the individual portion of the individual virtual object with the second visual prominence relative to the three-dimensional environment includes ceasing to display a first portion of the individual portion of the individual virtual object in the three-dimensional environment, wherein the first portion of the individual portion of the individual virtual object has a size corresponding to the size of the at least a portion of the first virtual object that overlaps the second virtual object.
  16. The method of claim 15, wherein displaying the individual portion of the individual virtual object with the second visual prominence relative to the three-dimensional environment includes displaying a second portion of the individual portion of the individual virtual object with a greater amount of transparency compared to displaying the second portion with the first visual prominence, the second portion of the individual portion of the individual virtual object surrounding the first portion of the individual portion of the individual virtual object.
  17. The method of any one of claims 1 to 15, wherein displaying the individual portions of the individual virtual objects with the second visual prominence relative to the three-dimensional environment includes, while the first virtual object is an active virtual object that overlaps the second virtual object: in accordance with a determination that the first virtual object is farther from the user's viewpoint than the second virtual object, ceasing to display the individual portion of the second virtual object in the three-dimensional environment; and in accordance with a determination that the first virtual object is closer to the user's viewpoint than the second virtual object, maintaining display of the individual portion of the second virtual object in the three-dimensional environment.
  18. The method of any one of claims 1 to 17, further comprising, in response to detecting the first input, in accordance with a determination that a first portion of a third virtual object of the plurality of virtual objects overlaps the first virtual object by the threshold amount from the user's current viewpoint and a second portion of the third virtual object overlaps the second virtual object by the threshold amount from the user's current viewpoint: displaying a first individual portion of a first individual virtual object of the plurality of virtual objects with the second visual prominence; and displaying a second individual portion of a second individual virtual object of the plurality of virtual objects with the second visual prominence.
  19. The method of any one of claims 1 to 18, wherein displaying the plurality of virtual objects includes: in accordance with a determination that the first virtual object is an active virtual object, displaying the first virtual object with the first visual prominence regardless of whether the first virtual object overlaps another virtual object; and in accordance with a determination that the second virtual object is an active virtual object, displaying the second virtual object with the first visual prominence regardless of whether the second virtual object overlaps another virtual object.
  20. The method of any one of claims 1 to 19, further comprising: while the individual virtual object is displayed with the second visual prominence, detecting a second input corresponding to a request to move a virtual element in the three-dimensional environment toward a location associated with the individual virtual object in the three-dimensional environment; and in response to detecting the second input, moving the virtual element in the three-dimensional environment in accordance with movement associated with the second input while the individual virtual object is displayed with the second visual prominence.
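
Claims 12 and 13 make the overlap threshold depend on the depth difference between the two objects as measured from the user's viewpoint: the larger the depth separation, the larger the threshold. The patent does not specify a formula, so the following is only a minimal sketch of such a rule, with `base` and `scale` as assumed illustrative parameters.

```python
def overlap_threshold(dist_first: float, dist_second: float,
                      base: float = 0.10, scale: float = 0.05) -> float:
    """Illustrative distance-dependent threshold (cf. claims 12-13):
    the threshold amount grows with the depth difference between the
    first and second virtual objects; `base` and `scale` are assumptions."""
    return base + scale * abs(dist_first - dist_second)
```

Under a rule of this shape, two windows at nearly the same depth begin to trigger the prominence change after only slight overlap, while windows well separated in depth tolerate more apparent overlap before either is dimmed.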

Description

(Cross-reference to related applications) This application claims the benefit of U.S. Provisional Patent Application No. 63/587,442, filed October 2, 2023, U.S. Provisional Patent Application No. 63/515,119, filed July 23, 2023, U.S. Provisional Patent Application No. 63/506,128, filed June 4, 2023, and U.S. Provisional Patent Application No. 63/506,109, filed June 4, 2023, the contents of which are incorporated herein by reference in their entireties for all purposes.

This disclosure relates generally to computer systems that provide computer-generated experiences, including but not limited to electronic devices that provide virtual reality and mixed reality experiences via a display.

The development of computer systems for augmented reality has advanced significantly in recent years. An example augmented reality environment includes at least some virtual elements that replace or augment the physical world. Input devices for computer systems and other electronic computing devices, such as cameras, controllers, joysticks, touch-sensitive surfaces, and touchscreen displays, are used to interact with virtual/augmented reality environments. Example virtual elements include virtual objects such as digital images, videos, text, and icons, and control elements such as buttons and other graphics.

Some methods and interfaces for interacting with environments that contain at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome, inefficient, and limited. For example, systems that provide insufficient feedback for performing actions associated with virtual objects, systems that require a series of inputs to achieve a desired result in an augmented reality environment, and systems in which manipulating virtual objects is complex and error-prone impose a significant cognitive burden on the user and detract from the virtual/augmented reality experience.
In addition, these methods take longer than necessary, thereby wasting the energy of the computer system. This latter consideration is particularly important in battery-operated devices.

Accordingly, there is a need for computer systems with improved methods and interfaces for providing users with computer-generated experiences that make interaction with the computer systems more efficient and intuitive. Such methods and interfaces optionally complement or replace conventional methods for providing users with extended reality experiences. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user by helping the user to understand the connection between provided inputs and device responses to those inputs, thereby creating a more efficient human-machine interface.

The above deficiencies and other problems associated with user interfaces for computer systems are reduced or eliminated by the disclosed systems. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, a tablet computer, or a handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device such as a watch or a head-mounted device). In some embodiments, the computer system has a touchpad. In some embodiments, the computer system has one or more cameras. In some embodiments, the computer system has (e.g., includes or is in communication with) a display generation component (e.g., a display device such as a head-mounted device (HMD), a display, a projector, a touch-sensitive display (also known as a "touch screen" or "touch-screen display"), or another device or component that presents visual content to a user, where the content is visible on or in the display generation component itself or is generated by the display generation component and visible elsewhere).
In some embodiments, the computer system has one or more eye-tracking components. In some embodiments, the computer system has one or more hand-tracking components. In some embodiments, the computer system has one or more output devices in addition to the display generation component, the output devices including one or more tactile output generators and/or one or more audio output devices. In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI (and/or the computer system) through stylus and/or finger contacts and gestures on a touch-sensitive surface, through movements of the user's eyes and hands in space relative to the GUI (and/or the computer system) or the user's body as captured by cameras and other motion sensors, and/or through audio inputs as captured by one or more audio input devices. In some embodiments, the functions performed