US-12620184-B2 - Devices, methods, and graphical user interfaces for interacting with three-dimensional environments
Abstract
While displaying an application user interface, in response to detecting a first input to an input device, a computer system, in accordance with a determination that the application user interface is in a first mode of display, wherein the first mode of display includes an immersive mode in which only content of the application user interface is displayed, displays via the display generation component the application user interface in a second mode of display, wherein the second mode of display includes a non-immersive mode in which respective content of the application user interface and other content are concurrently displayed, and in accordance with a determination that the application user interface is in the second mode of display, the computer system replaces display of at least a portion of the application user interface by displaying a home menu user interface via the display generation component.
Inventors
- Stephen O. Lemay
- Amy E. Dedonato
- Israel Pastrana Vicente
- Nathan Gitter
- Zoey C. Taylor
Assignees
- APPLE INC.
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2023-09-18
Claims (20)
- 1 . A method, comprising: at a computer system that includes or is in communication with a display generation component and one or more input devices: while displaying via the display generation component an application user interface of a first application, detecting a first input to an input device of the one or more input devices; and in response to detecting the first input to the input device: in accordance with a determination that the application user interface is in a first mode of display, wherein the first mode of display comprises an immersive mode in which the application user interface is displayed in a three-dimensional environment, displaying via the display generation component the application user interface in a second mode of display, wherein the second mode of display comprises a non-immersive mode in which the application user interface is displayed at a different size that reveals at least a portion of the three-dimensional environment that was previously occluded by the application user interface displayed in the immersive mode; and in accordance with a determination that the application user interface is in the second mode of display, displaying a home menu user interface in at least a portion of the three-dimensional environment via the display generation component.
- 2 . The method of claim 1 , further comprising: while displaying the home menu user interface via the display generation component, detecting a second input to the input device; and in response to detecting the second input to the input device, dismissing the home menu user interface.
- 3 . The method of claim 1 , wherein displaying the application user interface in the non-immersive mode includes concurrently displaying a virtual environment and the application user interface, and in response to detecting the first input to the input device while the application user interface is displayed in the non-immersive mode, at least a portion of the virtual environment continues to be displayed.
- 4 . The method of claim 3 , further comprising: continuing to display at least the portion of the virtual environment while the home menu user interface is displayed.
- 5 . The method of claim 3 , further comprising: displaying, in the home menu user interface, representations of two or more virtual environments; and in response to detecting a selection of a first virtual environment of the two or more virtual environments: replacing at least a respective portion of the virtual environment with the first virtual environment.
- 6 . The method of claim 1 , further including: displaying, in the home menu user interface, representations of software applications executable on the computer system; detecting a second input directed to a respective representation of a software application in the representations of software applications executable on the computer system displayed in the home menu user interface; and in response to detecting the second input directed to the respective representation of the software application: displaying an application user interface of the software application.
- 7 . The method of claim 1 , further including: displaying, in the home menu user interface, a first representation of a first person, and a second representation of a second person, the first representation and the second representation for initiating communication with the first person and the second person, respectively; detecting a second input directed to the first representation of the first person; and in response to detecting the second input directed to the first representation of the first person: displaying a communication user interface for initiating a communication session with the first person.
- 8 . The method of claim 1 , further including: displaying, in the home menu user interface, representations of one or more virtual three-dimensional environments, and/or one or more extended reality environments; detecting a second input directed to a respective representation of the representations of one or more virtual three-dimensional environments, and/or one or more extended reality environments; and in response to detecting the second input directed to the respective representation of the representations of one or more three-dimensional environments, and/or one or more extended reality environments: replacing a currently displayed three-dimensional environment with a virtual three-dimensional environment, or an extended reality environment associated with the respective representation.
- 9 . A non-transitory computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for: while displaying via the display generation component an application user interface of a first application, detecting a first input to an input device of the one or more input devices; and in response to detecting the first input to the input device: in accordance with a determination that the application user interface is in a first mode of display, wherein the first mode of display comprises an immersive mode in which the application user interface is displayed in a three-dimensional environment, displaying via the display generation component the application user interface in a second mode of display, wherein the second mode of display comprises a non-immersive mode in which the application user interface is displayed at a different size that reveals at least a portion of the three-dimensional environment that was previously occluded by the application user interface displayed in the immersive mode; and in accordance with a determination that the application user interface is in the second mode of display, displaying a home menu user interface in at least a portion of the three-dimensional environment via the display generation component.
- 10 . The non-transitory computer-readable storage medium of claim 9 , wherein the one or more programs further include instructions that, when executed by the computer system, cause the computer system to: while displaying the home menu user interface via the display generation component, detect a second input to the input device; and in response to detecting the second input to the input device, dismiss the home menu user interface.
- 11 . The non-transitory computer-readable storage medium of claim 9 , wherein displaying the application user interface in the non-immersive mode includes concurrently displaying a virtual environment and the application user interface, and in response to detecting the first input to the input device while the application user interface is displayed in the non-immersive mode, at least a portion of the virtual environment continues to be displayed.
- 12 . The non-transitory computer-readable storage medium of claim 11 , wherein the one or more programs further include instructions that, when executed by the computer system, cause the computer system to: continue to display at least the portion of the virtual environment while the home menu user interface is displayed.
- 13 . The non-transitory computer-readable storage medium of claim 11 , wherein the one or more programs further include instructions that, when executed by the computer system, cause the computer system to: display, in the home menu user interface, representations of two or more virtual environments; and in response to detecting a selection of a first virtual environment of the two or more virtual environments: replace at least a respective portion of the virtual environment with the first virtual environment.
- 14 . The non-transitory computer-readable storage medium of claim 9 , wherein the one or more programs further include instructions that, when executed by the computer system, cause the computer system to: display, in the home menu user interface, representations of software applications executable on the computer system; detect a second input directed to a respective representation of a software application in the representations of software applications executable on the computer system displayed in the home menu user interface; and in response to detecting the second input directed to the respective representation of the software application: display an application user interface of the software application.
- 15 . The non-transitory computer-readable storage medium of claim 9 , wherein the one or more programs further include instructions that, when executed by the computer system, cause the computer system to: display, in the home menu user interface, a first representation of a first person, and a second representation of a second person, the first representation and the second representation for initiating communication with the first person and the second person, respectively; detect a second input directed to the first representation of the first person; and in response to detecting the second input directed to the first representation of the first person: display a communication user interface for initiating a communication session with the first person.
- 16 . The non-transitory computer-readable storage medium of claim 9 , wherein the one or more programs further include instructions that, when executed by the computer system, cause the computer system to: display, in the home menu user interface, representations of one or more virtual three-dimensional environments, and/or one or more extended reality environments; detect a second input directed to a respective representation of the representations of one or more virtual three-dimensional environments, and/or one or more extended reality environments; and in response to detecting the second input directed to the respective representation of the representations of one or more virtual three-dimensional environments, and/or one or more extended reality environments: replace a currently displayed three-dimensional environment with a virtual three-dimensional environment, or an extended reality environment associated with the respective representation.
- 17 . A computer system that is in communication with a display generation component and one or more input devices, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying via the display generation component an application user interface of a first application, detecting a first input to an input device of the one or more input devices; and in response to detecting the first input to the input device: in accordance with a determination that the application user interface is in a first mode of display, wherein the first mode of display comprises an immersive mode in which the application user interface is displayed in a three-dimensional environment, displaying via the display generation component the application user interface in a second mode of display, wherein the second mode of display comprises a non-immersive mode in which the application user interface is displayed at a different size that reveals at least a portion of the three-dimensional environment that was previously occluded by the application user interface displayed in the immersive mode; and in accordance with a determination that the application user interface is in the second mode of display, displaying a home menu user interface in at least a portion of the three-dimensional environment via the display generation component.
- 18 . The computer system of claim 17 , the one or more programs further including instructions for: while displaying the home menu user interface via the display generation component, detecting a second input to the input device; and in response to detecting the second input to the input device, dismissing the home menu user interface.
- 19 . The computer system of claim 17 , wherein displaying the application user interface in the non-immersive mode includes concurrently displaying a virtual environment and the application user interface, and in response to detecting the first input to the input device while the application user interface is displayed in the non-immersive mode, at least a portion of the virtual environment continues to be displayed.
- 20 . The computer system of claim 19 , further comprising: continuing to display at least the portion of the virtual environment while the home menu user interface is displayed.
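The mode-switching behavior recited in claims 1 and 2 amounts to a small state machine: a press of the input device moves an immersive application to a non-immersive mode, a second press while non-immersive shows the home menu, and a further press dismisses that menu. The Python sketch below is purely illustrative; the identifiers (`DisplayMode`, `AppWindowState`, `handle_button_press`) are the editor's assumptions and do not appear in the patent text.

```python
# Illustrative sketch of the input-handling logic described in claims 1 and 2.
# All names here are hypothetical; the patent does not disclose source code.
from enum import Enum, auto

class DisplayMode(Enum):
    IMMERSIVE = auto()      # only the application's content is displayed
    NON_IMMERSIVE = auto()  # app content and the 3-D environment are shown together

class AppWindowState:
    def __init__(self):
        self.mode = DisplayMode.IMMERSIVE
        self.home_menu_visible = False

    def handle_button_press(self):
        """Respond to one press of the (hypothetical) hardware input device."""
        if self.home_menu_visible:
            # Claim 2: a second input while the home menu is shown dismisses it.
            self.home_menu_visible = False
        elif self.mode is DisplayMode.IMMERSIVE:
            # Claim 1, first branch: leave immersion, revealing at least a
            # portion of the three-dimensional environment that was occluded.
            self.mode = DisplayMode.NON_IMMERSIVE
        else:
            # Claim 1, second branch: already non-immersive, so display the
            # home menu user interface in the three-dimensional environment.
            self.home_menu_visible = True

state = AppWindowState()
state.handle_button_press()  # immersive -> non-immersive
state.handle_button_press()  # non-immersive -> home menu shown
state.handle_button_press()  # home menu dismissed
```

Note that the same physical input produces three different outcomes depending solely on current display state, which is the core of the claimed method.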
Description
RELATED APPLICATIONS
This application claims priority to U.S. Provisional Application 63/470,921, filed Jun. 4, 2023, and U.S. Provisional Application 63/409,748, filed Sep. 24, 2022, each of which is hereby incorporated by reference in its entirety.
TECHNICAL FIELD
The present disclosure relates generally to computer systems that are in communication with a display generation component and one or more input devices that provide computer-generated experiences, including, but not limited to, electronic devices that provide virtual reality and mixed reality experiences via a display.
BACKGROUND
The development of computer systems for augmented reality has increased significantly in recent years. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices, such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch-screen displays for computer systems and other electronic computing devices, are used to interact with virtual/augmented reality environments. Example virtual elements include virtual objects, such as digital images, video, text, icons, and control elements such as buttons and other graphics.
SUMMARY
Some methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, extended reality environments that include augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome, inefficient, and limited.
For example, systems that provide insufficient avenues or mechanisms for performing actions associated with navigating within an extended reality environment, systems that require a series of inputs to achieve a desired outcome in the extended reality environment, and systems in which manipulation of virtual objects is complex, tedious, and error-prone create a significant cognitive burden on a user and detract from the experience with the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy of the computer system. This latter consideration is particularly important in battery-operated devices.
Accordingly, there is a need for computer systems with improved methods and interfaces for providing computer-generated experiences to users that make interaction with the computer systems more efficient and intuitive for a user. Such methods and interfaces optionally complement or replace conventional methods for providing extended reality experiences to users. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user by helping the user to understand the connection between provided inputs and device responses to the inputs, thereby creating a more efficient human-machine interface.
The above deficiencies and other problems associated with user interfaces for computer systems are reduced or eliminated by the disclosed systems. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device, such as a watch, or a head-mounted device). In some embodiments, the computer system has a touchpad. In some embodiments, the computer system has one or more cameras.
In some embodiments, the computer system has a touch-sensitive display (also known as a “touch screen” or “touch-screen display”). In some embodiments, the computer system has one or more eye-tracking components. In some embodiments, the computer system has one or more hand-tracking components. In some embodiments, the computer system has one or more output devices in addition to the display generation component, the output devices including one or more tactile output generators and/or one or more audio output devices. In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI through a stylus and/or finger contacts and gestures on the touch-sensitive surface, movement of the user's eyes and hand in space relative to the GUI (and/or computer system) or the user's body as captured by cameras and other movement sensors, and/or voice inputs as captured by one or more audio input devices. In some embodiments, the functions performed through the interactions optionally include image editing, drawing, presenting, word processing, spreadsheet making, game playing, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital videoing, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these