US-12619303-B2 - Gaze based interactions with three-dimensional environments
Abstract
The present disclosure generally relates to techniques and user interfaces for performing one or more wake operations, displaying content associated with an external device, performing one or more operations based on an input scheme, displaying virtual objects for controlling a camera setting, providing navigation guidance, displaying virtual objects associated with an external device, navigating a user interface, displaying virtual objects for performing a physical activity, displaying virtual objects for controlling one or more external devices, providing guidance for a physical activity, displaying virtual objects to perform one or more operations, and/or controlling the orientation of virtual objects.
Inventors
- Allison W. Dryer
- Giovanni M. Agnoli
- Yiqiang Nie
- Michael B. Tucker
- Giancarlo Yerkes
Assignees
- APPLE INC.
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2024-02-14
Claims (20)
- 1 . A computer system that is configured to communicate with one or more gaze-tracking sensors and a display generation component, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: detecting, via the one or more gaze-tracking sensors, that attention of a user is directed toward a first location; in response to detecting that the attention of the user is directed toward the first location, displaying, via the display generation component, a first virtual object at a position that is locked relative to the head of the user of the computer system, wherein the first virtual object is displayed with a first visual appearance; while displaying, via the display generation component, the first virtual object at the position that is locked relative to the head of the user of the computer system, detecting, via the one or more gaze-tracking sensors, that the attention of the user is directed toward a second location that is different from the first location; and in response to detecting, via the one or more gaze-tracking sensors, that the attention of the user is directed toward the second location: in accordance with a determination that the attention of the user that is directed toward the second location is directed to the first virtual object for a first predetermined period of time, changing an appearance of the first virtual object from the first visual appearance to a second visual appearance; and in accordance with a determination that the attention of the user that is directed toward the second location is directed to the first virtual object for a second predetermined period of time that is different from the first predetermined period of time, displaying, via the display generation component, a first user interface that includes a second virtual object and a third virtual object, wherein selection of the second virtual object causes display of a second user interface that is different from the first user interface, and wherein selection of the third virtual object causes display of a third user interface that is different from the first user interface and the second user interface.
- 2 . The computer system of claim 1 , wherein the first user interface that includes the second virtual object and the third virtual object does not include the first virtual object.
- 3 . The computer system of claim 1 , wherein the one or more programs further include instructions for: in response to detecting, via the one or more gaze-tracking sensors, that the attention of the user is directed toward the second location and in accordance with the determination that the attention of the user that is directed toward the second location is directed to the first virtual object for the second predetermined period of time, ceasing, via the display generation component, to display the first virtual object.
- 4 . The computer system of claim 3 , wherein before detecting, via the one or more gaze-tracking sensors, that the attention of the user is directed toward the second location, the first virtual object is displayed at a third location, and wherein displaying, via the display generation component, the first user interface includes: displaying, via the display generation component, the second virtual object at the third location.
- 5 . The computer system of claim 1 , wherein the one or more programs further include instructions for: while displaying, via the display generation component, the first virtual object at the position that is locked relative to the head of the user of the computer system, detecting, via the one or more gaze-tracking sensors, that the attention of the user is directed toward a third location that is different from the second location; and in response to detecting, via the one or more gaze-tracking sensors, that the attention of the user is directed toward the third location, ceasing to display, via the display generation component, the first virtual object.
- 6 . The computer system of claim 1 , wherein changing the appearance of the first virtual object from the first visual appearance to the second visual appearance includes displaying, via the display generation component, an animation that indicates progress towards completion of a wake operation while the attention of the user that is directed toward the second location is directed to the first virtual object.
- 7 . The computer system of claim 6 , wherein displaying the animation includes changing a first size of the first virtual object over a first period of time while the attention of the user that is directed toward the second location is directed to the first virtual object.
- 8 . The computer system of claim 6 , wherein displaying the animation includes changing a first amount of color that fills up the first virtual object over a second period of time while the attention of the user that is directed toward the second location is directed to the first virtual object.
- 9 . The computer system of claim 6 , wherein displaying the animation includes: changing a second amount of color that fills up the first virtual object over a third period of time while the attention of the user that is directed toward the second location is directed to the first virtual object; and after changing the second amount of color that fills up the first virtual object over the third period of time, increasing a second size of the first virtual object over a fourth period of time while the attention of the user that is directed toward the second location is directed to the first virtual object.
- 10 . The computer system of claim 1 , wherein: in accordance with a determination that the attention of the user that is directed toward the first location is directed to a first predetermined portion of a first user interface region, the position that is locked relative to the head of the user of the computer system is associated with the first predetermined portion of the first user interface region; and in accordance with a determination that the attention of the user that is directed toward the first location is directed to a second predetermined portion of the first user interface region that is different from the first predetermined portion of the first user interface region, the position that is locked relative to the head of the user of the computer system is associated with the second predetermined portion of the first user interface region.
- 11 . The computer system of claim 10 , wherein the first predetermined portion of the first user interface region is on a first side of the first user interface region, and wherein the second predetermined portion of the first user interface region is on a second side of the first user interface region that is different from the first side of the first user interface region.
- 12 . The computer system of claim 10 , wherein the first predetermined portion of the first user interface region is on a third side of the first user interface region, and wherein the second predetermined portion of the first user interface region is in a corner of the first user interface region.
- 13 . The computer system of claim 10 , wherein: in accordance with the determination that the attention of the user that is directed toward the second location is directed to the first virtual object for the second predetermined period of time: in accordance with a determination that attention of the user that is directed toward the second location is directed to the first predetermined portion of the first user interface region, the first user interface is displayed at a location in the first predetermined portion of the first user interface region; and in accordance with a determination that attention of the user that is directed toward the second location is directed to the second predetermined portion of the first user interface region, the first user interface is displayed at a location in the second predetermined portion of the first user interface region.
- 14 . The computer system of claim 1 , wherein the one or more programs further include instructions for: before detecting, via the one or more gaze-tracking sensors, that the attention of the user is directed toward the first location and before displaying the first virtual object at the position that is locked relative to the head of the user of the computer system, detecting, via the one or more gaze-tracking sensors, that the attention of the user is directed toward a fourth location that is different from the first location and is directed to a third predetermined portion of a second user interface region; and in response to detecting that the attention of the user is directed toward the fourth location and is directed to the third predetermined portion of the second user interface region: in accordance with a determination that a respective setting is enabled for performing a wake operation based on a detected attention of the user being directed to the third predetermined portion of the second user interface region, displaying, via the display generation component, the first virtual object; and in accordance with a determination that the respective setting is disabled for performing the wake operation based on the detected attention of the user being directed to the third predetermined portion of the second user interface region, forgoing displaying, via the display generation component, the first virtual object.
- 15 . The computer system of claim 1 , wherein the second virtual object includes first status information.
- 16 . The computer system of claim 1 , wherein the second virtual object and the third virtual object are included in a first menu.
- 17 . The computer system of claim 1 , wherein the first user interface is a user interface of a last used application.
- 18 . The computer system of claim 1 , wherein the first user interface is a wake screen user interface.
- 19 . The computer system of claim 1 , wherein the first user interface is a home screen user interface.
- 20 . The computer system of claim 1 , wherein the computer system is operating in a first power mode before detecting that the attention of the user is directed toward the first location, wherein the one or more programs further include instructions for: in response to detecting that the attention of the user is directed toward the first location, transitioning from operating in the first power mode to operating in a second power mode that is different from the first power mode; and while operating in the second power mode and in response to detecting, via the one or more gaze-tracking sensors, that the attention of the user is directed toward the second location: in accordance with a determination that the attention of the user that is directed toward the second location is directed to the first virtual object for the second predetermined period of time, transitioning from operating in the second power mode to operating in a third power mode that is different from the first power mode and the second power mode.
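To make the interaction recited in claim 1 easier to follow, the sketch below models the gaze-dwell "wake" flow as a small state machine: attention on a head-locked indicator starts a dwell timer, a first (shorter) threshold changes the indicator's appearance and drives the progress animation described in claims 6-9, and a second (longer) threshold presents the user interface containing the second and third virtual objects. This sketch is not part of the patent; every type name, method, and threshold value below is an assumption introduced purely for illustration.

```swift
import Foundation

// Illustrative only: hypothetical types modeling the gaze-dwell wake flow of claim 1.
// Threshold values and all identifiers are assumptions, not taken from the patent.

enum WakeState: Equatable {
    case idle                                   // no attention detected
    case indicatorShown(dwell: TimeInterval)    // head-locked first virtual object displayed
    case appearanceChanged(dwell: TimeInterval) // first predetermined period reached
    case menuPresented                          // second predetermined period reached
}

struct GazeSample {
    let isOnIndicator: Bool     // attention is directed to the first virtual object
    let timestamp: TimeInterval // seconds
}

final class GazeWakeController {
    // Example stand-ins for the "first" and "second predetermined periods of time".
    let firstThreshold: TimeInterval = 0.3
    let secondThreshold: TimeInterval = 1.0

    private(set) var state: WakeState = .idle
    private var dwellStart: TimeInterval?

    /// Attention detected toward the wake region (the "first location"): show the indicator.
    func attentionDetected() {
        state = .indicatorShown(dwell: 0)
        dwellStart = nil
    }

    /// Feed one gaze sample per frame while the head-locked indicator is displayed.
    func process(_ sample: GazeSample) {
        guard sample.isOnIndicator else {
            // Attention moved elsewhere: dismiss the indicator (compare claim 5).
            state = .idle
            dwellStart = nil
            return
        }
        if dwellStart == nil { dwellStart = sample.timestamp }
        let dwell = sample.timestamp - dwellStart!

        if dwell >= secondThreshold {
            state = .menuPresented                   // show UI with second/third virtual objects
        } else if dwell >= firstThreshold {
            state = .appearanceChanged(dwell: dwell) // second visual appearance (claims 6-9 animation)
        } else {
            state = .indicatorShown(dwell: dwell)
        }
    }

    /// 0...1 progress toward completing the wake operation, usable to drive a fill or size animation.
    var wakeProgress: Double {
        switch state {
        case .menuPresented:
            return 1.0
        case .appearanceChanged(let dwell), .indicatorShown(let dwell):
            return min(1.0, dwell / secondThreshold)
        case .idle:
            return 0.0
        }
    }
}
```

In use, a caller would invoke attentionDetected() when gaze first reaches the wake region, call process(_:) with a GazeSample each frame, and read wakeProgress to animate the indicator's fill or size; the power-mode transitions of claim 20 could hang off the same state changes (for example, entering .menuPresented corresponding to the third power mode). These hooks are, again, hypothetical.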
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application is a continuation of PCT Patent Application Serial No. PCT/US2022/044236, entitled “GAZED BASED INTERACTIONS WITH THREE-DIMENSIONAL ENVIRONMENTS,” filed on Sep. 21, 2022, which claims priority to U.S. Patent Application Ser. No. 63/314,228, entitled “GAZED BASED INTERACTIONS WITH THREE-DIMENSIONAL ENVIRONMENTS,” filed on Feb. 25, 2022, and to U.S. Patent Application Ser. No. 63/248,471, entitled “GAZED BASED INTERACTIONS WITH THREE-DIMENSIONAL ENVIRONMENTS,” filed on Sep. 25, 2021. The contents of each of these applications are hereby incorporated by reference in their entireties.
TECHNICAL FIELD
The present disclosure relates generally to computer systems that are in communication with a display generation component. The computer systems are optionally in communication with one or more external devices, one or more gaze-tracking sensors, one or more physical input mechanisms, such as one or more rotatable input mechanisms, one or more input devices, one or more cameras, one or more display projectors, one or more audio output devices, and/or one or more touch-sensitive surfaces, and provide computer-generated experiences, including, but not limited to, electronic devices that provide virtual reality and mixed reality experiences via a display.
BACKGROUND
The development of computer systems for augmented reality has increased significantly in recent years. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices, such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch-screen displays for computer systems and other electronic computing devices are used to interact with virtual/augmented reality environments. Example virtual elements include virtual objects, such as digital images, video, text, icons, and control elements such as buttons and other graphics.
SUMMARY
Some methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome, inefficient, and limited. For example, systems that provide inefficient input schemes for interacting with and/or managing virtual objects, systems that provide insufficient feedback for performing actions associated with virtual objects, systems that require a series of inputs to achieve a desired outcome in an augmented reality environment, and systems in which manipulation of virtual objects is complex, tedious, and error-prone, create a significant cognitive burden on a user, and detract from the experience with the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy of the computer system. This latter consideration is particularly important in battery-operated devices. Accordingly, there is a need for computer systems with improved methods and interfaces for providing computer-generated experiences to users that make interaction with the computer systems more efficient and intuitive for a user. Such methods and interfaces optionally complement or replace conventional methods for providing extended reality experiences to users.
Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user by helping the user to understand the connection between provided inputs and device responses to the inputs, thereby creating a more efficient human-machine interface. The above deficiencies and other problems associated with user interfaces for computer systems are reduced or eliminated by the disclosed systems. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device, such as a watch or a head-mounted device). In some embodiments, the computer system has a touchpad. In some embodiments, the computer system has one or more cameras. In some embodiments, the computer system has a touch-sensitive display (also known as a “touch screen” or “touch-screen display”). In some embodiments, the computer system has one or more eye-tracking components. In some embodiments, the computer system has one or more hand-tracking components. In some embodiments, the computer system has one or more output devices in addition to the display generation component, the output devices including one or more tactile output generators and/or one or more audio output devices. In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts wit