EP-4738090-A2 - DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR INTERACTING WITH THREE-DIMENSIONAL ENVIRONMENTS

EP 4738090 A2

Abstract

A computer system detects a gaze input directed to a region in an environment and, while detecting the gaze input, detects a touch input. In response, the computer system displays a focus indicator at a location corresponding to the region. The computer system detects a continuation of the touch input that includes movement of the touch input along an input surface while the touch input is maintained on the input surface. In response, the computer system moves the focus indicator in accordance with the movement of the touch input: within a user interface of an application, if the movement corresponds to a request to move the focus indicator within the user interface; and within the user interface without moving the focus indicator outside of a boundary of the user interface, if the movement corresponds to a request to move the focus indicator outside of the boundary of the user interface.
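To make the abstract's behavior concrete, the following is a minimal Python sketch of gaze-targeted placement and boundary-clamped movement. It is illustrative only: the Rect and FocusIndicator types, the choice to place focus at the gazed region's center, and the clamping rule are assumptions, not details taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def clamp(self, px: float, py: float) -> tuple[float, float]:
        # Keep a point inside this rectangle's bounds.
        return (min(max(px, self.x), self.x + self.w),
                min(max(py, self.y), self.y + self.h))

@dataclass
class FocusIndicator:
    x: float = 0.0
    y: float = 0.0

def place_focus_from_gaze(gaze_region: Rect) -> FocusIndicator:
    # On touch-down while gaze dwells on a region, the focus indicator
    # appears at a location corresponding to that region (here: its center).
    return FocusIndicator(gaze_region.x + gaze_region.w / 2,
                          gaze_region.y + gaze_region.h / 2)

def move_focus(ind: FocusIndicator, ui: Rect, dx: float, dy: float) -> None:
    # Movement of the touch input moves the indicator, but a request to
    # leave the user interface keeps it at the boundary instead.
    ind.x, ind.y = ui.clamp(ind.x + dx, ind.y + dy)

# Usage: gaze at an app window, touch down, then drag past the right edge.
ui = Rect(0.0, 0.0, 100.0, 60.0)
focus = place_focus_from_gaze(ui)
move_focus(focus, ui, dx=80.0, dy=0.0)  # requested x = 130, clamped to 100
print(focus)                            # FocusIndicator(x=100.0, y=30.0)
```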

Inventors

  • RAVASZ, Jonathan
  • PASTRANA VICENTE, Israel
  • LEMAY, Stephen O.
  • BAUERLY, Kristi E.S.
  • TAYLOR, Zoey C.

Assignees

  • Apple Inc.

Dates

Publication Date
2026-05-06
Application Date
2023-09-20

Claims (15)

  1. A method, comprising: at a computer system that is in communication with a display generation component and one or more input devices: while a view of an environment is visible via the display generation component: displaying a user interface that includes a first user interface region and a second user interface region, wherein the first user interface region and the second user interface region are separated by a respective region; and displaying a focus indicator within the first user interface region; detecting, via the one or more input devices, an input to move the focus indicator relative to the user interface, wherein the input is associated with movement toward the second user interface region; in response to detecting the input that is associated with the movement toward the second user interface region: in accordance with a determination that the input meets a first set of one or more criteria based on the movement associated with the input: moving the focus indicator from the first user interface region to the second user interface region in accordance with the movement associated with the input, including transitioning directly from displaying the focus indicator at a position corresponding to a boundary of the first user interface region to displaying the focus indicator at a position corresponding to the second user interface region without displaying the focus indicator in the respective region between the first user interface region and the second user interface region; and in accordance with a determination that the input does not meet the first set of one or more criteria based on the movement associated with the input: changing an appearance of the focus indicator in accordance with the movement associated with the input while continuing to display at least a portion of the focus indicator within the first user interface region.
  2. The method of claim 1, including: detecting an end of the input; and in response to detecting the end of the input: in accordance with a determination that the input did not meet the first set of one or more criteria based on the movement associated with the input, displaying the focus indicator entirely within the first user interface region.
  3. The method of any of claims 1-2, including: moving the focus indicator from a first location in the first user interface region to a second location in the first user interface region in accordance with a first magnitude of movement of the input; and moving the focus indicator from the second location in the first user interface region to a third location in the first user interface region in accordance with the first magnitude of movement of the input; wherein: the third location is closer to the second user interface region than the second location is to the second user interface region; and a distance between the third location and the second location is less than a distance between the second location and the first location.
  4. The method of any of claims 1-3, wherein determining that the input meets the first set of one or more criteria based on the movement associated with the input includes determining that a velocity of the input satisfies a threshold velocity.
  5. The method of any of claims 1-4, wherein determining that the input meets the first set of one or more criteria based on the movement associated with the input includes determining that a magnitude of movement of the input satisfies a threshold distance.
  6. The method of any of claims 1-5, wherein changing the appearance of the focus indicator includes forgoing displaying a portion of the focus indicator that is outside of the first user interface region.
  7. The method of any of claims 1-6, wherein the first user interface region corresponds to a respective application, and the second user interface region corresponds to the respective application.
  8. The method of any of claims 1-6, wherein the first user interface region corresponds to a first application, and the second user interface region corresponds to a second application that is different from the first application.
  9. The method of any of claims 1-8, wherein displaying the focus indicator at the position corresponding to the second user interface region includes displaying the focus indicator entirely within the second user interface region.
  10. The method of any of claims 1-9, wherein: moving the focus indicator from the first user interface region to the second user interface region in accordance with the movement associated with the input is performed in accordance with a determination that the first user interface region and the second user interface region are separated by less than a second threshold distance; and the method includes: in accordance with a determination that the first user interface region and the second user interface region are separated by more than the second threshold distance: changing the appearance of the focus indicator in accordance with the movement associated with the input while continuing to display at least a portion of the focus indicator within the first user interface region.
  11. The method of any of claims 1-10, wherein: the input is associated with movement in a first direction toward the second user interface region; and moving the focus indicator from the first user interface region to the second user interface region in accordance with the movement associated with the input includes: moving the focus indicator in the first direction; and in accordance with a determination that a boundary of the second user interface region is offset from the boundary of the first user interface region in a second direction that is different from the first direction, moving the focus indicator in the second direction.
  12. The method of any of claims 1-11, wherein changing the appearance of the focus indicator in accordance with the movement associated with the input includes: in accordance with a determination that the movement associated with the input moves the focus indicator to a location in the user interface that corresponds to an activatable user interface element, ceasing to display the focus indicator and displaying a visual emphasis of the activatable user interface element.
  13. The method of any of claims 1-12, wherein the computer system is in communication with one or more tactile output generators, and the method includes: in response to detecting the input that is associated with the movement toward the second user interface region: in accordance with the determination that the input meets the first set of one or more criteria based on the movement associated with the input: in conjunction with moving the focus indicator from the first user interface region to the second user interface region in accordance with the movement associated with the input, generating, via the one or more tactile output generators, a tactile output.
  14. A computer system that is configured for communication with a display generation component and one or more input devices, the computer system comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for performing the method of any of claims 1-13.
  15. A computer-readable storage medium storing one or more programs configured to be executed by one or more processors of a computer system that is in communication with a display generation component and one or more input devices, the one or more programs including instructions for performing the method of any of claims 1-13.
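The decision structure in claims 1 and 3-5, with the gap condition of claim 10, can be sketched in Python. This is a sketch under assumptions: the threshold values, the one-dimensional Region model, and the particular rubber-band formula are illustrative choices; the claims require only that some velocity and distance criteria exist.

```python
from dataclasses import dataclass

# Assumed values: the claims require only that velocity (claim 4) and
# distance (claim 5) criteria exist, not these particular numbers.
SNAP_VELOCITY = 300.0   # points/second
SNAP_DISTANCE = 24.0    # points
MAX_GAP = 40.0          # claim 10: no jump across a wider separation

@dataclass
class Region:
    left: float
    right: float

def rubber_band(pos: float, boundary: float, move: float) -> float:
    # Claim 3: equal input magnitudes displace the indicator less and less
    # as it nears the boundary, and it never leaves the first region.
    remaining = boundary - pos
    if remaining <= 0 or move <= 0:
        return pos
    return pos + remaining * move / (move + remaining)

def resolve_focus(pos: float, first: Region, second: Region,
                  move: float, velocity: float) -> float:
    # Claim 1: either jump directly to the second region, or change the
    # indicator's appearance while it stays (at least partly) in the first.
    gap = second.left - first.right
    if (velocity >= SNAP_VELOCITY or move >= SNAP_DISTANCE) and gap <= MAX_GAP:
        # Transition straight from the first region's boundary to the
        # second region; the indicator is never drawn inside the gap.
        return second.left
    return rubber_band(pos, first.right, move)

first, second = Region(0.0, 100.0), Region(130.0, 230.0)
print(resolve_focus(90.0, first, second, move=30.0, velocity=400.0))  # 130.0 (jump)
print(resolve_focus(90.0, first, second, move=5.0, velocity=50.0))    # ~93.3 (resist)
```

Per claim 2, an implementation would additionally restore the indicator entirely into the first region when an input that never met the criteria ends.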

Description

TECHNICAL FIELD

The present disclosure relates generally to computer systems that are in communication with a display generation component and one or more input devices that provide computer-generated experiences, including, but not limited to, electronic devices that provide virtual reality and mixed reality experiences via a display.

BACKGROUND

The development of computer systems for augmented reality has increased significantly in recent years. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices, such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch-screen displays for computer systems and other electronic computing devices, are used to interact with virtual/augmented reality environments. Example virtual elements include virtual objects, such as digital images, video, text, icons, and control elements such as buttons and other graphics.

SUMMARY

Some methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome, inefficient, and limited. For example, systems that allow only a limited number of ways of providing inputs, systems that require extensive input to move focus and drag objects around in an environment, and systems in which moving focus around an environment is difficult to control (particularly systems in which the available ways of moving focus within an interaction target are inconsistent with the object type of the interaction target) are complex, tedious, and error-prone, create a significant cognitive burden on a user, and detract from the experience with the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy of the computer system; this latter consideration is particularly important in battery-operated devices.

Accordingly, there is a need for computer systems with improved methods and interfaces that enable the use of additional input mechanisms to move focus and drag objects around in an environment with increased speed and precision, making interaction with the computer systems more efficient and intuitive for a user. Such methods and interfaces optionally complement or replace conventional methods for providing extended reality experiences to users. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user by helping the user to understand the connection between provided inputs and device responses to the inputs, thereby creating a more efficient human-machine interface.

The above deficiencies and other problems associated with user interfaces for computer systems are reduced or eliminated by the disclosed systems. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device, such as a watch, or a head-mounted device). In some embodiments, the computer system has a touchpad. In some embodiments, the computer system has one or more cameras. In some embodiments, the computer system has a touch-sensitive display (also known as a "touch screen" or "touch-screen display").
In some embodiments, the computer system has one or more eye-tracking components. In some embodiments, the computer system has one or more hand-tracking components. In some embodiments, the computer system has one or more output devices in addition to the display generation component, the output devices including one or more tactile output generators and/or one or more audio output devices. In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI through a stylus and/or finger contacts and gestures on the touch-sensitive surface, movement of the user's eyes and hand in space relative to the GUI (and/or computer system) or the user's body as captured by cameras and other movement sensors, and/or voice inputs as captured by one or more audio input devices. In some embodiments, the functions performed through the interactions optionally include image editing, drawing, presenting, word processing, spreadsheet making, game playing, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital videoing, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are, optionally, included in a transitory and/or non-transitory computer readable storage medium or other computer program product configured for execution by one or more processors.
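The tactile output generators mentioned above connect to claim 13: a haptic is produced in conjunction with a successful region-to-region focus jump. Below is a brief sketch of that coordination, reusing resolve_focus and Region from the example after the claims; play_haptic is a hypothetical callback standing in for whatever tactile output interface the system actually exposes.

```python
def move_focus_with_feedback(pos: float, first: Region, second: Region,
                             move: float, velocity: float, play_haptic) -> float:
    new_pos = resolve_focus(pos, first, second, move, velocity)
    if new_pos >= second.left:
        # Claim 13: generate the tactile output in conjunction with moving
        # the focus indicator into the second user interface region.
        play_haptic()
    return new_pos

# A movement that meets the criteria jumps regions and triggers the haptic.
move_focus_with_feedback(90.0, Region(0.0, 100.0), Region(130.0, 230.0),
                         move=30.0, velocity=400.0,
                         play_haptic=lambda: print("haptic tick"))
```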