US-12625608-B2 - Methods for interacting with user interfaces based on attention

US 12625608 B2

Abstract

A gaze virtual object is displayed that is selectable based on attention directed to the gaze virtual object to perform an operation associated with a selectable virtual object. An indication of attention of a user is displayed. An enlarged view of a region of a user interface is displayed. A value of a slider element is adjusted based on attention of a user. A user interface element is moved at a respective rate based on attention of a user. Text is entered into a text entry field in response to speech inputs. A value for a value selection user interface object is updated based on attention of a user. Movement of a virtual object is facilitated based on direct touch interactions. A user input is facilitated for displaying a selection refinement user interface object. A visual indicator is displayed indicating progress toward selecting a virtual object when criteria are met.

Inventors

  • Israel Pastrana Vicente
  • Evgenii Krivoruchko
  • Kristi E. Bauerly
  • Lorena S. Pazmino

Assignees

  • Apple Inc.

Dates

Publication Date
2026-05-12
Application Date
2023-09-22

Claims (20)

  1. A method comprising: at a computer system in communication with a display generation component and one or more input devices: displaying, via the display generation component, a user interface including a virtual object; while displaying the user interface, detecting, via the one or more input devices, a first user input that includes attention directed towards a first portion of the virtual object; and while detecting the first user input: in accordance with a determination that one or more first criteria are satisfied, including a criterion that is satisfied when the first user input includes a respective type of input from a first portion of a user at a location that corresponds to the first portion of the virtual object, displaying, via the display generation component, a first visual feedback indicating a location of the attention of the user in the first portion of the virtual object; and in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied when the first user input includes the attention of the user directed towards the first portion of the virtual object without the respective type of input from the user directed towards the first portion of the virtual object, displaying second visual feedback that is different from the first visual feedback without displaying the first visual feedback in the first portion of the virtual object.
  2. The method of claim 1, wherein displaying, via the display generation component, the second visual feedback includes displaying the second visual feedback having a shape resulting from masking a first shape corresponding to the attention of the user with a second shape of the virtual object, and displaying, via the display generation component, the first visual feedback includes displaying the first visual feedback having a shape that does not result from masking the first shape with the second shape.
  3. The method of claim 1, further comprising: while the one or more first criteria or the one or more second criteria are satisfied, detecting, via the one or more input devices, a change in position of the first portion of the user relative to the virtual object; and in response to detecting the change in position of the first portion of the user relative to the virtual object: in accordance with a determination that the one or more first criteria were satisfied, changing a visual appearance of the first visual feedback by a first amount; and in accordance with a determination that the one or more second criteria were satisfied, forgoing changing a visual appearance of the second visual feedback by the first amount.
  4. The method of claim 3, further comprising: in response to detecting the change in position of the first portion of the user relative to the virtual object, and in accordance with the determination that the one or more second criteria were satisfied, forgoing changing the visual appearance of the second visual feedback based on a change in position of the first portion of the user relative to the virtual object.
  5. The method of claim 3, wherein changing the visual appearance of the first visual feedback includes changing a level of brightness of the first visual feedback.
  6. The method of claim 3, wherein changing the visual appearance of the first visual feedback includes changing a size of the first visual feedback.
  7. The method of claim 3, wherein changing the visual appearance of the first visual feedback includes changing an amount of blur of the first visual feedback.
  8. The method of claim 1, wherein the one or more second criteria include a criterion that is satisfied when the virtual object is a selectable object, and is not satisfied when the virtual object is not a selectable object.
  9. The method of claim 1, wherein the one or more second criteria include a criterion that is satisfied when the first portion of the user is in a ready state.
  10. The method of claim 1, further comprising: while displaying the second visual feedback in the first portion of the virtual object in response to detecting the first user input: detecting, via the one or more input devices, a second user input corresponding to an indirect input from the first portion of the user; and in response to detecting the second user input, interacting with the virtual object based on the second user input.
  11. The method of claim 1, further comprising: while displaying the first visual feedback in the first portion of the virtual object in response to detecting the first user input: detecting, via the one or more input devices, a second user input corresponding to a direct input from the first portion of the user; and in response to detecting the second user input, interacting with the virtual object based on the second user input.
  12. The method of claim 1, further comprising: while the one or more first criteria or the one or more second criteria are satisfied, detecting, via the one or more input devices, a second input corresponding to selection of the virtual object, wherein the virtual object is a selectable button; and in response to detecting the second input, selecting the selectable button, including: in accordance with a determination that the one or more first criteria were satisfied when the second input was detected, moving the virtual object away from a viewpoint of the user in accordance with the second input; and in accordance with a determination that the one or more second criteria were satisfied when the second input was detected, forgoing moving the virtual object away from the viewpoint of the user.
  13. The method of claim 1, further comprising: while the one or more first criteria or the one or more second criteria are satisfied, detecting, via the one or more input devices, a second input corresponding to selection of the virtual object, wherein the virtual object is a selectable button; and in response to detecting the second input: selecting the selectable button; in accordance with a determination that the one or more first criteria were satisfied, displaying a visual indication around at least a portion of a perimeter of the virtual object indicating that the virtual object has been selected; and in accordance with a determination that the one or more second criteria were satisfied, forgoing displaying the visual indication around the portion of the perimeter of the virtual object.
  14. The method of claim 1, further comprising: while displaying the second visual feedback at the first portion of the virtual object, detecting a second user input that corresponds to movement of the first portion of the user toward the virtual object; and in response to detecting the second user input, and in accordance with a determination that the first portion of the user has moved within a threshold distance of the virtual object, ceasing display of the second visual feedback and displaying the first visual feedback in a respective portion of the virtual object.
  15. The method of claim 1, wherein: displaying the first visual feedback includes: in accordance with a determination that the virtual object has a first size, displaying the first visual feedback having a first visual appearance; and in accordance with a determination that the virtual object has a second size, displaying the first visual feedback having the first visual appearance; and displaying the second visual feedback includes: in accordance with a determination that the virtual object has the first size, displaying the second visual feedback having a second visual appearance; and in accordance with a determination that the virtual object has the second size, displaying the second visual feedback having a third visual appearance different from the second visual appearance.
  16. The method of claim 1, further comprising: while displaying, in a three-dimensional environment, a respective virtual object having a first size relative to the three-dimensional environment, detecting, via the one or more input devices, a second user input that includes attention directed towards a first portion of the respective virtual object; and in response to detecting the second user input, in accordance with a determination that one or more third criteria are satisfied, displaying the respective virtual object having a second size, greater than the first size, relative to the three-dimensional environment.
  17. The method of claim 16, wherein the second user input includes gaze of the user directed to the respective virtual object without input from the user other than the gaze of the user, and the one or more third criteria include a criterion that is satisfied when the second user input includes gaze of the user directed toward the respective virtual object without input other than the gaze from the user.
  18. The method of claim 17, wherein the one or more third criteria include a criterion that is satisfied when gaze of the user is directed toward the respective virtual object, and a criterion that is satisfied when the second user input includes the respective type of input from the first portion of the user at a location that corresponds to the respective virtual object, the method further comprising: while displaying the respective virtual object having the second size relative to the three-dimensional environment, detecting, via the one or more input devices, that the gaze of the user is no longer directed toward the respective virtual object and that the first portion of the user is no longer providing the respective type of input at the location corresponding to the respective virtual object; and in response to detecting that the gaze of the user is no longer directed toward the respective virtual object and that the first portion of the user is no longer providing the respective type of input at the location corresponding to the respective virtual object, displaying, in the three-dimensional environment, the respective virtual object having the first size relative to the three-dimensional environment.
  19. The method of claim 18, further comprising: while displaying the respective virtual object having the second size relative to the three-dimensional environment, detecting, via the one or more input devices, that the first portion of the user is no longer providing the respective type of input at the location corresponding to the respective virtual object but that the gaze of the user is directed toward the respective virtual object; and in response to detecting that the first portion of the user is no longer providing the respective type of input at the location corresponding to the respective virtual object but that the gaze of the user is directed toward the respective virtual object, maintaining, in the three-dimensional environment, display of the respective virtual object having the second size relative to the three-dimensional environment.
  20. The method of claim 18, further comprising: while displaying the respective virtual object having the second size relative to the three-dimensional environment, detecting, via the one or more input devices, that the gaze of the user is no longer directed toward the respective virtual object but that the first portion of the user is providing the respective type of input at the location corresponding to the respective virtual object; and in response to detecting that the gaze of the user is no longer directed toward the respective virtual object but that the first portion of the user is providing the respective type of input at the location corresponding to the respective virtual object, maintaining, in the three-dimensional environment, display of the respective virtual object having the second size relative to the three-dimensional environment.
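
The independent claim reduces to a single branch on input modality: attention plus a direct input at the gazed-at portion yields one feedback style, while attention alone yields a different one. The Swift sketch below illustrates that branching under stated assumptions; every type and name in it is a hypothetical illustration, not an API from the patent or from any shipping system.

```swift
// Hypothetical sketch of the claim 1 branching; no real API is assumed.

/// The two feedback styles that claim 1 distinguishes.
enum AttentionFeedback {
    case direct    // "first visual feedback": a direct input coincides with the gazed-at portion
    case indirect  // "second visual feedback": attention (gaze) alone
    case none
}

/// A snapshot of the tracked inputs, keyed by object-portion identifiers.
struct InputState {
    var gazeTarget: String?        // portion of a virtual object the user's attention is directed toward
    var directInputTarget: String? // portion receiving a direct input (e.g., a fingertip) from the user
}

/// Selects the feedback to display for a given portion of a virtual object.
func feedback(for portion: String, in state: InputState) -> AttentionFeedback {
    guard state.gazeTarget == portion else { return .none }
    if state.directInputTarget == portion {
        // First criteria: attention plus the respective type of input from a
        // first portion of the user at a location corresponding to the object portion.
        return .direct
    } else {
        // Second criteria: attention directed toward the portion without the
        // respective type of input, so the distinct second feedback is shown instead.
        return .indirect
    }
}
```

For example, feedback(for: "playButton", in: InputState(gazeTarget: "playButton", directInputTarget: nil)) evaluates to .indirect, the gaze-only branch; supplying directInputTarget: "playButton" flips the result to .direct.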

Description

CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 63/377,024, filed Sep. 24, 2022, U.S. Provisional Application No. 63/503,138, filed May 18, 2023, U.S. Provisional Application No. 63/506,080, filed Jun. 3, 2023, and U.S. Provisional Application No. 63/506,124, filed Jun. 4, 2023, the contents of which are herein incorporated by reference in their entireties for all purposes.

TECHNICAL FIELD

This relates generally to computer systems that provide computer-generated experiences, including, but not limited to, electronic devices that provide virtual reality and mixed reality experiences via a display.

BACKGROUND

The development of computer systems for augmented reality has increased significantly in recent years. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices, such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch-screen displays for computer systems and other electronic computing devices, are used to interact with virtual/augmented reality environments. Example virtual elements include virtual objects, such as digital images, video, text, icons, and control elements such as buttons and other graphics.

SUMMARY

Some methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome, inefficient, and limited. For example, systems that provide insufficient feedback for performing actions associated with virtual objects, systems that require a series of inputs to achieve a desired outcome in an augmented reality environment, and systems in which manipulation of virtual objects is complex, tedious, and error-prone create a significant cognitive burden on a user and detract from the experience with the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy of the computer system. This latter consideration is particularly important in battery-operated devices.

Accordingly, there is a need for computer systems with improved methods and interfaces for providing computer-generated experiences to users that make interaction with the computer systems more efficient and intuitive for a user. Such methods and interfaces optionally complement or replace conventional methods for providing extended reality experiences to users. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user by helping the user to understand the connection between provided inputs and device responses to the inputs, thereby creating a more efficient human-machine interface.

The above deficiencies and other problems associated with user interfaces for computer systems are reduced or eliminated by the disclosed systems. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device, such as a watch, or a head-mounted device). In some embodiments, the computer system has a touchpad. In some embodiments, the computer system has one or more cameras.
In some embodiments, the computer system has a touch-sensitive display (also known as a “touch screen” or “touch-screen display”). In some embodiments, the computer system has one or more eye-tracking components. In some embodiments, the computer system has one or more hand-tracking components. In some embodiments, the computer system has one or more output devices in addition to the display generation component, the output devices including one or more tactile output generators and/or one or more audio output devices. In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI through a stylus and/or finger contacts and gestures on the touch-sensitive surface, movement of the user's eyes and hand in space relative to the GUI (and/or computer system) or the user's body as captured by cameras and other movement sensors, and/or voice inputs as captured by one or more audio input devices. In some embodiments, the functions performed through the interactions optionally include image editing, drawing, presenting, word processing, spreadsheet making, game playing, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital videoing, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are, optionally, included in a non-transitory computer readable storage medium or other computer program product configured for execution by one or more processors.
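
As an illustration of how inputs from the eye-tracking and hand-tracking components described above could drive the size behavior of claims 16-20, the following Swift sketch models that behavior as a small state machine: gaze alone triggers enlargement, the enlarged size persists while either the gaze or the direct input remains, and the object reverts only once both have ended. The per-frame Boolean sampling and all names are assumptions made for illustration, not the patent's implementation.

```swift
// Hypothetical state machine for the enlargement behavior of claims 16-20.
// In practice the inputs would come from eye-tracking and hand-tracking
// components; here they are plain Booleans sampled once per frame.

enum ObjectSize { case first, second } // second is larger, relative to the 3D environment

struct HoverInputs {
    var gazeOnObject: Bool        // eye tracking: attention is directed toward the object
    var directInputOnObject: Bool // hand tracking: the respective type of input at the object's location
}

/// Returns the size the object should have on the next frame.
func nextSize(current: ObjectSize, inputs: HoverInputs) -> ObjectSize {
    switch current {
    case .first:
        // Claims 16-17: gaze directed toward the object, without any other
        // input being required, satisfies the third criteria and enlarges it.
        return inputs.gazeOnObject ? .second : .first
    case .second:
        // Claims 19-20: the enlarged size is maintained while either the gaze
        // or the direct input persists; claim 18: revert once both have ended.
        return (inputs.gazeOnObject || inputs.directInputOnObject) ? .second : .first
    }
}
```

Note the asymmetry the claims spell out: either modality alone is enough to keep the object enlarged, but only gaze initiates the enlargement.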