US-12625546-B2 - User interfaces for gaze tracking enrollment
Abstract
Gaze enrollment, including displaying an enrollment progress user indicator, animating movement of user interface elements, changing the appearances of user interface elements, and/or moving a user interface element over time, enables a computer system to more accurately track the gaze of a user of the computer system.
Inventors
- Giancarlo Yerkes
- Amy E. Dedonato
- Adam L. Amadio
- Kaely Coon
- Stephen O. Lemay
- William A. Sorrentino, III
- Lynn I. Streja
- Israel Pastrana Vicente
Assignees
- Apple Inc.
Dates
- Publication Date: 2026-05-12
- Application Date: 2023-09-21
Claims (20)
- 1. A computer system configured to communicate with one or more display generation components and one or more input devices, comprising: one or more processors; and memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: displaying, via the one or more display generation components, a first gaze enrollment user interface, wherein the first gaze enrollment user interface includes a first plurality of gaze target elements, including a first gaze target element and a second gaze target element; while displaying the first gaze enrollment user interface, detecting, via the one or more input devices, a selection input; and in response to detecting the selection input: in accordance with a determination that a gaze of a user was directed toward the first gaze target element when the selection input was detected, outputting first feedback indicating that gaze enrollment information corresponding to the first gaze target element has been recorded; and in accordance with a determination that the gaze of the user was not directed toward the first gaze target element when the selection input was detected, forgoing outputting the first feedback.
- 2. The computer system of claim 1, wherein the selection input comprises an air gesture input.
- 3. The computer system of claim 1, wherein outputting the first feedback indicating that gaze enrollment information corresponding to the first gaze target element has been recorded includes displaying, via the one or more display generation components, first visual feedback indicating that the gaze enrollment information corresponding to the first gaze target element has been recorded.
- 4. The computer system of claim 3, wherein displaying the first visual feedback comprises displaying the first gaze target element changing from having a first visual appearance to having a second visual appearance different from the first visual appearance.
- 5. The computer system of claim 4, the one or more programs further including instructions for: subsequent to displaying the first visual feedback, maintaining display of the first gaze target element having the second visual appearance.
- 6. The computer system of claim 3, wherein displaying the first visual feedback comprises displaying, via the one or more display generation components, a new gaze target element that was not previously displayed prior to detecting the selection input.
- 7. The computer system of claim 3, wherein displaying the first visual feedback comprises displaying the second gaze target element changing in appearance from having a third visual appearance to having a fourth visual appearance different from the third visual appearance.
- 8. The computer system of claim 7, wherein displaying the second gaze target element changing in appearance comprises: in accordance with a determination that gaze enrollment information corresponding to the second gaze target element has been recorded, fading out the second gaze target element.
- 9. The computer system of claim 1, wherein outputting the first feedback indicating that gaze enrollment information corresponding to the first gaze target element has been recorded includes outputting first audio feedback indicating that the gaze enrollment information corresponding to the first gaze target element has been recorded.
- 10. The computer system of claim 1, the one or more programs further including instructions for: in response to detecting the selection input: in accordance with a determination that the gaze of the user was directed toward the second gaze target element when the selection input was detected, outputting second feedback indicating that gaze enrollment information corresponding to the second gaze target element has been recorded.
- 11. The computer system of claim 1, the one or more programs further including instructions for: while displaying the first gaze enrollment user interface, detecting, via the one or more input devices, a gaze input from the user; and in response to detecting the gaze input from the user: in accordance with a determination that the gaze input is directed toward the first gaze target element, outputting first gaze feedback indicating that the gaze of the user is directed toward the first gaze target element.
- 12. The computer system of claim 11, wherein outputting first gaze feedback indicating that the gaze of the user is directed toward the first gaze target element includes outputting audio feedback indicating that the gaze of the user is directed toward the first gaze target element.
- 13. The computer system of claim 11, wherein outputting first gaze feedback indicating that the gaze of the user is directed toward the first gaze target element includes displaying, via the one or more display generation components, first gaze input visual feedback indicating that the gaze of the user is directed toward the first gaze target element.
- 14. The computer system of claim 13, wherein: displaying the first gaze input visual feedback comprises displaying the first gaze target element reducing in size from a first size to a first reduced size that is smaller than the first size; and the one or more programs further include instructions for: while displaying the first gaze target element at the first reduced size, detecting, via the one or more input devices, a second selection input while the gaze of the user is directed toward the first gaze target element, wherein the second selection input includes a first portion and a second portion; in response to detecting the first portion of the second selection input, displaying, via the one or more display generation components, the first gaze target element reducing in size from the first reduced size to a second reduced size that is smaller than the first reduced size; and in response to detecting the second portion of the second selection input, displaying, via the one or more display generation components, the first gaze target element growing in size from the second reduced size to a second size that is larger than the second reduced size.
- 15. The computer system of claim 13, wherein displaying the first gaze input visual feedback comprises displaying a first portion of the first gaze target element reducing in size while maintaining the size of a second portion of the first gaze target element.
- 16. The computer system of claim 1, wherein displaying the first gaze enrollment user interface further comprises concurrently displaying the first plurality of gaze target elements, including the first gaze target element and the second gaze target element.
- 17. The computer system of claim 16, wherein the first plurality of gaze target elements includes: the first gaze target element for which gaze enrollment information corresponding to the first gaze target element has not yet been recorded; and the second gaze target element for which gaze enrollment information corresponding to the second gaze target element has been recorded.
- 18. The computer system of claim 17, wherein: the first gaze target element is displayed in a first manner indicative of gaze enrollment information corresponding to the first gaze target element not yet being recorded; and the second gaze target element is displayed in a second manner different from the first manner and indicative of gaze enrollment information corresponding to the second gaze target element having previously been recorded.
- 19. The computer system of claim 16, wherein the first plurality of gaze target elements are selectable in a plurality of different orders.
- 20. The computer system of claim 1, the one or more programs further including instructions for: while displaying the first gaze enrollment user interface: in accordance with a determination that greater than a threshold duration of time has elapsed without receiving a selection input corresponding to a gaze target element of the first plurality of gaze target elements, displaying, via the one or more display generation components, a prompt instructing the user to provide a selection input corresponding to a gaze target element of the first plurality of gaze target elements.
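The selection-and-feedback logic recited in claims 1, 10, and 20 can be illustrated as a minimal sketch: record enrollment information for a target only when the user's gaze was on it at the moment of the selection input, forgo feedback otherwise, and prompt after a period of inactivity. All names here (`GazeTarget`, `GazeEnrollmentSession`, the timeout value) are hypothetical illustrations, not part of the claimed system.

```python
from dataclasses import dataclass


@dataclass
class GazeTarget:
    """One gaze target element in the enrollment UI (hypothetical model)."""
    name: str
    enrolled: bool = False


class GazeEnrollmentSession:
    """Tracks which gaze targets have had enrollment information recorded."""

    def __init__(self, targets, prompt_timeout=10.0):
        self.targets = {t.name: t for t in targets}
        # Seconds of inactivity before prompting the user (per claim 20);
        # the specific value is an assumption for illustration.
        self.prompt_timeout = prompt_timeout

    def handle_selection(self, gazed_target_name):
        """Process a selection input (e.g., an air gesture, per claim 2).

        If the user's gaze was directed at a known target when the selection
        was detected, record enrollment information for that target and
        return feedback; otherwise forgo the feedback (claims 1 and 10).
        """
        target = self.targets.get(gazed_target_name)
        if target is None:
            return None  # gaze was not on a target: forgo outputting feedback
        target.enrolled = True
        return f"recorded:{target.name}"

    def needs_prompt(self, seconds_since_last_selection):
        """Per claim 20, prompt the user after a threshold of inactivity."""
        return seconds_since_last_selection > self.prompt_timeout

    def all_enrolled(self):
        """True once every target has enrollment information recorded."""
        return all(t.enrolled for t in self.targets.values())
```

Note that the targets may be selected in any order (claim 19): the session only tracks per-target state, never a required sequence.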
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to U.S. Provisional Patent Application Ser. No. 63/522,091, entitled “USER INTERFACES FOR GAZE TRACKING ENROLLMENT,” filed Jun. 20, 2023; U.S. Provisional Patent Application Ser. No. 63/470,943, entitled “USER INTERFACES FOR GAZE TRACKING ENROLLMENT,” filed Jun. 4, 2023; and U.S. Provisional Patent Application Ser. No. 63/409,051, entitled “USER INTERFACES FOR GAZE TRACKING ENROLLMENT,” filed Sep. 22, 2022. The entire contents of these applications are hereby incorporated by reference in their entirety.
TECHNICAL FIELD
The present disclosure relates generally to computer systems that are in communication with a display generation component and one or more input devices that provide computer-generated experiences, including, but not limited to, electronic devices that provide virtual reality and mixed reality experiences via a display.
BACKGROUND
The development of computer systems for augmented reality has increased significantly in recent years. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices, such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch-screen displays for computer systems and other electronic computing devices are used to interact with virtual/augmented reality environments. Example virtual elements include virtual objects, such as digital images, video, text, icons, and control elements such as buttons and other graphics.
SUMMARY
Some methods and interfaces for gaze tracking enrollment are cumbersome, inefficient, and limited.
For example, systems that provide insufficient feedback for performing actions associated with gaze tracking enrollment, systems that require a series of inputs to achieve a desired outcome in an augmented reality environment, and systems in which manipulation of virtual objects is complex, tedious, and error-prone create a significant cognitive burden on a user and detract from the experience with the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy of the computer system. This latter consideration is particularly important in battery-operated devices.
Accordingly, there is a need for computer systems with improved methods and interfaces for gaze tracking enrollment and for providing computer-generated experiences to users that make interaction with the computer systems more efficient and intuitive for a user. Such methods and interfaces optionally complement or replace conventional methods for enrolling a user's gaze and providing extended reality experiences to users. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user by helping the user to understand the connection between provided inputs and device responses to the inputs, thereby creating a more efficient human-machine interface. The above deficiencies and other problems associated with user interfaces for computer systems are reduced or eliminated by the disclosed systems.
In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device, such as a watch, or a head-mounted device). In some embodiments, the computer system has a touchpad. In some embodiments, the computer system has one or more cameras.
In some embodiments, the computer system has a touch-sensitive display (also known as a “touch screen” or “touch-screen display”). In some embodiments, the computer system has one or more eye-tracking components. In some embodiments, the computer system has one or more hand-tracking components. In some embodiments, the computer system has one or more output devices in addition to the display generation component, the output devices including one or more tactile output generators and/or one or more audio output devices.
In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI through a stylus and/or finger contacts and gestures on the touch-sensitive surface, movement of the user's eyes and hand in space relative to the GUI (and/or computer system) or the user's body as captured by cameras and other movement sensors, and/or voice inputs as captured by one or more audio input devices. In some embodiments, the functions performed through the interactions optionally include image editing, drawing, presenting, word processing, spreadsheet making, game playing, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographi