US-12620179-B2 - Digital assistant object placement

Abstract

Systems and processes for operating an intelligent automated assistant within a computer-generated reality (CGR) environment are provided. For example, a user input invoking a digital assistant session is received, and in response, a digital assistant session is initiated. Initiating the digital assistant session includes positioning a digital assistant object at a first location within the CGR environment but outside of the currently-displayed portion of the CGR environment at a first time, and providing a first output indicating the location of the digital assistant object.
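The abstract describes a three-step flow: detect an invoking user input, place the digital assistant object at a location currently outside the displayed portion of the CGR environment, and emit an output indicating where it was placed. Below is a minimal, hypothetical Swift sketch of that flow; the FieldOfView, AssistantCue, and DigitalAssistantSession names and the cone-based visibility test are illustrative assumptions, not taken from the patent.

    // Hypothetical stand-in for the currently displayed portion of the CGR environment.
    struct FieldOfView {
        var origin: SIMD3<Float>       // user's viewpoint in world coordinates
        var forward: SIMD3<Float>      // unit vector along the user's gaze
        var cosHalfAngle: Float        // cosine of half the displayed field of view

        // True if `point` falls inside the currently displayed portion.
        func contains(_ point: SIMD3<Float>) -> Bool {
            let toPoint = point - origin
            let length = (toPoint * toPoint).sum().squareRoot()
            guard length > 0 else { return true }
            let cosine = (toPoint * forward).sum() / length
            return cosine >= cosHalfAngle
        }
    }

    // Possible "first outputs" indicating the assistant object's location.
    enum AssistantCue {
        case spatialAudio(from: SIMD3<Float>)
        case haptic
        case visualHint(toward: SIMD3<Float>)
    }

    struct DigitalAssistantSession {
        var objectLocation: SIMD3<Float>?

        // Called once a first user input satisfies the invocation criterion:
        // place the assistant object at a location outside the displayed portion,
        // then provide a first output indicating that location.
        mutating func begin(fieldOfView: FieldOfView,
                            candidates: [SIMD3<Float>],
                            emit: (AssistantCue) -> Void) {
            guard let location = candidates.first(where: { !fieldOfView.contains($0) }) else { return }
            objectLocation = location
            emit(.spatialAudio(from: location))   // e.g., a spatialized chime from that direction
            emit(.visualHint(toward: location))   // e.g., an on-screen arrow toward it
        }
    }

A real system would also have to derive the candidate locations from the scene (compare claim 5's "one or more environmental factors"), which this sketch leaves to the caller.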

Inventors

  • Brad K. HERMAN
  • William A. Sorrentino, III
  • Jose Antonio CHECA OLORIZ
  • Lynn I. STREJA
  • Garrett L. Weinberg
  • Isar ARASON
  • Pedro MARI
  • Shiraz AKMAL
  • Stephen O. Lemay
  • James J. Owen
  • Miquel ESTANY RODRIGUEZ
  • Jay MOON

Assignees

  • APPLE INC.

Dates

Publication Date
2026-05-05
Application Date
2024-02-06

Claims (20)

  1. An electronic device, comprising: a display; one or more sensors; one or more processors; a memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: while displaying a portion of a computer-generated reality (CGR) environment representing a current field of view of a user of the electronic device: detecting, with the one or more sensors, a first user input; in accordance with a determination that the first user input satisfies at least one criterion for initiating a digital assistant session, initiating a first digital assistant session, wherein initiating the first digital assistant session includes positioning a digital assistant object at a first location within the CGR environment and outside of the displayed portion of the CGR environment at a first time; and providing a first output indicating the first location of the digital assistant object within the CGR environment.
  2. The electronic device of claim 1, wherein the first user input includes an audio input.
  3. The electronic device of claim 1, wherein the first user input includes a gaze input.
  4. The electronic device of claim 1, wherein the first user input includes a gesture input.
  5. The electronic device of claim 1, the one or more programs further including instructions for: determining the first location within the CGR environment based on one or more environmental factors.
  6. The electronic device of claim 1, wherein providing the first output indicating the first location includes causing a first audio output to be produced.
  7. The electronic device of claim 1, wherein providing the first output indicating the first location includes causing a first haptic output to be produced.
  8. The electronic device of claim 1, wherein providing the first output indicating the first location includes displaying, on the display, a visual indication of the first location.
  9. The electronic device of claim 1, the one or more programs further including instructions for: at a second time: in accordance with a determination that the first location is within the displayed portion of the CGR environment at the second time, displaying the digital assistant object at the first location.
  10. The electronic device of claim 1, the one or more programs further including instructions for: detecting, with the one or more sensors, a second user input at a third time; determining an intent of the second user input; and providing a second output based on the determined intent.
  11. The electronic device of claim 1, the one or more programs further including instructions for: dismissing the digital assistant object; and providing a fourth output indicating a dismissal of the digital assistant object.
  12. The electronic device of claim 1, the one or more programs further including instructions for: providing a fifth output selected from two or more different outputs indicating a state selected from two or more different states of the first digital assistant session.
  13. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device with a display and one or more sensors, cause the electronic device to: while displaying a portion of a computer-generated reality (CGR) environment representing a current field of view of a user of the electronic device: detecting, with the one or more sensors, a first user input; in accordance with a determination that the first user input satisfies at least one criterion for initiating a digital assistant session, initiating a first digital assistant session, wherein initiating the first digital assistant session includes positioning a digital assistant object at a first location within the CGR environment and outside of the displayed portion of the CGR environment at a first time; and providing a first output indicating the first location of the digital assistant object within the CGR environment.
  14. The non-transitory computer-readable storage medium of claim 13, wherein the first user input includes an audio input.
  15. The non-transitory computer-readable storage medium of claim 13, wherein the first user input includes a gaze input.
  16. The non-transitory computer-readable storage medium of claim 13, wherein the first user input includes a gesture input.
  17. The non-transitory computer-readable storage medium of claim 13, the one or more programs further including instructions for: determining the first location within the CGR environment based on one or more environmental factors.
  18. The non-transitory computer-readable storage medium of claim 13, wherein providing the first output indicating the first location includes causing a first audio output to be produced.
  19. The non-transitory computer-readable storage medium of claim 13, wherein providing the first output indicating the first location includes causing a first haptic output to be produced.
  20. The non-transitory computer-readable storage medium of claim 13, wherein providing the first output indicating the first location includes displaying, on the display, a visual indication of the first location.
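Several dependent claims refine the base method, notably deferred display of the assistant object once its location enters the displayed portion (claim 9) and outputs that vary with the session state (claim 12). The following is a hedged sketch of how such checks might look, reusing the hypothetical FieldOfView, AssistantCue, and DigitalAssistantSession types from the sketch under the abstract; it is an illustration under those assumptions, not the patented implementation.

    enum AssistantState { case listening, processing, responding }

    extension DigitalAssistantSession {
        // In the spirit of claim 9: once the first location is within the displayed
        // portion of the CGR environment, display the assistant object there.
        func renderIfVisible(fieldOfView: FieldOfView, draw: (SIMD3<Float>) -> Void) {
            if let location = objectLocation, fieldOfView.contains(location) {
                draw(location)
            }
        }

        // In the spirit of claim 12: provide an output selected from two or more
        // different outputs according to the session's current state.
        func announce(_ state: AssistantState, emit: (AssistantCue) -> Void) {
            guard let location = objectLocation else { return }
            switch state {
            case .listening:  emit(.spatialAudio(from: location))
            case .processing: emit(.haptic)
            case .responding: emit(.visualHint(toward: location))
            }
        }
    }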

Description

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of PCT Patent Application Serial No. PCT/US2022/040346, entitled "DIGITAL ASSISTANT OBJECT PLACEMENT," filed on Aug. 15, 2022, which claims priority to U.S. Patent Application Ser. No. 63/247,557, entitled "DIGITAL ASSISTANT OBJECT PLACEMENT," filed on Sep. 23, 2021; and claims priority to U.S. Patent Application Ser. No. 63/235,424, entitled "DIGITAL ASSISTANT OBJECT PLACEMENT," filed on Aug. 20, 2021. The contents of each of these applications are incorporated herein by reference in their entirety.

FIELD

This relates generally to digital assistants and, more specifically, to placing an object representing a digital assistant in a computer-generated reality (CGR) environment.

BACKGROUND

Digital assistants can act as a beneficial interface between human users and their electronic devices, for instance, using spoken or typed natural language, gestures, or other convenient or intuitive input modes. For example, a user can utter a natural-language request to a digital assistant of an electronic device. The digital assistant can interpret the user's intent from the speech input and operationalize the user's intent into tasks. The tasks can then be performed by executing one or more services of the electronic device, and a relevant output responsive to the user request can be returned to the user.
Unlike the physical world, which a person can interact with and perceive without the use of an electronic device, an electronic device is used to interact with and/or perceive a computer-generated reality (CGR) environment that is wholly or partially simulated. The CGR environment can include mixed reality (MR) content, augmented reality (AR) content, virtual reality (VR) content, and/or the like. One way to interact with a CGR system is by tracking some of a person's physical motions and, in response, adjusting characteristics of elements simulated in the CGR environment in a manner that seems to comply with at least one law of physics. For example, as a user moves the device presenting the CGR environment and/or the user's head, the CGR system can detect the movement and adjust the graphical content according to the user's point of view and the auditory content to create the effect of spatial sound. In some situations, the CGR system can adjust characteristics of the CGR content in response to user inputs, such as button inputs or vocal commands.

Many different electronic devices and/or systems can be used to interact with and/or perceive the CGR environment, such as heads-up displays (HUDs), head-mountable systems, projection-based systems, headphones/earphones, speaker arrays, smartphones, tablets, and desktop/laptop computers. For example, a head-mountable system may include one or more speakers (e.g., a speaker array); an integrated or external opaque, translucent, or transparent display; image sensors to capture video of the physical environment; and/or microphones to capture audio of the physical environment. The display may be implemented using a variety of display technologies, including uLEDs, OLEDs, LEDs, liquid crystal on silicon, laser scanning light sources, digital light projection, and so forth, and may implement an optical waveguide, optical reflector, hologram medium, optical combiner, combinations thereof, or similar technologies as a medium through which light is directed to a user's eyes. In implementations with transparent or translucent displays, the transparent or translucent display may also be controlled to become opaque. The display may implement a projection-based system that projects images onto users' retinas and/or projects virtual CGR elements into the physical environment (e.g., as a hologram, or projection-mapped onto a physical surface or object).

An electronic device may be used to implement a digital assistant in a CGR environment. Implementing a digital assistant in a CGR environment may help a user of the electronic device to interact with the CGR environment, and may allow the user to access digital assistant functionality without needing to cease interaction with the CGR environment. However, because the interface of a CGR environment may be large and complex (e.g., a CGR environment may fill and extend beyond a user's field of view), invoking and interacting with a digital assistant within the CGR environment can be difficult, confusing, or can detract from the immersion of the CGR environment.

SUMMARY

Example methods are disclosed herein. An example method includes, at an electronic device having one or more processors, memory, a display, and one or more sensors: while displaying a portion of a computer-generated reality (CGR) environment representing a current field of view of a user of the electronic device: detecting, with the one or more sensors, a first user input; in accordance with a determination that the first user input satisfies at least one criterion for initiating a digital assistant