CN-122018689-A - Apparatus, method and graphical user interface for content application
Abstract
The present disclosure relates to devices, methods, and graphical user interfaces for content applications. In some implementations, the computer system generates a virtual lighting effect when presenting the content item. In some implementations, the computer system generates an animated three-dimensional object when rendering the content item. In some embodiments, the computer system displays a reduced user interface in place of the expanded user interface in response to a different input.
Inventors
- A. Rockwell
- P.D. Anton
- I. Alola
- M. A. Franco
- S. Harry Kumar
- T.C. Dahl
- D. Ruan can
- M. Stauber
- A. B. Vergo Wagner
- S.O. Lemme
- W. A. Solentino III
Assignees
- Apple Inc.
Dates
- Publication Date: 2026-05-12
- Application Date: 2024-05-31
- Priority Date: 2023-06-03
Claims (18)
- 1. A method, the method comprising: at a computer system in communication with a display generation component and one or more input devices: displaying, via the display generation component, an expanded user interface of an application at a first location in a three-dimensional environment, wherein: the application is controlling playback of a first content item at the computer system, the expanded user interface includes a first selectable user interface object that is selectable to initiate playback, at the computer system, of a second content item different from the first content item, and the expanded user interface is displayed concurrently with a second selectable user interface object, the second selectable user interface object being displayed at a second location in the three-dimensional environment separate from the expanded user interface; while displaying the expanded user interface of the application at the first location in the three-dimensional environment and the second selectable user interface object at the second location in the three-dimensional environment, receiving, via the one or more input devices, a first input corresponding to selection of the second selectable user interface object; and in response to receiving the first input: displaying, in the three-dimensional environment, a reduced user interface of the application for controlling the playback of the first content item at the computer system, wherein the reduced user interface is displayed at a third location in the three-dimensional environment that is different from the first location and the second location, and ceasing to display the expanded user interface and the second selectable user interface object in the three-dimensional environment.
- 2. The method of claim 1, further comprising: displaying, via the display generation component, a playback control user interface of the application, separate from the expanded user interface, concurrently with the expanded user interface in the three-dimensional environment, the playback control user interface for controlling playback of the first content item.
- 3. The method of claim 2, wherein the playback control user interface comprises an image corresponding to the first content item and a third selectable user interface object for modifying playback of the first content item.
- 4. The method of claim 3, wherein the second selectable user interface object comprises the image corresponding to the first content item.
- 5. The method of claim 1, further comprising: while the reduced user interface of the application is displayed, receiving, via the one or more input devices, a second input corresponding to a request to display, in the three-dimensional environment, a second user interface corresponding to a second application different from the application; and in response to receiving the second input corresponding to the request to display the second user interface: concurrently displaying, in the three-dimensional environment, the reduced user interface of the application and the second user interface corresponding to the second application.
- 6. The method of claim 5, further comprising: while the second user interface corresponding to the second application is displayed, receiving, via the one or more input devices, a third input corresponding to a request to display a third user interface corresponding to a third application different from the application and the second application; and in response to receiving the third input corresponding to the request to display the third user interface: ceasing to display the second user interface, and displaying the third user interface.
- 7. The method of claim 5, wherein the reduced user interface includes a third selectable user interface object that is selectable to redisplay the expanded user interface, the method further comprising: while the third selectable user interface object is displayed concurrently with the reduced user interface, receiving, via the one or more input devices, a fourth input corresponding to selection of the third selectable user interface object; and in response to receiving the fourth input corresponding to the selection of the third selectable user interface object: displaying the expanded user interface, and ceasing to display the second user interface corresponding to the second application.
- 8. The method of claim 1, further comprising: while the reduced user interface is displayed, receiving, via the one or more input devices, a second input corresponding to a request to resize the reduced user interface; and in response to receiving the second input corresponding to the request to resize the reduced user interface, updating a size of the reduced user interface in the three-dimensional environment in accordance with the second input.
- 9. The method of claim 2, wherein the reduced user interface includes a third selectable user interface object selectable to display a representation of lyrics of the first content item, and the playback control user interface includes a fourth selectable user interface object selectable to display the representation of the lyrics of the first content item.
- 10. The method of claim 9, further comprising: while the reduced user interface including the third selectable user interface object is displayed, receiving, via the one or more input devices, a second input corresponding to selection of the third selectable user interface object; and in response to receiving the second input, displaying the representation of the lyrics of the first content item in the three-dimensional environment concurrently with the reduced user interface, wherein the representation of the lyrics is displayed at a fourth location in the three-dimensional environment and the reduced user interface is displayed at a fifth location in the three-dimensional environment that is different from the fourth location.
- 11. The method of claim 9, further comprising: while the representation of the lyrics is displayed at a fourth location in the three-dimensional environment and the reduced user interface is displayed at a fifth location in the three-dimensional environment, receiving, via the one or more input devices, a second input corresponding to a request to move the reduced user interface; and in response to receiving the second input: moving the reduced user interface to a sixth location in the three-dimensional environment in accordance with the second input, and maintaining the representation of the lyrics of the first content item at the fourth location in the three-dimensional environment.
- 12. The method of claim 9, further comprising: while the representation of the lyrics is displayed at a fourth location in the three-dimensional environment and the reduced user interface is displayed at a fifth location in the three-dimensional environment, receiving, via the one or more input devices, a second input corresponding to a request to move the representation of the lyrics of the first content item; and in response to receiving the second input corresponding to the request to move the representation of the lyrics of the first content item: moving the representation of the lyrics of the first content item to a sixth location in the three-dimensional environment, and continuing to display the reduced user interface at the fifth location.
- 13. The method of claim 9, further comprising: while the representation of the lyrics is displayed at a first size in the three-dimensional environment, receiving, via the one or more input devices, a second input corresponding to a request to modify a size of the representation of the lyrics of the first content item; and in response to receiving the second input: modifying the representation of the lyrics of the first content item to have a second size, different from the first size, in the three-dimensional environment in accordance with the second input, and maintaining a size of the reduced user interface in the three-dimensional environment.
- 14. The method of claim 1, further comprising: while displaying the expanded user interface at the first location in the three-dimensional environment concurrently with a third selectable user interface object that is selectable to display a representation of lyrics of the first content item, receiving, via the one or more input devices, a second input corresponding to selection of the third selectable user interface object; and in response to receiving the second input: ceasing to display the expanded user interface at the first location in the three-dimensional environment, and displaying the representation of the lyrics of the first content item at the first location in the three-dimensional environment.
- 15. The method of claim 14, wherein displaying the representation of the lyrics of the first content item at the first location comprises displaying the representation of the lyrics of the first content item at a size different from a size of the expanded user interface displayed when the second input was detected.
- 16. The method of claim 14, wherein, when the second input is detected, the expanded user interface is displayed concurrently with a playback control user interface comprising the second selectable user interface object, the method further comprising: in response to receiving the second input, displaying the representation of the lyrics of the first content item concurrently with the playback control user interface; while the representation of the lyrics of the first content item is displayed concurrently with the playback control user interface, receiving, via the one or more input devices, a third input corresponding to a request to move the playback control user interface in the three-dimensional environment; and in response to receiving the third input: moving the playback control user interface in the three-dimensional environment in accordance with the third input, and moving the representation of the lyrics of the first content item in the three-dimensional environment in accordance with the third input.
- 17. A computer system in communication with a display generation component and one or more input devices, the computer system comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the methods of claims 1-16.
- 18. A non-transitory computer-readable storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by one or more processors of a computer system in communication with a display generation component and one or more input devices, cause the computer system to perform any of the methods of claims 1-16.
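To make the interaction recited in claim 1 easier to follow, the state transition it describes can be sketched in code. The following Python model is purely illustrative and is not part of the patent: the class, attribute, and method names, as well as the example coordinates, are all assumptions chosen for readability.

```python
from dataclasses import dataclass, field


@dataclass
class ContentAppUI:
    """Toy model of claim 1: an expanded player UI and a separate selectable
    object coexist at distinct locations until the object is selected, at
    which point a reduced UI replaces both at a third, distinct location."""
    expanded_location: tuple = (0, 0, 1)   # "first location"
    selector_location: tuple = (1, 0, 1)   # "second location"
    reduced_location: tuple = (0, 1, 2)    # "third location"
    visible: dict = field(default_factory=lambda: {
        "expanded_ui": True,
        "selector": True,
        "reduced_ui": False,
    })

    def select_secondary_object(self):
        """Handle the "first input": selection of the second selectable
        user interface object."""
        if not self.visible["selector"]:
            raise ValueError("selector is not currently displayed")
        # Display the reduced user interface at the third location...
        self.visible["reduced_ui"] = True
        # ...and cease displaying the expanded UI and the selector.
        self.visible["expanded_ui"] = False
        self.visible["selector"] = False


ui = ContentAppUI()
ui.select_secondary_object()
print(ui.visible)
# {'expanded_ui': False, 'selector': False, 'reduced_ui': True}
```

The claims that follow (e.g., claims 7 and 14) describe further transitions of the same kind (redisplaying the expanded UI, swapping in a lyrics view), which would extend this model with additional handlers mutating the same visibility state.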
Description
Apparatus, method and graphical user interface for content application

The present application is a divisional application of the patent application with application number 202480036952.0, filed May 31, 2024, and entitled "Apparatus, method and graphical user interface for content application".

Cross Reference to Related Applications
The present application claims the benefit of U.S. Provisional Application No. 63/506,072, filed June 3, 2023, the contents of which are hereby incorporated by reference in their entirety for all purposes.

Technical Field
The present disclosure relates generally to computer systems that provide a computer-generated experience, including but not limited to electronic devices that provide a user interface for presenting and browsing content via a display.

Background
In recent years, the development of computer systems for augmented reality has increased significantly. An example augmented reality environment includes at least some virtual elements that replace or augment the physical world. Input devices for computer systems and other electronic computing devices, such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch-screen displays, are used to interact with virtual/augmented reality environments. Example virtual elements include virtual objects such as digital images, videos, text, icons, and control elements such as buttons and other graphics.

Disclosure of Invention
Some methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome, inefficient, and limited.
For example, systems that provide insufficient feedback for actions associated with virtual objects, systems that require a series of inputs to achieve a desired result in an augmented reality environment, and systems in which manipulating virtual objects is complex, tedious, and error-prone create a significant cognitive burden on the user and detract from the experience of the virtual/augmented reality environment. In addition, these methods take longer than necessary, wasting the computer system's energy; this latter consideration is particularly important in battery-powered devices.

Accordingly, there is a need for computer systems with improved methods and interfaces that provide users with computer-generated experiences and make user interactions with the computer system more efficient and intuitive. Such methods and interfaces optionally complement or replace conventional methods for providing an augmented reality experience to a user. Such methods and interfaces reduce the number, extent, and/or nature of inputs from a user by helping the user understand the association between the inputs provided and the device's responses to those inputs, thereby producing a more efficient human-machine interface.

The above-described drawbacks and other problems associated with user interfaces of computer systems are reduced or eliminated by the disclosed systems. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device such as a watch or a head-mounted device). In some embodiments, the computer system has a touch pad. In some embodiments, the computer system has one or more cameras.
In some embodiments, the computer system has (e.g., includes or communicates with) a display generation component (e.g., a display device such as a head-mounted device (HMD), a display, a projector, a touch-sensitive display (also referred to as a "touch screen" or "touch screen display"), or another device or component that presents visual content to a user, such as visual content generated on or in the display generation component itself or otherwise made visible from the display generation component). In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory, and one or more modules, programs, or sets of instructions stored in the memory for performing a plurality of functions. In some embodiments, the user interacts with the GUI through contact and gestures of a stylus and/or finger on a touch-sensitive surface, through movement of the user's eyes and hands in space relative to the GUI (and/or computer system) or the user's body (as captured by cameras and other movement sensors), and/or through voice input (as captured by one or more audio input devices). In some embodiments, the functions performed through these interactions optionally include image editing, drawing, presentation, word processing, spreadsheet making, game playing, phone calls, video conferencing, email sending and receiving, instant messaging