
US-12625596-B1 - Voice communication targeting user interface

US 12625596 B1

Abstract

User interfaces may enable users to initiate voice communications with voice-controlled devices via a Wi-Fi network or other network via an Internet Protocol (IP) address. The user interfaces may include controls to enable users to initiate voice communications, such as Voice over Internet Protocol (VoIP) calls, with devices that do not have connectivity with traditional mobile telephone networks, such as traditional circuit transmissions of a Public Switched Telephone Network (PSTN). For example, the user interface may enable initiating a voice communication with a voice-controlled device that includes network connectivity via a home Wi-Fi network. The user interfaces may indicate availability of devices and/or contacts for voice communications and/or recent activity of devices or contacts.
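The targeting behavior the abstract describes can be illustrated with a small sketch. Assuming a hypothetical model in which each target (a contact profile or a device) carries an availability flag and an activity indicator, a messaging UI might filter selectable recipients as below. All class, field, and function names here are invented for illustration; the patent does not prescribe any data model.

```python
from dataclasses import dataclass
from enum import Enum

class Activity(Enum):
    PLAYING = "audio currently being output"
    PAUSED = "audio output has been paused"
    NONE = "no audio output associated with the device"

@dataclass
class Target:
    name: str
    is_device: bool    # device representation vs. user-profile representation
    available: bool    # can it receive a voice message right now?
    activity: Activity # recent-activity indicator shown with the representation

def selectable_targets(targets):
    """Return only representations the UI should offer as message recipients."""
    return [t for t in targets if t.available]

targets = [
    Target("Kitchen speaker", True, True, Activity.PLAYING),
    Target("Bedroom speaker", True, False, Activity.NONE),
    Target("Alice (profile)", False, True, Activity.NONE),
]
names = [t.name for t in selectable_targets(targets)]
# only the available kitchen speaker and Alice's profile remain selectable
```

The availability flag corresponds to the availability indications in the abstract, and the `Activity` value to the recent-activity indications; a real UI would also render the device-type icons the claims mention.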

Inventors

  • Blair Harold Beebe
  • Katherine Ann Baker
  • David Michael Rowell
  • Peter Chin

Assignees

  • AMAZON TECHNOLOGIES, INC.

Dates

Publication Date
2026-05-12
Application Date
2023-10-11

Claims (20)

  1. A method comprising: generating a user interface including representations of at least one of user profiles or devices, wherein the user interface is configured to receive user input data to customize which of the at least one of the user profiles or the devices will receive audio data representing a voice message; causing the user interface to display an activity indicator in association with the representations; receiving, utilizing the user interface, the user input data indicating a first device to receive the audio data; receiving the audio data representing the voice message; and causing the audio data to be output on the first device based at least in part on the user input data.
  2. The method of claim 1, wherein the representations of the at least one of the user profiles or the devices include a location indicator assigned to the at least one of the user profiles or the devices by a speech processing system associated with the at least one of the user profiles or the devices.
  3. The method of claim 1, wherein the activity indicator includes at least one of: a first indication that audio is currently being output; a second indication that output of the audio has been paused; or a third indication that audio output is unassociated with the devices.
  4. The method of claim 1, further comprising causing the representations to include: a first indication that the first device is available for receiving the audio data representing the voice message; and a second indication that a second device of the devices is not available for receiving the audio data representing the voice message.
  5. The method of claim 1, wherein the representations associated with the devices include an icon indicating a device type of individual ones of the devices, the device type indicating that the devices are configured to receive the audio data representing the voice message.
  6. The method of claim 1, wherein the user input data indicates a second device to receive the audio data, and the method further comprises causing the audio data to be output on the first device and the second device concurrently based at least in part on the user input data.
  7. The method of claim 6, further comprising: determining that the second device is unable to receive the voice message; based at least in part on the second device being unable to receive the voice message, causing the user interface to display an option to enable receipt of the voice message at the second device; and based at least in part on receiving selection of the option, enabling receipt of the voice message at the second device.
  8. The method of claim 1, wherein: individual ones of the representations of the devices represent a communal device associated with account data; and the user input data is received from a second device associated with the account data.
  9. The method of claim 1, wherein: the first device is a communal device; and the user input data is received from a mobile device associated with the communal device.
  10. The method of claim 1, wherein selection of a representation of a user profile of the user profiles enables the user profile to send voice messages to the devices when a second device associated with the user profile is located in a different location from the devices.
  11. A system comprising: one or more processors; and non-transitory computer-readable media storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: generating a user interface including representations of at least one of user profiles or devices, wherein the user interface is configured to receive user input data to customize which of the at least one of the user profiles or the devices will receive audio data representing a voice message; causing the user interface to display an activity indicator in association with the representations; receiving, utilizing the user interface, the user input data indicating a first device to receive the audio data; receiving the audio data representing the voice message; and causing the audio data to be output on the first device based at least in part on the user input data.
  12. The system of claim 11, wherein the representations of the at least one of the user profiles or the devices include a location indicator assigned to the at least one of the user profiles or the devices by a speech processing system associated with the at least one of the user profiles or the devices.
  13. The system of claim 11, wherein the activity indicator includes at least one of: a first indication that audio is currently being output; a second indication that output of the audio has been paused; or a third indication that audio output is unassociated with the devices.
  14. The system of claim 11, the operations further comprising causing the representations to include: a first indication that the first device is available for receiving the audio data representing the voice message; and a second indication that a second device of the devices is not available for receiving the audio data representing the voice message.
  15. The system of claim 11, wherein the representations associated with the devices include an icon indicating a device type of individual ones of the devices, the device type indicating that the devices are configured to receive the audio data representing the voice message.
  16. The system of claim 11, wherein the user input data indicates a second device to receive the audio data, and the operations further comprise causing the audio data to be output on the first device and the second device concurrently based at least in part on the user input data.
  17. The system of claim 16, the operations further comprising: determining that the second device is unable to receive the voice message; based at least in part on the second device being unable to receive the voice message, causing the user interface to display an option to enable receipt of the voice message at the second device; and based at least in part on receiving selection of the option, enabling receipt of the voice message at the second device.
  18. The system of claim 11, wherein: each of the representations of the devices represents a communal device associated with account data; and the user input data is received from a second device associated with the account data.
  19. The system of claim 11, wherein: the first device is a communal device; and the user input data is received from a mobile device associated with the communal device.
  20. The system of claim 11, wherein selection of a representation of a user profile of the user profiles enables the user profile to send voice messages to the devices when a second device associated with the user profile is located in a different location from the devices.
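Read as a procedure, independent claims 1 and 11 describe roughly the following flow: present selectable targets, accept the user's selection, receive the voice message, and route it to the selected device(s), with claim 7's fallback of offering to enable a device that cannot currently receive messages. The sketch below is a loose illustration in Python; every function and field name is invented for the example, and the claims do not prescribe any particular implementation.

```python
def deliver_voice_message(devices, selected_ids, audio_data, confirm_enable):
    """Route a voice message to user-selected devices (illustrative only).

    devices: dict of device_id -> {"enabled": bool}
    selected_ids: device ids chosen via the targeting user interface
    audio_data: the received audio data representing the voice message
    confirm_enable: callback modeling the UI option to enable receipt
                    at a device that is currently unable to receive it
    """
    delivered = []
    for device_id in selected_ids:
        device = devices[device_id]
        if not device["enabled"]:
            # Claim 7: display an option to enable receipt; proceed on consent.
            if not confirm_enable(device_id):
                continue
            device["enabled"] = True
        # Claims 6/16: messages to multiple selected devices are output
        # concurrently; here we simply record the routing decision.
        delivered.append((device_id, audio_data))
    return delivered

devices = {"kitchen": {"enabled": True}, "garage": {"enabled": False}}
out = deliver_voice_message(devices, ["kitchen", "garage"], b"hi", lambda d: True)
# both devices receive the message once the garage device is enabled
```

In a real system the concurrent output of claims 6 and 16 would involve synchronized playback rather than a list append, and `confirm_enable` would be a UI prompt rather than a callback.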

Description

CROSS REFERENCE TO RELATED APPLICATION

This patent application is a continuation of and claims priority to U.S. patent application Ser. No. 17/549,415, filed Dec. 13, 2021, which is a continuation of and claims priority to U.S. patent application Ser. No. 16/797,592, filed Feb. 21, 2020, now U.S. Pat. No. 11,204,685, which issued on Dec. 21, 2021, which is a continuation of and claims priority to U.S. patent application Ser. No. 15/632,279, filed Jun. 23, 2017, now U.S. Pat. No. 10,572,107, which issued on Feb. 25, 2020, the entire contents of which are incorporated herein by reference.

BACKGROUND

Voice-controlled devices have gained popularity based in part on improvements in automated speech recognition and natural language understanding algorithms and the convenience provided by these devices, which allow hands-free operation. During operation, a voice-controlled device receives, via one or more microphones, spoken commands from a user. The voice-controlled device then processes the spoken commands locally, with remote processing assistance, or both, and generates a response or performs an action. The voice-controlled device may then provide the response, such as by outputting audible sound via a speaker, or by performing the action (e.g., playing music, etc.).

Some voice-controlled devices may be stand-alone devices with few or no physical controls and possibly without a display, unlike standard mobile telephones. Like many mobile telephones, voice-controlled devices may execute applications to enable execution of various tasks, such as responding to questions, playing music, ordering products, and/or performing other tasks. The voice-controlled device may share information with other devices, which may or may not be voice-controlled, via a Wi-Fi network.

BRIEF DESCRIPTION OF THE DRAWINGS

The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The same reference numbers in different figures indicate similar or identical items.

FIG. 1 is a schematic diagram of an illustrative environment to provide voice communications with at least some voice-controlled devices.

FIG. 2 is a block diagram of an illustrative computer architecture to provide one or more voice communication targeting user interfaces.

FIG. 3 is a schematic diagram of illustrative user interfaces depicting access from a contact card to representations of devices configured for voice communication.

FIG. 4 is a schematic diagram of illustrative user interfaces depicting access from a contact card to a voice communication.

FIG. 5 is a schematic diagram of illustrative user interfaces depicting access from a conversation interface to a specific available voice communication, where the access may indicate recent activity associated with a corresponding device configured to receive a voice communication.

FIG. 6 is a schematic diagram of illustrative user interfaces depicting access from a conversation interface to a specific available voice communication of a contact that has granted permission for voice communications, where the access may indicate recent activity associated with a corresponding device configured to receive a voice communication.

FIG. 7 is a schematic diagram of illustrative user interfaces depicting access from a conversation interface to representations of devices that are available for voice communications.

FIG. 8 is a schematic diagram of illustrative user interfaces depicting access from a conversation interface to representations of devices that are available for voice communications, where at least some representations indicate recent activity associated with a corresponding device configured to receive a voice communication.

FIG. 9 is a schematic diagram of illustrative user interfaces depicting access from representations of devices that are available for voice communications to a voice communication.

FIG. 10 is a schematic diagram of illustrative user interfaces depicting access from representations of devices that are available for voice communications to a voice communication, where at least some representations indicate recent activity associated with a corresponding device configured to receive a voice communication.

FIG. 11 is a flow diagram of an illustrative process to generate a user interface with representations of devices configured for voice communications.

FIG. 12 is a flow diagram of an illustrative process to share recent activity information associated with a device configured for voice communications.

DETAILED DESCRIPTION

This disclosure is directed to generating user interfaces to enable users to initiate voice communications with voice-controlled devices via a Wi-Fi network or other network via an Internet Protocol (IP) address. The user interfaces may be provided on and/or generated by devices that include a display, such as mobile telephones, tab