US-20260129397-A1 - SYSTEMS AND METHODS FOR DELIVERING PERSONALIZED AUDIO TO MULTIPLE USERS SIMULTANEOUSLY THROUGH SPEAKERS

US20260129397A1

Abstract

Systems and methods are provided herein for generating personalized audio settings for different users listening to the same piece of media content. For example, the system may receive a first audio setting for a first user corresponding to a first volume level for a first frequency and a second audio setting for a second user corresponding to a second volume level for the first frequency. The system may then use the first audio setting, second audio setting, position of the first user, and position of the second user to calculate a weight for each speaker of a plurality of speakers. Each speaker of the plurality of speakers then outputs the first frequency at the respective calculated weight, resulting in the first user perceiving the first frequency at the first volume level and the second user perceiving the first frequency at the second volume level.
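The per-speaker weight calculation described above can be illustrated with a small sketch. This is not the patent's implementation: it assumes a simple inverse-distance attenuation model and a square system (as many speakers as listeners), and solves for the weight each speaker should apply to one frequency so that each user perceives their preferred volume level. All function and variable names are hypothetical.

```python
import math

def speaker_weights(speakers, users, targets):
    """Solve for per-speaker weights so each user perceives a target
    amplitude for a single frequency. Assumes inverse-distance
    attenuation (a modeling assumption, not specified by the patent)
    and requires len(speakers) == len(users)."""
    n = len(users)
    # Attenuation matrix: a[i][j] = contribution of speaker j at user i.
    a = [[1.0 / math.dist(u, s) for s in speakers] for u in users]
    # Gauss-Jordan elimination on the small augmented system a @ w = targets.
    m = [row[:] + [t] for row, t in zip(a, targets)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

# Two speakers 4 m apart, two users between them, different target levels.
speakers = [(0.0, 0.0), (4.0, 0.0)]
users = [(1.0, 0.0), (3.0, 0.0)]
weights = speaker_weights(speakers, users, [0.8, 0.5])
```

Each user's perceived level is then the weighted sum of all speaker outputs after attenuation, so the nearer user hears 0.8 and the farther user hears 0.5 at that frequency, even though both hear both speakers.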

Inventors

  • Ning Xu
  • Zhiyun Li

Assignees

  • ADEIA GUIDES INC.

Dates

Publication Date
2026-05-07
Application Date
2025-11-07

Claims (18)

  1. A method comprising: receiving, by a first device, a first audio profile associated with a first user, wherein the first audio profile comprises a first set of audio settings; receiving, by the first device, a second audio profile associated with a second user, wherein the second audio profile comprises a second set of audio settings; detecting a first position of the first user; detecting a second position of the second user; determining a first weight for a first frequency for a first speaker using the first position of the first user, the second position of the second user, the first set of audio settings, and the second set of audio settings; determining a second weight for the first frequency for a second speaker using the first position of the first user, the second position of the second user, the first set of audio settings, and the second set of audio settings; and outputting a piece of media content, wherein outputting the piece of media content comprises: outputting, by the first speaker, the first frequency at the first weight; and outputting, by the second speaker, the first frequency at the second weight.
  2. The method of claim 1, wherein: the first audio profile comprises the first set of audio settings and a third set of audio settings; the first set of audio settings are associated with a first ear of the first user and the third set of audio settings are associated with a second ear of the first user; the second audio profile comprises the second set of audio settings and a fourth set of audio settings; the second set of audio settings are associated with a first ear of the second user and the fourth set of audio settings are associated with a second ear of the second user; the first position of the first user, the second position of the second user, the first set of audio settings, the second set of audio settings, the third set of audio settings, and the fourth set of audio settings are used to determine the first weight for the first frequency for the first speaker; and the first position of the first user, the second position of the second user, the first set of audio settings, the second set of audio settings, the third set of audio settings, and the fourth set of audio settings are used to determine the second weight for the first frequency for the second speaker.
  3. The method of claim 2, wherein detecting the first position of the first user comprises: detecting a signal from a second device, wherein the second device is associated with the first user; and in response to detecting the signal from the second device, detecting the first position of the first user.
  4. The method of claim 2, wherein detecting the first position of the first user comprises: receiving an input from a sensor; and in response to receiving the input from the sensor, detecting the first position of the first user.
  5. The method of claim 4, wherein the sensor is a proximity sensor.
  6. The method of claim 2, wherein the first audio profile comprises an audiogram.
  7. The method of claim 2, wherein: the first set of audio settings comprise a first volume level for the first frequency; and the third set of audio settings comprise a second volume level for the first frequency.
  8. The method of claim 2, further comprising detecting a signal from a second device, wherein: the signal comprises position information; and the first position of the first user is detected using the position information.
  9. The method of claim 2, wherein outputting the piece of media content causes (i) the first user to hear the piece of media content according to the first set of audio settings and the second set of audio settings and (ii) the second user to hear the piece of media content according to the third set of audio settings and the fourth set of audio settings.
  10. An apparatus, comprising: control circuitry; and at least one memory including computer program code for one or more programs, the at least one memory and the computer program code configured to, with the control circuitry, cause the apparatus to perform at least the following: receive a first audio profile associated with a first user, wherein the first audio profile comprises a first set of audio settings; receive a second audio profile associated with a second user, wherein the second audio profile comprises a second set of audio settings; detect a first position of the first user; detect a second position of the second user; determine a first weight for a first frequency for a first speaker using the first position of the first user, the second position of the second user, the first set of audio settings, and the second set of audio settings; determine a second weight for the first frequency for a second speaker using the first position of the first user, the second position of the second user, the first set of audio settings, and the second set of audio settings; generate a first audio signal comprising the first frequency at the first weight; generate a second audio signal comprising the first frequency at the second weight; transmit the first audio signal to the first speaker; and transmit the second audio signal to the second speaker.
  11. The apparatus of claim 10, wherein: the first audio profile comprises the first set of audio settings and a third set of audio settings; the first set of audio settings are associated with a first ear of the first user and the third set of audio settings are associated with a second ear of the first user; the second audio profile comprises the second set of audio settings and a fourth set of audio settings; the second set of audio settings are associated with a first ear of the second user and the fourth set of audio settings are associated with a second ear of the second user; the first position of the first user, the second position of the second user, the first set of audio settings, the second set of audio settings, the third set of audio settings, and the fourth set of audio settings are used to determine the first weight for the first frequency for the first speaker; and the first position of the first user, the second position of the second user, the first set of audio settings, the second set of audio settings, the third set of audio settings, and the fourth set of audio settings are used to determine the second weight for the first frequency for the second speaker.
  12. The apparatus of claim 11, wherein the apparatus is further caused, when detecting the first position of the first user, to: detect a signal from a second device, wherein the second device is associated with the first user; and detect the first position of the first user in response to detecting the signal from the second device.
  13. The apparatus of claim 11, wherein the apparatus is further caused, when detecting the first position of the first user, to: receive an input from a sensor; and detect the first position of the first user in response to receiving the input from the sensor.
  14. The apparatus of claim 13, wherein the sensor is a proximity sensor.
  15. The apparatus of claim 11, wherein the first audio profile comprises an audiogram.
  16. The apparatus of claim 11, wherein: the first set of audio settings comprise a first volume level for the first frequency; and the third set of audio settings comprise a second volume level for the first frequency.
  17. The apparatus of claim 11, wherein the apparatus is further caused to detect a signal from a device, wherein: the signal comprises position information; and the first position of the first user is detected using the position information.
  18. A non-transitory computer-readable medium having instructions encoded thereon that, when executed by control circuitry, cause the control circuitry to: receive a first audio profile associated with a first user, wherein the first audio profile comprises a first set of audio settings; receive a second audio profile associated with a second user, wherein the second audio profile comprises a second set of audio settings; detect a first position of the first user; detect a second position of the second user; determine a first weight for a first frequency for a first speaker using the first position of the first user, the second position of the second user, the first set of audio settings, and the second set of audio settings; determine a second weight for the first frequency for a second speaker using the first position of the first user, the second position of the second user, the first set of audio settings, and the second set of audio settings; generate a first audio signal comprising the first frequency at the first weight; generate a second audio signal comprising the first frequency at the second weight; transmit the first audio signal to the first speaker; and transmit the second audio signal to the second speaker.
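Claims 2 and 11 describe audio profiles that hold a separate set of audio settings for each ear, with a volume level per frequency (claims 7 and 16). A minimal sketch of such a data structure follows; the class and field names are hypothetical, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class EarSettings:
    # Preferred volume level, keyed by frequency in Hz.
    # (Per-frequency volume settings per claims 7 and 16.)
    volume_by_freq: dict[int, float] = field(default_factory=dict)

@dataclass
class AudioProfile:
    # One profile per user, with distinct settings per ear
    # (per claims 2 and 11).
    user_id: str
    left: EarSettings
    right: EarSettings

    def target_level(self, ear: str, freq_hz: int, default: float = 0.0) -> float:
        """Return the preferred level for one ear at one frequency,
        falling back to a default when no preference is recorded."""
        settings = self.left if ear == "left" else self.right
        return settings.volume_by_freq.get(freq_hz, default)

profile = AudioProfile(
    "user-1",
    left=EarSettings({1000: 6.0, 4000: 12.0}),
    right=EarSettings({1000: 3.0}),
)
```

A weight-calculation routine would consume one `EarSettings` per ear per user, which is why the claims feed four sets of audio settings (two users times two ears) into each speaker-weight determination.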

Description

CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of U.S. patent application Ser. No. 18/200,433, filed May 22, 2023, the disclosure of which is hereby incorporated by reference herein in its entirety.

BACKGROUND

The present disclosure relates to the delivery of audio content, and in particular to techniques for delivering personalized audio content to multiple users.

SUMMARY

Many households have one or more media devices (e.g., televisions, laptops, desktops, tablets, smartphones, etc.) that use one or more speakers to output media content. Most media devices provide adjustable audio settings. For example, a user may be able to adjust the audio equalization settings so that certain frequencies are louder than other frequencies. Adjustable audio settings are particularly useful because different users may have different audio preferences. Some users may have unique audio preferences for each individual ear. In some cases, audio preferences may be based on the hearing capabilities of a user. For example, a first user may suffer from hearing loss and prefer audio settings with increased volume. Although some media devices provide adjustable audio settings, said media devices are limited to outputting audio according to a single set of audio settings. Accordingly, a media device is only able to output audio according to the audio preferences of a single user despite multiple users (who have their own unique audio preferences) consuming the same audio. This situation often leads to a poor user experience. For example, a first user may prefer a louder volume and a second user may prefer a quieter volume. If the media device uses the audio settings for the first user, then the audio may be unpleasantly loud for the second user. If the media device uses the audio settings for the second user, then the first user may be unable to hear the audio.
In view of these deficiencies, there exists a need for improved systems and methods for generating personalized audio for different users consuming the same piece of media content. Accordingly, techniques are disclosed herein for providing personalized audio settings to different users listening to the same piece of media content. In an embodiment, given a set of speakers with known positions, a set of users with known positions and orientations, and known audio preferences on a per-frequency and per-ear basis for each of the users, the disclosed techniques enable a determination of output modulation, as a function of frequency, for each of the set of speakers that results in a desired perceived amplitude or volume for each of the particular frequencies for each of the users (or even for each ear of a single user). In an example implementation, a first device (e.g., a television) may use a plurality of speakers to output audio. To personalize the outputted audio, the first device may receive a first audio profile associated with a first user and a second audio profile associated with a second user. The audio profiles may comprise one or more preferences. For example, the first audio profile may comprise a first frequency preference (e.g., perceived volume at a first level for a frequency) and the second audio profile may comprise a second frequency preference (e.g., perceived volume at a second level for the frequency). In some embodiments, allowing users to select frequency preferences provides an improved user experience when consuming media content. Different users (and even different ears of a single user) may be more or less sensitive to certain frequencies. For example, some users may struggle to hear certain frequencies at a low volume due to hearing impairments, old age, genetic differences, etc.
Accordingly, these users can select preferences to increase the perceived volumes for frequencies that the users struggle to hear, allowing the users to more easily consume the piece of media content. The first device may receive the audio profiles from the users. For example, the first user and the second user may input their respective audio profiles into a user interface of the first device. In another example, the first device may receive the audio profiles from devices (e.g., smartphones) associated with the users. The audio profiles may comprise audio settings (e.g., volume preferences for one or more frequencies) associated with the corresponding user. In an embodiment, each set of audio settings corresponds to a different audiogram for a user. An audiogram may be developed on a per-user or a per-ear basis. An audiogram may be a graph indicating the softest sounds a person can hear at different frequencies. In an embodiment, a horizontal axis (x-axis) of an audiogram represents frequency (pitch) from lowest to highest. The lowest frequency tested may be 250 Hertz (Hz), and the highest frequency tested may be 8000 Hz, for example. In an embodiment, a vertical axis (y-axis) of the audiogram may represent the intensity (loudness) of sound in decibels.
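As a sketch of how an audiogram might feed into per-frequency audio settings: an audiogram records the quietest audible level (in dB HL) at each tested frequency, and one plausible mapping, not prescribed by the patent, is the classic "half-gain" heuristic from audiology, which boosts each frequency by half its threshold. The threshold values below are illustrative.

```python
# Hearing thresholds (dB HL) at standard audiometric frequencies,
# as they might be read off an audiogram. Values are illustrative.
audiogram = {250: 10, 500: 15, 1000: 20, 2000: 35, 4000: 50, 8000: 60}

def per_frequency_gain(audiogram: dict[int, float], rule: float = 0.5) -> dict[int, float]:
    """Derive a per-frequency boost (dB) from an audiogram using the
    half-gain heuristic (gain = threshold * 0.5). The patent does not
    prescribe this rule; it is one plausible mapping from an audiogram
    to a set of audio settings."""
    return {freq: threshold * rule for freq, threshold in audiogram.items()}

gains = per_frequency_gain(audiogram)
```

The resulting per-frequency gains could serve as one ear's set of audio settings, which the system would then combine with user positions to compute speaker weights.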