EP-4736038-A1 - USING PERSONAL ATTRIBUTES TO UNIQUELY IDENTIFY INDIVIDUALS


Abstract

A method (400) includes receiving personal attribute data (110) characterizing one or more personal attributes of a particular person (202), obtaining an identity (206) of the particular person, extracting, from the personal attribute data, a reference vector (204) for the particular person, and storing, in an identifiable persons datastore (200), the reference vector for the particular person and the identity of the particular person. The method also includes receiving additional personal attribute data characterizing one or more personal attributes of an unrecognizable individual (104) and performing person identification on the additional personal attribute data to identify the unrecognizable individual by extracting an evaluation vector (312) for the unrecognizable individual, and based on determining that the evaluation vector matches the reference vector, identifying the unrecognizable individual as the particular person. The method also includes presenting an identification cue (120) that conveys the identity of the unrecognizable individual as the particular person.
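The matching scheme described in the abstract — enrolling a reference vector with an identity, then comparing a later evaluation vector against stored references — can be illustrated with a minimal sketch. The patent does not disclose an implementation; the class, function names, cosine-similarity measure, and threshold value below are all illustrative assumptions:

```python
from typing import List, Optional, Tuple

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two embedding vectors (1.0 = identical direction).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


class IdentifiablePersonsDatastore:
    """Maps reference vectors (204) to identities (206), per the abstract."""

    def __init__(self, match_threshold: float = 0.8) -> None:
        # The threshold is an assumed value; the disclosure does not specify one.
        self.match_threshold = match_threshold
        self._entries: List[Tuple[np.ndarray, str]] = []

    def enroll(self, reference_vector: np.ndarray, identity: str) -> None:
        # Store the (reference vector, identity) pair for a particular person.
        self._entries.append((reference_vector, identity))

    def identify(self, evaluation_vector: np.ndarray) -> Optional[str]:
        # Compare the evaluation vector (312) against every stored reference
        # vector and return the best-matching identity above the threshold.
        best_identity: Optional[str] = None
        best_score = -1.0
        for reference_vector, identity in self._entries:
            score = cosine_similarity(evaluation_vector, reference_vector)
            if score > best_score:
                best_identity, best_score = identity, score
        return best_identity if best_score >= self.match_threshold else None
```

In this sketch, `enroll` corresponds to the storing step for a particular person, and `identify` returns an identity only when the evaluation vector clears the similarity threshold, otherwise `None` (no identification cue would be presented).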

Inventors

  • KLEIN, Daniel V.
  • SEDOURAM, Ramprasad

Assignees

  • Google LLC

Dates

Publication Date
2026-05-06
Application Date
2024-07-24

Claims (20)

  1. A computer-implemented method (400) executed on data processing hardware (510) that causes the data processing hardware (510) to perform operations comprising: for a particular person (202): receiving personal attribute data (110) characterizing one or more personal attributes of the particular person (202), the personal attribute data (110) captured by a device associated with a user (102) while the user (102) is interacting with the particular person (202); obtaining an identity (206) of the particular person (202); extracting, from the personal attribute data (110), a reference vector (204) for the particular person (202); and storing, in an identifiable persons datastore (200), the reference vector (204) for the particular person (202) and the identity (206) of the particular person (202); receiving additional personal attribute data (110) characterizing one or more personal attributes of an unrecognizable individual (104) who the user (102) is unable to recognize, the additional personal attribute data (110) obtained by the device while the user (102) is interacting with the unrecognizable individual (104); performing person identification on the additional personal attribute data (110) to identify the unrecognizable individual (104) by: extracting, from the additional personal attribute data (110), an evaluation vector (312) for the unrecognizable individual (104); determining that the evaluation vector (312) for the unrecognizable individual (104) matches the reference vector (204) stored in the identifiable persons datastore (200); and based on determining that the evaluation vector (312) for the unrecognizable individual (104) matches the reference vector (204) stored in the identifiable persons datastore (200), identifying the unrecognizable individual (104) as the particular person (202); and presenting, while the user (102) is interacting with the unrecognizable individual (104), an identification cue (120) to the user (102), the identification cue (120) conveying the identity (206) of the unrecognizable individual (104) as the particular person (202).
  2. The computer-implemented method (400) of claim 1, wherein the personal attribute data (110) comprises at least one of: audio data (110a) characterizing an utterance spoken by the particular person (202), the audio data (110a) captured by an array of one or more microphones (16a) in communication with the device; or image data (110b) characterizing a face of the particular person (202), the image data (110b) captured by an image capture device (20) in communication with the device.
  3. The computer-implemented method (400) of claim 1 or 2, wherein: receiving the personal attribute data (110) comprises receiving audio data (110a) characterizing an utterance spoken by the particular person (202) during the interaction the user (102) is having with the particular person (202); extracting the reference vector (204) for the particular person (202) comprises extracting, from the audio data (110a) characterizing the utterance spoken by the particular person (202), the reference vector (204) representing characteristics of a voice of the particular person (202); receiving the additional personal attribute data (110) comprises receiving additional audio data (110a) characterizing an utterance spoken by the unrecognizable individual (104) during the interaction the user (102) is having with the unrecognizable individual (104); and extracting the evaluation vector (312) for the unrecognizable individual (104) comprises extracting, from the additional audio data (110a) corresponding to the utterance spoken by the unrecognizable individual (104), the evaluation vector (312) representing characteristics of a voice of the unrecognizable individual (104).
  4. The computer-implemented method (400) of any of claims 1-3, wherein: receiving the personal attribute data (110) comprises receiving image data (110b) corresponding to a face of the particular person (202) during the interaction the user (102) is having with the particular person (202); extracting the reference vector (204) for the particular person (202) comprises extracting, from the image data (110b) characterizing the face of the particular person (202), the reference vector (204) representing facial features of the particular person (202); receiving the additional personal attribute data (110) comprises receiving additional image data (110b) characterizing a face of the unrecognizable individual (104) during the interaction the user (102) is having with the unrecognizable individual (104); and extracting the evaluation vector (312) for the unrecognizable individual (104) comprises extracting, from the additional image data (110b) characterizing the face of the unrecognizable individual (104), the evaluation vector (312) representing facial features of the unrecognizable individual (104).
  5. The computer-implemented method (400) of any of claims 1-4, wherein obtaining the identity (206) of the particular person (202) comprises receiving a user input from the user (102) that conveys the identity (206) of the particular person (202), the user (102) providing the user input after the user (102) finishes the interaction with the particular person (202).
  6. The computer-implemented method (400) of any of claims 1-5, wherein obtaining the identity (206) of the particular person (202) comprises: receiving audio data (110a) characterizing an utterance spoken by the particular person (202) during the interaction the user (102) is having with the particular person (202); performing speech recognition on the audio data (110a) to obtain a transcription of the utterance spoken by the particular person (202); and processing the transcription of the utterance to ascertain the identity (206) of the particular person (202).
  7. The computer-implemented method (400) of any of claims 1-6, wherein obtaining the identity (206) of the particular person (202) comprises receiving, from another computing device associated with the particular person (202), metadata indicating the identity (206) of the particular person (202).
  8. The computer-implemented method (400) of any of claims 1-7, wherein the operations further comprise: receiving a trigger input from the user (102); and performing the person identification on the additional personal attribute data (110) to identify the unrecognizable individual (104) in response to receiving the trigger input.
  9. The computer-implemented method (400) of claim 8, wherein the trigger input comprises at least one of a voice-based command or a gesture captured by the device.
  10. The computer-implemented method (400) of any of claims 1-9, wherein presenting the identification cue (120) to the user (102) comprises providing, for audible output from the device or from an audio output device (16b) in communication with the device, an audio message conveying the identity (206) of the unrecognizable individual (104).
  11. The computer-implemented method (400) of any of claims 1-10, wherein presenting the identification cue (120) to the user (102) comprises providing, for display on a screen in communication with the device, a textual message conveying the identity (206) of the unrecognizable individual (104).
  12. A system (100) comprising: data processing hardware (510); and memory hardware (520) in communication with the data processing hardware (510) and storing instructions that, when executed on the data processing hardware (510), cause the data processing hardware (510) to perform operations comprising: for a particular person (202): receiving personal attribute data (110) characterizing one or more personal attributes of the particular person (202), the personal attribute data (110) captured by a device associated with a user (102) while the user (102) is interacting with the particular person (202); obtaining an identity (206) of the particular person (202); extracting, from the personal attribute data (110), a reference vector (204) for the particular person (202); and storing, in an identifiable persons datastore (200), the reference vector (204) for the particular person (202) and the identity (206) of the particular person (202); receiving additional personal attribute data (110) characterizing one or more personal attributes of an unrecognizable individual (104) who the user (102) is unable to recognize, the additional personal attribute data (110) obtained by the device while the user (102) is interacting with the unrecognizable individual (104); performing person identification on the additional personal attribute data (110) to identify the unrecognizable individual (104) by: extracting, from the additional personal attribute data (110), an evaluation vector (312) for the unrecognizable individual (104); determining that the evaluation vector (312) for the unrecognizable individual (104) matches the reference vector (204) stored in the identifiable persons datastore (200); and based on determining that the evaluation vector (312) for the unrecognizable individual (104) matches the reference vector (204) stored in the identifiable persons datastore (200), identifying the unrecognizable individual (104) as the particular person (202); and presenting, while the user (102) is interacting with the unrecognizable individual (104), an identification cue (120) to the user (102), the identification cue (120) conveying the identity (206) of the unrecognizable individual (104) as the particular person (202).
  13. The system (100) of claim 12, wherein the personal attribute data (110) comprises at least one of: audio data (110a) characterizing an utterance spoken by the particular person (202), the audio data (110a) captured by an array of one or more microphones (16a) in communication with the device; or image data (110b) characterizing a face of the particular person (202), the image data (110b) captured by an image capture device (20) in communication with the device.
  14. The system (100) of claim 12 or 13, wherein: receiving the personal attribute data (110) comprises receiving audio data (110a) characterizing an utterance spoken by the particular person (202) during the interaction the user (102) is having with the particular person (202); extracting the reference vector (204) for the particular person (202) comprises extracting, from the audio data (110a) characterizing the utterance spoken by the particular person (202), the reference vector (204) representing characteristics of a voice of the particular person (202); receiving the additional personal attribute data (110) comprises receiving additional audio data (110a) characterizing an utterance spoken by the unrecognizable individual (104) during the interaction the user (102) is having with the unrecognizable individual (104); and extracting the evaluation vector (312) for the unrecognizable individual (104) comprises extracting, from the additional audio data (110a) corresponding to the utterance spoken by the unrecognizable individual (104), the evaluation vector (312) representing characteristics of a voice of the unrecognizable individual (104).
  15. The system (100) of any of claims 12-14, wherein: receiving the personal attribute data (110) comprises receiving image data (110b) corresponding to a face of the particular person (202) during the interaction the user (102) is having with the particular person (202); extracting the reference vector (204) for the particular person (202) comprises extracting, from the image data (110b) characterizing the face of the particular person (202), the reference vector (204) representing facial features of the particular person (202); receiving the additional personal attribute data (110) comprises receiving additional image data (110b) characterizing a face of the unrecognizable individual (104) during the interaction the user (102) is having with the unrecognizable individual (104); and extracting the evaluation vector (312) for the unrecognizable individual (104) comprises extracting, from the additional image data (110b) characterizing the face of the unrecognizable individual (104), the evaluation vector (312) representing facial features of the unrecognizable individual (104).
  16. The system (100) of any of claims 12-15, wherein obtaining the identity (206) of the particular person (202) comprises receiving a user input from the user (102) that conveys the identity (206) of the particular person (202), the user (102) providing the user input after the user (102) finishes the interaction with the particular person (202).
  17. The system (100) of any of claims 12-16, wherein obtaining the identity (206) of the particular person (202) comprises: receiving audio data (110a) characterizing an utterance spoken by the particular person (202) during the interaction the user (102) is having with the particular person (202); performing speech recognition on the audio data (110a) to obtain a transcription of the utterance spoken by the particular person (202); and processing the transcription of the utterance to ascertain the identity (206) of the particular person (202).
  18. The system (100) of any of claims 12-17, wherein obtaining the identity (206) of the particular person (202) comprises receiving, from another computing device associated with the particular person (202), metadata indicating the identity (206) of the particular person (202).
  19. The system (100) of any of claims 12-18, wherein the operations further comprise: receiving a trigger input from the user (102); and performing the person identification on the additional personal attribute data (110) to identify the unrecognizable individual (104) in response to receiving the trigger input.
  20. The system (100) of claim 19, wherein the trigger input comprises at least one of a voice-based command or a gesture captured by the device.
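The trigger-driven flow of claims 8-11 (and 19-20) can be sketched schematically. Nothing below comes from the disclosure: the function, the trigger-input encoding, and the cue wording are all hypothetical, and the identification step is abstracted behind a callback:

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class IdentificationCue:
    # Per claims 10-11, the cue may be an audible message or a textual
    # message on a screen; this sketch carries only the message text.
    message: str


def on_trigger(trigger_input: str,
               run_person_identification: Callable[[], Optional[str]],
               present_cue: Callable[[IdentificationCue], None]) -> bool:
    """Perform person identification only in response to a trigger input
    (claim 8), which is a voice-based command or a gesture (claim 9)."""
    if trigger_input not in ("voice_command", "gesture"):
        return False
    identity = run_person_identification()
    if identity is None:
        return False  # no stored reference vector matched
    present_cue(IdentificationCue(message=f"You are speaking with {identity}."))
    return True
```

The callback shape mirrors the claim structure: identification runs only after the trigger, and the cue is presented only when a match was found.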

Description

Using Personal Attributes to Uniquely Identify Individuals

TECHNICAL FIELD

[0001] This disclosure relates to using personal attributes to uniquely identify individuals.

BACKGROUND

[0002] An important aspect of human-to-human interactions is the ability of a person to identify an individual with whom the person is interacting. By identifying the individual, the person may better comprehend what the individual is communicating and/or communicate more effectively with the individual. A person may identify an individual using their auditory and/or visual faculties.

SUMMARY

[0003] One aspect of the disclosure provides a computer-implemented method executed on data processing hardware that causes the data processing hardware to perform operations including, for a particular person: receiving personal attribute data characterizing one or more personal attributes of the particular person, the personal attribute data captured by a device associated with a user while the user is interacting with the particular person; obtaining an identity of the particular person; extracting, from the personal attribute data, a reference vector for the particular person; and storing, in an identifiable persons datastore, the reference vector for the particular person and the identity of the particular person. The operations include receiving additional personal attribute data characterizing one or more personal attributes of an unrecognizable individual who the user is unable to recognize, the additional personal attribute data obtained by the device while the user is interacting with the unrecognizable individual.
The operations include performing person identification on the additional personal attribute data to identify the unrecognizable individual by: extracting, from the additional personal attribute data, an evaluation vector for the unrecognizable individual; determining that the evaluation vector for the unrecognizable individual matches the reference vector stored in the identifiable persons datastore; and based on determining that the evaluation vector for the unrecognizable individual matches the reference vector stored in the identifiable persons datastore, identifying the unrecognizable individual as the particular person. The operations include presenting, while the user is interacting with the unrecognizable individual, an identification cue to the user, the identification cue conveying the identity of the unrecognizable individual as the particular person.

[0004] Implementations of the disclosure may include one or more of the following optional features. In some implementations, the personal attribute data includes at least one of audio data characterizing an utterance spoken by the particular person, the audio data captured by an array of one or more microphones in communication with the device, or image data characterizing a face or other identifiable aspect (e.g., a tattoo) of the particular person, the image data captured by an image capture device in communication with the device.
In some examples, receiving the personal attribute data includes receiving audio data characterizing an utterance spoken by the particular person during the interaction the user is having with the particular person; extracting the reference vector for the particular person includes extracting, from the audio data characterizing the utterance spoken by the particular person, the reference vector representing characteristics of a voice of the particular person; receiving the additional personal attribute data includes receiving additional audio data characterizing an utterance spoken by the unrecognizable individual during the interaction the user is having with the unrecognizable individual; and extracting the evaluation vector for the unrecognizable individual includes extracting, from the additional audio data corresponding to the utterance spoken by the unrecognizable individual, the evaluation vector representing characteristics of a voice of the unrecognizable individual. In other examples, receiving the personal attribute data includes receiving image data corresponding to a face of the particular person during the interaction the user is having with the particular person; extracting the reference vector for the particular person includes extracting, from the image data characterizing the face of the particular person, the reference vector representing facial features of the particular person; receiving the additional personal attribute data includes receiving additional image data characterizing a face of the unrecognizable individual during the interaction the user is having with the unrecognizable individual; and extracting the evaluation vector for the unrecognizable individual includes extracting, from the additional image data characterizing the face of the unrecognizable individual, the evaluation vector representing facial features of the unrecognizable individual. [0005] In some implementations, obtaining the ide