US-12627658-B2 - Method and apparatus for creating encoded data and use of same for identity verification
Abstract
A method and system for establishing an association between a document and a person, or for verifying identity, comprising capturing images of the person's face and processing the images to generate face feature vector data. The face feature vector data cannot be reverse processed to generate the image of the person's face. The method also captures images of the graphic on the item and processes those images to extract graphic-derived feature vector data that is stored inside the graphic code. The graphic-derived feature vector data likewise cannot be reverse processed to generate the image of the person's face. The face-derived feature vector data is then compared to the graphic-derived feature vector data to develop a similarity value, and the similarity value is compared to a first threshold value to develop a validation indicator representing the likelihood that the graphic and item are associated with the person.
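The similarity-and-threshold comparison described in the abstract can be sketched as follows. This is a minimal illustration only: the cosine metric, the plain-list vector format, and the 0.9 threshold are assumptions chosen for the sketch, not values specified by the patent.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1.0, 1.0].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def validation_indicator(face_vec, graphic_vec, threshold=0.9):
    # Compare face-derived data to graphic-derived data, then compare the
    # resulting similarity value to a threshold to produce the indicator.
    similarity = cosine_similarity(face_vec, graphic_vec)
    return similarity >= threshold, similarity
```

In use, `validation_indicator` returns both the boolean indicator and the raw similarity value, so a caller can log or display the score alongside the pass/fail result.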
Inventors
- Kevin Alan Tussy
Assignees
- FACETEC, INC.
Dates
- Publication Date
- 20260512
- Application Date
- 20240404
Claims (19)
- 1 . A method of verifying that a graphic, located on an item, is associated with a person, the method comprising: capturing one or more images of the graphic on the item; processing at least one of the one or more images of the graphic to extract graphic-derived face data stored in the graphic, wherein the graphic-derived face data was generated using one-way conversion processing of at least one image of a person's face; capturing at least one first image of a face of the person taken with a camera at a first location which is a first distance from the person; processing the at least one first image or a portion thereof to create first data; changing the distance between the face of the person and the camera from the first distance to a second distance, wherein the first distance is different than the second distance, and capturing at least one second image of the face of the person; processing the at least one second image or a portion thereof to create second data; comparing the first data to the second data to determine whether expected differences, resulting from the at least one first image and the at least one second image being captured at different distances from the person, exist between the first data and the second data which indicate three-dimensionality of the person; if the expected differences indicate three-dimensionality of the person, then: processing at least one of the at least one first image or the at least one second image of the face of the person to generate face-derived data representing the face of the person, the face-derived data being generated using one-way conversion processing; comparing the face-derived data to the graphic-derived face data to develop a similarity value; and based on the similarity value, generating an indicator representing the likelihood that the graphic is associated with the person.
- 2 . The method of claim 1 wherein the item is one of the following: driver's license, identification card, passport, diploma, education transcript, credit card, debit card, airline ticket, shipping label, event ticket, voting ballot, or title of ownership.
- 3 . The method of claim 1 wherein the face-derived data comprises a string of alphanumeric values, and the capturing of the one or more images of the graphic on the item may occur before or after the capturing of the at least one first image and the capturing of the at least one second image.
- 4 . The method of claim 1 wherein the item includes a photograph of the person.
- 5 . The method of claim 4 wherein the method further comprises: capturing one or more images of the photograph on the item; processing at least one of the one or more images of the photograph to generate photograph-derived face data representing the face of the person in the photograph, the photograph-derived face data being generated using the one-way conversion processing; comparing the photograph-derived face data to the face-derived data, to the graphic-derived face data, or to both to develop a second similarity value; and based on the second similarity value, generating a second indicator representing the likelihood that the graphic is associated with the person in the photograph on the item.
- 6 . The method of claim 1 wherein the face-derived data is created by a trusted entity from a trusted face photo of the person.
- 7 . The method of claim 1 wherein the similarity value must be the same as or greater than a threshold value for the item to be associated with the person.
- 8 . The method of claim 1 further comprising transmitting the face-derived data, the graphic-derived data, or both to a trusted entity for comparing by the trusted entity of the face-derived data, the graphic-derived data, or both to trusted face data stored at the trusted entity.
- 9 . The method of claim 1 wherein the one-way conversion processing comprises using a hash-type function configured to prevent reverse-processing of the derived data to original data or an image of the person.
- 10 . A system for verifying an association between a graphic and a person, the graphic located on a document, the system comprising: at least one camera configured to capture images of a face of the person and a graphic image of the graphic located on the document; at least one processor configured to execute machine-readable code stored in a non-transitory state; and a memory storing the machine-readable code in the non-transitory state, the machine-readable code configured to: capture at least one first image of a face of the person taken with the at least one camera at a first location, which is a first distance from the face of the person; process the at least one first image or a portion thereof to create first data; capture at least one second image of the face of the person taken with the at least one camera at a second distance from the face of the person, the second distance being different than the first distance; process the at least one second image or a portion thereof to create second data; compare the first data to the second data to determine whether expected differences exist between the first data and the second data which indicate liveness of the person; responsive to the comparing indicating liveness of the person, the machine-readable code further configured to: process at least one of the at least one first image or the at least one second image of the face of the person to generate face-derived data representing the face of the person, the face-derived data being generated using a one-way conversion process; capture one or more images of the graphic on the document; process at least one of the one or more images of the graphic to extract graphic-derived face data stored in the graphic, wherein the graphic-derived face data was generated using the one-way conversion process; compare the face-derived data to the graphic-derived face data to develop a similarity value; and based on the similarity value, generate an indicator representing the likelihood that the graphic is associated with the person.
- 11 . The system of claim 10 wherein the document comprises one of the following: driver's license, title of ownership, school transcript, transaction check, identification card, event ticket or voting ballot.
- 12 . The system of claim 10 wherein the face-derived code and the graphic-derived code comprise a string of alpha-numeric values and the one-way conversion process used to create the face-derived code and the graphic-derived code prevents the face-derived code or the graphic-derived code from being reverse processed to create an image of the face of the person.
- 13 . The system of claim 10 wherein the machine-readable code is further configured to: capture one or more images of a photograph of the person on the document; process at least one of the one or more images of the photograph to generate photograph-derived face data representing the face of the person in the photograph, the photograph-derived face data being generated using the one-way conversion process; compare the photograph-derived face data to the face-derived data, to the graphic-derived face data, or to both to develop a second similarity value; and based on the second similarity value, generate a second indicator representing the likelihood that the graphic is associated with the person in the photograph on the document.
- 14 . The system of claim 10 wherein the machine-readable code is further configured to: transmit the face-derived code, the graphic-derived code, or both to a trusted entity; and the trusted entity comparing the face-derived code, the graphic-derived code, or both to trusted face codes stored or generated at the trusted entity.
- 15 . A method for verifying an association of an identification document with a person presenting the identification document comprising: capturing one or more images of a graphic on the identification document, the graphic comprising a two-dimensional machine readable code; processing at least one of the one or more images of the graphic to extract a graphic-derived code, the graphic-derived code comprising data representing an image of the person's face, wherein the graphic-derived code is generated using a one-way conversion process; transmitting the graphic-derived code to a trusted entity; at the trusted entity, processing the graphic-derived code to search for a matching trusted code, the trusted code comprising data previously generated from one or more images of a person's face using the one-way conversion process, wherein the images of the person's face are captured and processed as follows: capturing at least one first image of a face of the person taken with a camera at a first location which is a first distance from the person; processing the at least one first image or a portion thereof to create first data; changing the distance between the face of the person and the camera from the first distance to a second distance, wherein the first distance is different than the second distance, and capturing at least one second image of the face of the person; processing the at least one second image or a portion thereof to create second data; comparing the first data to the second data to determine whether expected differences, resulting from the at least one first image and the at least one second image being captured at different distances from the person, exist between the first data and the second data to evaluate three-dimensionality, liveness, or both of the person; comparing the graphic-derived code to the trusted code to develop a similarity value; and providing the similarity value to a requesting entity, the requesting entity comprising an entity that is requesting verification of an association of the identification document with the person presenting the identification document.
- 16 . The method of claim 15 wherein the identification document is a driver's license, passport, or an identification card.
- 17 . The method of claim 15 wherein the trusted entity is the entity that generated the identification document.
- 18 . The method of claim 15 wherein the identification document includes a photograph of a face, and the method further comprises: capturing one or more images of the photograph of the face on the identification document; processing at least one of the one or more images of the photograph of the face on the identification document to generate an identification photograph-derived code, the identification photograph-derived code generated using the one-way conversion process; comparing the identification photograph-derived code to the trusted code to develop a second similarity value; transmitting the second similarity value to the requesting entity.
- 19 . The method of claim 15 wherein the graphic code includes information from text contained on the identification document or from an identity record stored on an identity issuer's database.
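The two-distance capture recited in claims 1, 10, and 15 checks for "expected differences" between the near and far captures that indicate three-dimensionality or liveness. The sketch below illustrates one hypothetical way such a check could work; the geometry measurement (a ratio of inter-eye distance to face width) and the threshold band are illustrative assumptions, not the claimed implementation.

```python
def liveness_from_distance_change(first_ratio, second_ratio,
                                  min_change=0.10, max_change=0.60):
    """Compare data derived from a near capture and a far capture of a face.

    first_ratio and second_ratio stand in for the claimed "first data" and
    "second data": hypothetical geometry measurements (e.g. inter-eye distance
    divided by face-outline width) that shift with perspective distortion when
    a real three-dimensional face is imaged at two different distances. A flat
    photograph re-imaged at two distances shows almost no such shift, and an
    implausibly large shift suggests different faces or tampering.
    """
    change = abs(first_ratio - second_ratio) / max(first_ratio, second_ratio)
    # Expected differences must fall inside the plausible band.
    return min_change <= change <= max_change
```

Only when this check passes would the flow proceed to generating the face-derived data and comparing it against the graphic-derived data, as the claims recite.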
Description
BACKGROUND

1. Field of the Invention

The disclosed embodiments relate to biometric security. More specifically, the disclosed embodiments relate to facial recognition authentication systems.

2. Related Art

With the growth of personal electronic devices that may be used to access many different user accounts, and the increasing threat of identity theft and other security issues, there is a growing need for ways to securely access user accounts via electronic devices. Account holders are thus often required to have longer passwords that meet various criteria, such as using a mixture of capital and lowercase letters, numbers, and other symbols. With smaller electronic devices, such as smart phones, smart watches, "Internet of Things" ("IoT") devices, and the like, it may become cumbersome to type such long passwords into the device each time access to the account is desired, and if another individual learns the user's password, the user can be impersonated without actually being present. In some instances, users may even decide to deactivate such cumbersome security measures on their devices due to their inconvenience. Thus, users of such devices may prefer other methods of secure access to their user accounts. One such method is biometrics. For example, an electronic device may have a dedicated sensor that may scan a user's fingerprint to determine that the person requesting access to a device or an account is authorized. However, such fingerprint systems on small electronic devices are often considered unreliable and unsecure. In addition, facial recognition is generally known and may be used in a variety of contexts. Two-dimensional facial recognition is commonly used to tag people in images on social networks or in photo editing software.
Facial recognition software, however, has not been widely implemented on its own to securely authenticate users attempting to gain access to an account because it is not considered secure enough. For example, two-dimensional facial recognition is considered unsecure because faces may be photographed or recorded, and the resulting prints or video displays showing images of the user may be used to trick the system. Accordingly, there is a need for a reliable, cost-effective, and convenient method to authenticate users attempting to log in to, for example, a user account.

SUMMARY

The disclosed embodiments have been developed in light of the above, and aspects of the invention may include a method for enrolling and authenticating a user in an authentication system via the user's mobile computing device. The user's device includes a camera. In one embodiment, the user may enroll in the system by providing enrollment images of the user's face. The enrollment images are taken by the camera of the mobile device as the user moves the mobile device to different positions relative to the user's head. The user may thus obtain enrollment images showing the user's face from different angles and distances. The system may also utilize one or more movement sensors of the mobile device to determine an enrollment movement path that the phone takes during the imaging. At least one image is processed to detect the user's face within the image and to obtain biometric information from the user's face in the image. The image processing may be done on the user's mobile device or at a remote device, such as an authentication server or a user account server. The enrollment information (the enrollment biometrics, movement, and other information) may be stored on the mobile device, the remote device, or both.
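The enrollment flow described above collects per-image biometric data plus a device movement path and stores them together. A minimal sketch of such an enrollment record follows; the `EnrollmentRecord` type and the `extract_features` callable are hypothetical names invented for this illustration, and the actual feature extraction is outside its scope.

```python
from dataclasses import dataclass

@dataclass
class EnrollmentRecord:
    """Hypothetical container for the enrollment information described above."""
    user_id: str
    biometric_vectors: list  # one one-way feature vector per enrollment image
    movement_path: list      # motion-sensor samples recorded during imaging

def enroll(user_id, images, sensor_samples, extract_features):
    # extract_features is an assumed callable mapping an enrollment image to a
    # one-way feature vector; its implementation is not part of this sketch.
    vectors = [extract_features(img) for img in images]
    return EnrollmentRecord(user_id, vectors, list(sensor_samples))
```

Whether such a record lives on the mobile device, on an authentication server, or on both is a deployment choice, as the summary notes.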
The system may then authenticate a user by the user providing at least one authentication image via the camera of the mobile device while the user moves the mobile device to different positions relative to the user's head. The authentication images are processed for face detection and facial biometric information. Path parameters may also be obtained during the imaging of the authentication images (authentication movement). The authentication information (authentication biometrics, movement, and other information) is then compared with the enrollment information to determine whether the user should be authenticated or denied. Image processing and comparison may be conducted on the user's mobile device or may be conducted remotely. In some embodiments, multiple enrollment profiles may be created by a user to provide further security. For example, a user may create an enrollment wearing accessories such as a hat or glasses, or while making a funny face. In further embodiments, the user's enrollment information may be linked to the user's email address, phone number, or other unique identifier. The authentication system may include feedback displayed on the mobile device to aid a user in learning and authenticating with the system. For instance, an accuracy meter may provide feedback on a match rate of the authentication biometrics or movement. A movement meter may pr