
EP-3825905-B1 - METHOD AND APPARATUS WITH LIVENESS TEST AND/OR BIOMETRIC AUTHENTICATION, COMPUTER PROGRAM THEREFOR AND MEDIUM STORING THE SAME

EP 3825905 B1

Inventors

  • KWAK, Youngjun
  • HAN, Seungju
  • KO, Minsu
  • KIM, Youngsung
  • KIM, Heewon
  • SONG, Ju Hwan
  • YOO, Byungin
  • RHEE, Seon Min
  • LEE, Yong-il
  • CHOI, Jiho

Dates

Publication Date
2026-05-06
Application Date
2020-10-14

Claims (14)

  1. A processor-implemented liveness-test method, comprising: detecting (420) a face region (620) in an infrared, 'IR', image (610) including an object and detecting feature points in the face region (630); generating (430) a preprocessed infrared, 'IR', image by performing (430) first preprocessing based on the IR image; generating a preprocessed depth image based on a depth image (910) that includes the object; and determining (490) whether the object is a genuine object based on the preprocessed IR image and the preprocessed depth image, wherein said generating of a preprocessed depth image comprises determining a face region (920) and feature points (925) of the object in the depth image by mapping, to the depth image, the face region and feature points detected in the IR image, and performing (470) second preprocessing on the depth image; said second preprocessing comprising: determining a correspondence between the face region and the feature points of the depth image and a predefined or predetermined face region and reference points (935); determining (940) a transformation matrix based on a positional relationship between the determined feature points of the depth image and the respectively corresponding reference points; and transforming a pose of the object through an application (950) of the determined transformation matrix to the depth image or the face region of the depth image.
  2. The method of claim 1, wherein the determining of whether the object is the genuine object comprises determining whether the object is an animate object.
  3. The method of claim 1 or 2, wherein the determining of whether the object is a genuine object comprises: determining a first liveness score by inputting the preprocessed IR image as an input to a neural network-based first liveness test model, determining a second liveness score by inputting the preprocessed depth image to a neural network-based second liveness test model, and determining whether the object is a genuine object based on the first liveness score and the second liveness score, or determining a third liveness score by inputting the preprocessed IR image and the preprocessed depth image to a third neural network-based liveness test model, and determining whether the object is a genuine object based on the third liveness score.
  4. A processor-implemented biometric authentication method, comprising: detecting a face region in an infrared, 'IR', image including an object and detecting feature points in the face region (630); generating a preprocessed infrared, 'IR', image by performing first preprocessing based on the IR image; generating a preprocessed depth image based on a depth image that includes the object; and determining whether authentication of the object is successful based on the preprocessed IR image and the preprocessed depth image, wherein said generating of a preprocessed depth image comprises determining a face region and feature points of the object in the depth image by mapping, to the depth image, the face region and feature points detected in the IR image, and performing second preprocessing on the depth image; said second preprocessing comprising: determining a correspondence between the face region and the feature points of the depth image and a predefined or predetermined face region and reference points; determining a transformation matrix based on a positional relationship between the determined feature points of the depth image and the respectively corresponding reference points; and transforming a pose of the object through an application of the determined transformation matrix to the depth image or the face region of the depth image.
  5. The method of claim 4, wherein the determining whether authentication of the object is successful comprises: determining a first similarity between a first feature extracted from the preprocessed IR image and a first enrolled feature of a valid user, determining a second similarity between a second feature extracted from the preprocessed depth image and a second enrolled feature of the valid user; and determining whether authentication of the object is successful based on the first and second similarities, or determining a third similarity between a third feature extracted from the preprocessed IR image and the preprocessed depth image and a third enrolled feature of the valid user and determining whether authentication of the object is successful based on the third similarity.
  6. The method of claim 1, 2, or 3, in combination with the method of claim 4 or 5, wherein the steps of determining whether authentication of the object is successful and determining whether the object is a genuine object, respectively, are based on the same preprocessed IR image and the same preprocessed depth image.
  7. The method of any of the preceding claims, further comprising acquiring the IR image including the object and the depth image including the object.
  8. The method of any of the preceding claims, wherein the generating of the preprocessed IR image comprises generating the preprocessed IR image such that an edge component of the IR image is emphasized in the preprocessed IR image.
  9. The method of any of the preceding claims, wherein the generating of the preprocessed IR image comprises generating a first intermediate image based on pixel values of a current pixel and neighboring pixels of the current pixel in the IR image, generating a second intermediate image by performing normalization on the IR image, and generating the preprocessed IR image based on the IR image, the first intermediate image, and the second intermediate image, wherein a pixel of the preprocessed IR image preferably includes a pixel value of the current pixel in the IR image, a pixel value of a pixel at a corresponding position in the first intermediate image, and a pixel value of a pixel at a corresponding position in the second intermediate image.
  10. The method of claim 9, wherein the generating of the first intermediate image comprises combining a pixel value of the current pixel in the IR image and a pixel value of a pixel at a corresponding position in a generated single-channel IR image; and, wherein the generated single-channel IR image is preferably generated by combining a pixel value of a pixel at the corresponding position in a first channel IR image, a pixel value of a pixel at the corresponding position in a second channel IR image, a pixel value of a pixel at the corresponding position in a third channel IR image, and a pixel value of a pixel at the corresponding position in a fourth channel IR image, and wherein more preferably the pixel value of the pixel in the first channel IR image is a pixel value of a pixel positioned immediately above the current pixel in the IR image, the pixel value of the pixel in the second channel IR image is a pixel value of a pixel positioned immediately below the current pixel in the IR image, the pixel value of the pixel in the third channel IR image is a pixel value of a pixel positioned immediately to the left of the current pixel in the IR image, and the pixel value of the pixel in the fourth channel IR image is a pixel value of a pixel positioned immediately to the right of the current pixel in the IR image.
  11. The method of any of the preceding claims, wherein the generating of the preprocessed depth image comprises: determining feature points of the object in the depth image; and performing the second preprocessing by performing either one or both of a translation and a rotation of the object in the depth image based on the determined feature points.
  12. An apparatus (120), comprising one or more image sensors (130) configured to acquire an infrared, 'IR', image including an object and a depth image including the object, and a processor configured to perform the method of any of the preceding claims.
  13. A computer program comprising instructions that, when executed by the apparatus of claim 12, cause the apparatus to execute the steps of the method of any of claims 1-11.
  14. A computer-readable medium having stored thereon the computer program of claim 13.
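The pose-normalizing second preprocessing recited in claims 1 and 4 — determining a transformation matrix from the positional relationship between the feature points determined in the depth image and the respectively corresponding reference points, then applying it to the depth image or its face region — can be sketched as follows. This is a minimal illustration only: the least-squares 2-D similarity transform (Umeyama's method) and the eye/nose coordinates are assumptions, as the claims do not fix a particular estimation method.

```python
import numpy as np

def estimate_similarity(src, dst):
    """Least-squares 2-D similarity transform (scale, rotation,
    translation) mapping src points onto dst points (Umeyama)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - src_mean, dst - dst_mean
    cov = dst_c.T @ src_c / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))   # guard against a reflection
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = (S * np.diag(D)).sum() / src_c.var(axis=0).sum()
    M = np.eye(3)                        # 3x3 homogeneous matrix
    M[:2, :2] = scale * R
    M[:2, 2] = dst_mean - scale * R @ src_mean
    return M

# Hypothetical feature points detected in the depth image (two eyes, nose)
feature_pts = np.array([[80.0, 95.0], [130.0, 90.0], [106.0, 125.0]])
# Hypothetical predefined reference points of a canonical frontal face
reference_pts = np.array([[70.0, 80.0], [120.0, 80.0], [95.0, 115.0]])
M = estimate_similarity(feature_pts, reference_pts)
# M can then be applied to the pixel coordinates of the depth image
# (or of its face region) to transform the pose of the object.
```

Applying the resulting matrix warps the detected face toward the canonical frontal pose assumed by the reference points, which is what "transforming a pose of the object" amounts to in this sketch.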

Description

BACKGROUND

1. Field

The application concerns a method and apparatus with a liveness test and/or biometric authentication. The application further concerns a computer program comprising instructions to cause such an apparatus to execute the steps of such a method, and a computer-readable medium having stored thereon said computer program.

2. Description of Related Art

In a user authentication system, a computing device may determine whether to allow access to the computing device based on authentication information provided by a user. The authentication information may include a password input by the user or biometric information of the user. The biometric information may include information related to a fingerprint, an iris, and/or a face.

Face anti-spoofing technology may verify whether a face input into the computing device is a fake face or a genuine face. For this, features such as Local Binary Patterns (LBP), Histogram of Oriented Gradients (HOG), and Difference of Gaussians (DoG) may be extracted from the input image, and whether the input face is a fake face may be determined based on the extracted features. Face spoofing may include attacks (attempts to cause the face anti-spoofing technology to improperly determine that a fake-face input is a genuine face) using a photo, a video, or a mask.

Document US 2019/251237 A1 (PARK SUNGUN [KR] ET AL), 15 August 2019, discloses a liveness-test method using matching between an image and a depth image.

SUMMARY

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
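As a concrete illustration of the related-art feature extraction mentioned above, a basic Local Binary Pattern (LBP) can be computed as below. This is a minimal sketch of the classic 8-neighbour LBP for interior pixels, not the specific variant used by any particular anti-spoofing system:

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour local binary pattern for interior pixels:
    each neighbour with value >= the centre pixel contributes one
    bit to an 8-bit code describing the local texture."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    # neighbour offsets in clockwise order starting at top-left
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(centre)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= centre).astype(np.int32) << bit
    return code
```

A histogram of such codes over a face region is the kind of hand-crafted feature that spoof detection relied on before the neural-network-based liveness models recited in the claims.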
In one general aspect, a processor-implemented method includes: generating a preprocessed infrared (IR) image by performing first preprocessing based on an IR image including an object; generating a preprocessed depth image by performing second preprocessing based on a depth image including the object; and determining whether the object is a genuine object based on the preprocessed IR image and the preprocessed depth image.

The method may include acquiring the IR image including the object and the depth image including the object. The determining of whether the object is the genuine object may include determining whether the object is an animate object.

The generating of the preprocessed IR image may include generating the preprocessed IR image such that an edge component of the IR image is emphasized in the preprocessed IR image. The generating of the preprocessed IR image may include: generating a first intermediate image based on pixel values of a current pixel and neighboring pixels of the current pixel in the IR image; generating a second intermediate image by performing normalization on the IR image; and generating the preprocessed IR image based on the IR image, the first intermediate image, and the second intermediate image. A pixel of the preprocessed IR image may include a pixel value of the current pixel in the IR image, a pixel value of a pixel at a corresponding position in the first intermediate image, and a pixel value of a pixel at a corresponding position in the second intermediate image.

The generating of the first intermediate image may include combining a pixel value of the current pixel in the IR image and a pixel value of a pixel at a corresponding position in a generated single-channel IR image.
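The three-channel construction described above — the IR image, a neighbour-based first intermediate image, and a normalised second intermediate image stacked per pixel — can be sketched as follows. The concrete combination operator (here, a difference from the 4-neighbour mean, which emphasises edge components) and min-max normalisation are illustrative assumptions; the text leaves both unspecified:

```python
import numpy as np

def first_preprocess(ir):
    """Sketch of the first preprocessing: stack the IR image with
    (1) a neighbour-based first intermediate image and (2) a
    normalised second intermediate image into one 3-channel input.
    The difference-from-neighbour-mean operator and min-max
    normalisation are illustrative assumptions."""
    ir = np.asarray(ir, dtype=np.float32)
    pad = np.pad(ir, 1, mode='edge')
    # four shifted single-channel images: up, down, left, right
    up = pad[:-2, 1:-1]
    down = pad[2:, 1:-1]
    left = pad[1:-1, :-2]
    right = pad[1:-1, 2:]
    neighbour_mean = (up + down + left + right) / 4.0
    first_inter = ir - neighbour_mean   # edge-emphasising response
    second_inter = (ir - ir.min()) / max(ir.max() - ir.min(), 1e-6)
    return np.stack([ir, first_inter, second_inter], axis=-1)
```

Each output pixel then carries the original IR value, the neighbour-based value, and the normalised value at the corresponding position, matching the per-pixel composition described in the summary.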
The generated single-channel IR image may be generated by combining a pixel value of a pixel at the corresponding position in a first channel IR image, a pixel value of a pixel at the corresponding position in a second channel IR image, a pixel value of a pixel at the corresponding position in a third channel IR image, and a pixel value of a pixel at the corresponding position in a fourth channel IR image.

The pixel value of the pixel in the first channel IR image may be a pixel value of a pixel positioned immediately above the current pixel in the IR image, the pixel value of the pixel in the second channel IR image may be a pixel value of a pixel positioned immediately below the current pixel in the IR image, the pixel value of the pixel in the third channel IR image may be a pixel value of a pixel positioned immediately to the left of the current pixel in the IR image, and the pixel value of the pixel in the fourth channel IR image may be a pixel value of a pixel positioned immediately to the right of the current pixel in the IR image.

The generating of the preprocessed depth image may include: determining feature points of the object in the depth image; and performing the second preprocessing by performing either one or both of a translation and a rotation of the object in the depth image based on the determined feature points. The generating of the preprocessed depth image may in