CN-116129484-B - Method, device, electronic equipment and storage medium for model training and living body detection
Abstract
The application provides a method, an apparatus, an electronic device and a storage medium for model training and living body detection. The method comprises: acquiring a labeled image training sample set, wherein the training labels of the image training samples comprise face region labels, prosthesis border region labels and genuine/fake labels; the face region labels represent the face region features of the prosthesis attack image samples, the prosthesis border region labels represent the prosthesis border region features of the prosthesis attack image samples, and the genuine/fake labels represent the genuine/fake classification of the prosthesis attack image samples; and training an initial living body detection model by using the image training sample set. The living body detection model comprises a coding network used for encoding the face region and the prosthesis border region of each prosthesis attack image sample, a perception network used for fusing the face region features and the prosthesis border region features corresponding to each prosthesis attack image sample, and a classification network used for performing genuine/fake identification on the fused features of each prosthesis attack image sample to obtain a genuine/fake identification result corresponding to that sample.
Inventors
- GAO LIANG
- ZHOU XUNYI
- ZENG DINGHENG
Assignees
- 马上消费金融股份有限公司 (Mashang Consumer Finance Co., Ltd.)
Dates
- Publication Date
- 20260512
- Application Date
- 20220725
Claims (10)
- 1. A method of model training, comprising: acquiring an image training sample set, wherein the image training sample set comprises a plurality of prosthesis attack image samples and corresponding training labels, the training labels comprise face region labels, prosthesis border region labels and genuine/fake labels, the face region labels are used for representing face region features of the corresponding prosthesis attack image samples, the prosthesis border region labels are used for representing prosthesis border region features of the corresponding prosthesis attack image samples, and the genuine/fake labels are used for representing the genuine/fake classification of the corresponding prosthesis attack image samples; and training an initial living body detection model by using the image training sample set to obtain a living body detection model; wherein the living body detection model comprises a coding network, a perception network and a classification network; the coding network is used for encoding the face region and the prosthesis border region of each prosthesis attack image sample in the plurality of prosthesis attack image samples to obtain the face region features and the prosthesis border region features corresponding to each prosthesis attack image sample; the perception network is used for fusing the face region features and the prosthesis border region features corresponding to each prosthesis attack image sample to obtain the fusion features corresponding to each prosthesis attack image sample; and the classification network is used for performing genuine/fake identification on the fusion features of each prosthesis attack image sample to obtain the genuine/fake identification result corresponding to each prosthesis attack image sample.
- 2. The method of claim 1, characterized in that the coding network comprises a first coding sub-network and a second coding sub-network, wherein the first coding sub-network is used for encoding the face region of the prosthesis attack image sample to obtain the corresponding face region features, and the second coding sub-network is used for encoding the prosthesis border region of the prosthesis attack image sample to obtain the corresponding prosthesis border region features.
- 3. The method of claim 2, characterized in that the face region label is a gray matrix of the prosthesis attack image sample under a first black-and-white binarization rule, wherein in the first black-and-white binarization rule, pixels of the face region take a first gray value and pixels of the non-face region take a second gray value; and the first coding sub-network is specifically used for encoding the gray matrix of the prosthesis attack image sample under the first black-and-white binarization rule to obtain the corresponding face region features.
- 4. The method of claim 2, characterized in that the prosthesis border region label is a gray matrix of the prosthesis attack image sample under a second black-and-white binarization rule, wherein in the second black-and-white binarization rule, pixels of the prosthesis border region take a third gray value and pixels of the non-prosthesis-border region take a fourth gray value; and the second coding sub-network is specifically used for encoding the gray matrix of the prosthesis attack image sample under the second black-and-white binarization rule to obtain the corresponding prosthesis border region features.
- 5. The method of claim 2, characterized in that training the initial living body detection model by using the image training sample set comprises: determining a loss function of the first coding sub-network based on the difference between the face region features corresponding to each prosthesis attack image sample and the face region labels; determining a loss function of the second coding sub-network based on the difference between the prosthesis border region features corresponding to each prosthesis attack image sample and the prosthesis border region labels, and the difference between the genuine/fake identification results and the genuine/fake labels; determining a loss function of the classification network based on the difference between the genuine/fake identification result corresponding to each prosthesis attack image sample and the genuine/fake labels; determining a total loss function of the living body detection model based on the loss functions of the first coding sub-network, the second coding sub-network and the classification network; and determining a training gradient of the living body detection model based on the total loss function of the living body detection model.
- 6. The method of claim 5, characterized in that the loss functions of the first coding sub-network and the second coding sub-network are mean square error loss functions, and the loss function of the classification network is a cross-entropy loss function.
- 7. A living body detection method, characterized by comprising: responding to a living body detection request initiated by a target user, and acquiring a face shot image of the target user; and inputting the face shot image of the target user into a living body detection model to obtain a genuine/fake identification result corresponding to the face shot image of the target user, wherein the living body detection model is trained based on the method of any one of claims 1-6, and the living body detection model is used for encoding the face shot image into corresponding face region features and prosthesis border region features, fusing the face region features and the prosthesis border region features into fusion features, and then performing genuine/fake identification on the face shot image based on the fusion features.
- 8. A living body detection apparatus, characterized by comprising: a shot image acquisition module, used for responding to a living body detection request initiated by a target user to acquire a face shot image of the target user; and a genuine/fake identification module, used for inputting the face shot image of the target user into a living body detection model to obtain a genuine/fake identification result corresponding to the face shot image of the target user, wherein the living body detection model is trained based on the method of any one of claims 1-6, and the living body detection model is used for encoding the face shot image into corresponding face region features and prosthesis border region features, fusing the face region features and the prosthesis border region features into fusion features, and then performing genuine/fake identification on the face shot image based on the fusion features.
- 9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the computer program is executed by the processor to perform the method of any one of claims 1 to 7.
- 10. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the method of any of claims 1 to 7.
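The region labels of claims 3-4 and the losses of claims 5-6 can be illustrated with a small sketch. This is a minimal NumPy illustration, not the patent's implementation: the gray values (255 for the labeled region, 0 elsewhere), the rectangular boxes, and all function names are illustrative assumptions.

```python
import numpy as np

def region_mask(h, w, box, fg=255.0, bg=0.0):
    """Binary gray matrix (claim 3): pixels inside `box`
    (top, left, bottom, right) take the foreground gray value,
    all other pixels take the background gray value."""
    mask = np.full((h, w), bg, dtype=np.float32)
    top, left, bottom, right = box
    mask[top:bottom, left:right] = fg
    return mask

def border_mask(h, w, outer, thickness=1, fg=255.0, bg=0.0):
    """Prosthesis border label (claim 4): a frame of the given
    thickness around the `outer` box, background elsewhere."""
    mask = region_mask(h, w, outer, fg, bg)
    top, left, bottom, right = outer
    # Hollow out the interior so only the border frame remains.
    mask[top + thickness:bottom - thickness,
         left + thickness:right - thickness] = bg
    return mask

def mse_loss(pred, target):
    """Mean square error loss for the two coding sub-networks (claim 6)."""
    return float(np.mean((pred - target) ** 2))

def cross_entropy(probs, label):
    """Cross-entropy loss for the genuine/fake classification (claim 6)."""
    return float(-np.log(probs[label] + 1e-12))

# Toy 8x8 labels: a face box in the middle, a border frame at the edge.
face_label = region_mask(8, 8, (2, 2, 6, 6))
frame_label = border_mask(8, 8, (0, 0, 8, 8), thickness=1)

# Total loss as in claim 5: sum of the three component losses
# (here the "predictions" are the labels themselves, purely for shape).
total = (mse_loss(face_label, face_label)
         + mse_loss(frame_label, frame_label)
         + cross_entropy(np.array([0.1, 0.9]), 1))
```

In a real training loop the masks would be predicted by the two coding sub-networks and compared against these labels; the weighting of the three losses inside the total is not specified by the claims and would be a design choice.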
Description
Method, device, electronic equipment and storage medium for model training and living body detection

Technical Field

The application belongs to the technical field of image processing, and particularly relates to a method, a device, electronic equipment and a storage medium for model training and living body detection.

Background

With the continuous development of biometric recognition and artificial intelligence technology, face recognition has been widely applied and has greatly simplified identity authentication processes such as payment, access control and security check. In practical applications, however, the human face is an open biometric feature that is easily exploited by a malicious party: an attacker can present the face image of a legitimate user on a prosthetic medium, thereby impersonating that user to initiate face recognition. This act of passing recognition by impersonating another user's identity with a prosthetic medium is called a prosthesis attack. How to automatically and efficiently identify prosthesis attacks on face images by machine has therefore become an urgent problem in the industry.

Disclosure of Invention

The application aims to provide a method, a device, electronic equipment and a storage medium for model training and living body detection, which can perform machine-based living body detection on a face image and can be used by a face recognition system to resist prosthesis attacks.
In order to achieve the above object, embodiments of the present application are realized as follows. In a first aspect, a model training method is provided, comprising: acquiring an image training sample set, wherein the image training sample set comprises a plurality of prosthesis attack image samples and corresponding training labels, the training labels comprise face region labels, prosthesis border region labels and genuine/fake labels, the face region labels are used for representing face region features of the corresponding prosthesis attack image samples, the prosthesis border region labels are used for representing prosthesis border region features of the corresponding prosthesis attack image samples, and the genuine/fake labels are used for representing the genuine/fake classification of the corresponding prosthesis attack image samples; and training an initial living body detection model by using the image training sample set to obtain a living body detection model. The living body detection model comprises a coding network, a perception network and a classification network: the coding network is used for encoding the face region and the prosthesis border region of each prosthesis attack image sample in the plurality of prosthesis attack image samples to obtain the face region features and the prosthesis border region features corresponding to each prosthesis attack image sample; the perception network is used for fusing the face region features and the prosthesis border region features corresponding to each prosthesis attack image sample to obtain the fusion features corresponding to each prosthesis attack image sample; and the classification network is used for performing genuine/fake identification on the fusion features of each prosthesis attack image sample to obtain the genuine/fake identification result corresponding to each prosthesis attack image sample.
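The three-network pipeline described above (coding network, perception network, classification network) can be sketched as a single forward pass. This is a minimal stand-in under loud assumptions: the patent does not specify layer types or dimensions, so each network is reduced to one linear layer, the perception network's fusion is read as concatenation followed by a projection, and all weights, names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    """Stand-in for a coding sub-network: one linear layer + ReLU."""
    return np.maximum(x @ w, 0.0)

def perceive(face_feat, frame_feat, w):
    """Stand-in for the perception network: concatenate the face region
    features and prosthesis border region features, then project them
    into a single fused feature vector."""
    return np.maximum(np.concatenate([face_feat, frame_feat]) @ w, 0.0)

def classify(fused, w):
    """Stand-in for the classification network: linear layer + softmax
    over the two classes {genuine, prosthesis}."""
    logits = fused @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()

d = 16                         # flattened image dimension (illustrative)
x = rng.random(d)              # a fake "prosthesis attack image sample"
w_face = rng.random((d, 8))    # first coding sub-network weights
w_frame = rng.random((d, 8))   # second coding sub-network weights
w_fuse = rng.random((16, 8))   # perception network weights
w_cls = rng.random((8, 2))     # classification network weights

face_feat = encode(x, w_face)    # face region features
frame_feat = encode(x, w_frame)  # prosthesis border region features
fused = perceive(face_feat, frame_feat, w_fuse)
probs = classify(fused, w_cls)   # genuine/fake identification result
```

During training, `face_feat` and `frame_feat` would additionally be supervised against the gray-matrix region labels, and `probs` against the genuine/fake label, giving the component losses that claim 5 sums into the total loss.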
In a second aspect, there is provided a living body detection method, comprising: responding to a living body detection request initiated by a target user, and acquiring a face shot image of the target user; and inputting the face shot image of the target user into a living body detection model to obtain a genuine/fake identification result corresponding to the face shot image of the target user, wherein the living body detection model is trained based on the method of the first aspect, and the living body detection model is used for encoding the face shot image into corresponding face region features and prosthesis border region features, fusing the face region features and the prosthesis border region features into fusion features, and then performing genuine/fake identification on the face shot image based on the fusion features. In a third aspect, there is provided a living body detection apparatus, comprising: a shot image acquisition module, used for responding to a living body detection request initiated by a target user to acquire a face shot image of the target user; and a genuine/fake identification module, used for inputting the face shot image of the target user into a living body detection model to obtain a genuine/fake identification result corresponding to the face shot image of the target user, wherein the living body detection model is trained based on the method of the first aspect, and the living body detection model is used for encoding the face shot image into corresponding face region features and prosthesis border region