CN-116188848-B - Certificate classification verification method and device, electronic equipment and storage medium
Abstract
The application provides a certificate category verification method and apparatus, an electronic device, and a storage medium. The method includes: acquiring a picture to be processed; performing face recognition on the picture to be processed to obtain the number of faces; when the number of faces equals a first threshold, classifying the picture to be processed to obtain a first confidence; performing background segmentation on the picture to be processed to obtain a first sub-confidence; extracting facial feature points from the picture to be processed to obtain a second sub-confidence; performing text recognition on the picture to be processed to obtain a third sub-confidence; obtaining a second confidence from the first, second, and third sub-confidences; and determining whether the picture to be processed is a certificate according to the first confidence and the second confidence.
Inventors
- Li Zhi
- Zhan Nianji
- Wu Wen
- Na Yingquan
Assignees
- 招联消费金融有限公司
Dates
- Publication Date
- 20260508
- Application Date
- 20230110
Claims (11)
- 1. A certificate category verification method, the method comprising: acquiring a picture to be processed; performing face recognition on the picture to be processed to obtain the number of faces; when the number of faces equals a first threshold, classifying the picture to be processed to obtain a first confidence; performing background segmentation on the picture to be processed to obtain a first sub-confidence, which comprises: performing background segmentation on the picture to be processed to obtain a sub-background picture, performing grayscale processing on the sub-background picture to obtain a corresponding grayscale picture, calculating the mean of the pixel values in the grayscale picture, counting the pixels in the grayscale picture whose values are greater than or equal to a first grayscale threshold and less than or equal to a second grayscale threshold, wherein the first grayscale threshold is less than the mean and the second grayscale threshold is greater than the mean, calculating the ratio of that pixel count to the total number of pixels in the grayscale picture, and taking the ratio as the first sub-confidence; extracting facial feature points from the picture to be processed to obtain a second sub-confidence; performing text recognition on the picture to be processed to obtain a third sub-confidence; obtaining a second confidence from the first, second, and third sub-confidences; and determining whether the picture to be processed is a certificate photo according to the first confidence and the second confidence, wherein the picture to be processed is determined to be a certificate photo when both the first confidence and the second confidence fall within a target confidence interval, and otherwise is determined not to be a certificate photo.
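The background step of claim 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the two grayscale thresholds are only required to bracket the mean, so the symmetric `band` parameter here is a hypothetical choice.

```python
import numpy as np

def first_sub_confidence(gray, band=30):
    # gray: 2-D array of grayscale values for the segmented background region.
    # Count pixels whose value lies in [mean - band, mean + band] (the first
    # threshold below the mean, the second above it, per claim 1) and take
    # the ratio to the total pixel count as the first sub-confidence.
    mean = gray.mean()
    lo, hi = mean - band, mean + band
    in_band = np.count_nonzero((gray >= lo) & (gray <= hi))
    return in_band / gray.size
```

A near-uniform background (as in a compliant certificate photo) scores close to 1.0, while a cluttered background scores lower.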
- 2. The method according to claim 1, wherein classifying the picture to be processed to obtain the first confidence comprises: extracting features from the picture to be processed to obtain initial features, wherein the initial features comprise k sub-features; performing attention processing on the picture to be processed to obtain a weight coefficient for each sub-feature; weighting the k sub-features by their weight coefficients to obtain a target feature; and obtaining the first confidence from the target feature.
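The weighting step of claim 2 can be sketched as below. The patent does not spell out the attention mechanism, so scoring each sub-feature by its norm and softmax-normalizing the scores is a hypothetical stand-in.

```python
import numpy as np

def attention_weighted_feature(sub_features):
    # sub_features: k sub-features of equal dimension, shape (k, d).
    # Derive one weight coefficient per sub-feature, then combine the
    # k sub-features into a single target feature (claim 2).
    sub_features = np.asarray(sub_features, float)
    scores = np.linalg.norm(sub_features, axis=1)      # hypothetical scoring
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                           # softmax over k scores
    target = (weights[:, None] * sub_features).sum(axis=0)
    return weights, target
```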
- 3. The method according to claim 2, wherein extracting features from the picture to be processed to obtain the initial features comprises: acquiring pixel features of the picture in the RGB color space to obtain x sub-features; acquiring color features of the picture in the HSV color space to obtain y sub-features; performing a wavelet transform on the picture to obtain z sub-features; and concatenating the x, y, and z sub-features to obtain the initial features, wherein k = x + y + z.
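Claim 3's three feature sources can be sketched as follows. The patent does not fix which statistics are taken, so channel means and one-level Haar sub-band energies are hypothetical choices standing in for the RGB, HSV, and wavelet features.

```python
import colorsys
import numpy as np

def haar_1level(img):
    # One-level 2-D Haar transform of a single-channel image with even
    # dimensions; returns the four sub-bands (LL, LH, HL, HH).
    a = (img[0::2] + img[1::2]) / 2            # vertical average
    d = (img[0::2] - img[1::2]) / 2            # vertical detail
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def initial_features(rgb):
    # rgb: H x W x 3 array with values in [0, 1].
    x_feats = rgb.reshape(-1, 3).mean(axis=0)                  # x = 3 RGB features
    hsv = np.array([colorsys.rgb_to_hsv(*p) for p in rgb.reshape(-1, 3)])
    y_feats = hsv.mean(axis=0)                                 # y = 3 HSV features
    gray = rgb.mean(axis=2)
    z_feats = np.array([np.abs(b).mean() for b in haar_1level(gray)])  # z = 4
    return np.concatenate([x_feats, y_feats, z_feats])         # k = x + y + z
```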
- 4. A method according to any of claims 1-3, characterized in that the classification of the picture to be processed is performed by a certificate classification model trained by the following steps: acquiring a training image; acquiring a plurality of spectrograms of the training image; inputting the training image into an initial model to obtain a plurality of feature maps, wherein the spectrograms correspond one-to-one with the feature maps and any spectrogram has the same size as its corresponding feature map; calculating the mean square error between each spectrogram and its feature map to obtain a plurality of mean square errors; taking the average of the mean square errors as a first loss; determining a cross-entropy loss from the plurality of feature maps and taking it as a second loss; obtaining a target loss from the first loss and the second loss; and training the initial model with the target loss to obtain the certificate classification model.
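The loss construction of claim 4 can be sketched numerically. The patent only says the target loss is obtained from the two losses, so the simple sum with an `alpha` weight is a hypothetical combination; the cross-entropy here is computed from classifier logits rather than from the feature maps directly, which is also an assumption.

```python
import numpy as np

def target_loss(spectrograms, feature_maps, logits, label, alpha=1.0):
    # First loss (claim 4): average of the per-pair mean square errors
    # between each spectrogram and its same-sized feature map.
    mses = [np.mean((s - f) ** 2) for s, f in zip(spectrograms, feature_maps)]
    first_loss = float(np.mean(mses))
    # Second loss: cross-entropy of the classification output.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    second_loss = -float(np.log(probs[label]))
    # Target loss: weighted combination (weighting is a hypothetical choice).
    return first_loss + alpha * second_loss
```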
- 5. The method of claim 1, wherein extracting facial feature points from the picture to be processed to obtain the second sub-confidence comprises: extracting facial feature points from the picture to be processed to obtain a plurality of facial feature points; selecting a plurality of target facial feature points from the plurality of facial feature points; acquiring a plurality of two-dimensional pixel coordinates of the target facial feature points, wherein the target facial feature points correspond one-to-one with the two-dimensional pixel coordinates; calculating a rotation vector from the three-dimensional space coordinates that correspond one-to-one to the facial feature points of a certificate-photo face; determining a rotation matrix from the rotation vector; determining an Euler angle from the rotation matrix, wherein the Euler angle represents the rotation angle of the face in the picture to be processed; and normalizing the rotation angle of the face to obtain the second sub-confidence.
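The rotation-vector-to-confidence path of claim 5 can be sketched as follows. Rodrigues' formula converts the rotation vector to a matrix; in practice the rotation vector itself would come from a 2D-3D pose solver such as OpenCV's solvePnP, which is omitted here. The linear normalization and the `max_angle` cap are hypothetical choices.

```python
import numpy as np

def rotation_matrix(rvec):
    # Rodrigues' formula: rotation vector (axis * angle) to a 3x3 matrix.
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def second_sub_confidence(rvec, max_angle=np.pi / 2):
    # Recover the face rotation angle from the matrix trace and normalize
    # it to [0, 1]; larger means closer to a frontal certificate-photo face.
    R = rotation_matrix(np.asarray(rvec, float))
    angle = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    return 1.0 - min(angle, max_angle) / max_angle
```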
- 6. The method of claim 1, wherein performing text recognition on the picture to be processed to obtain the third sub-confidence comprises: performing text recognition on the picture to be processed to obtain a first text; segmenting the first text to obtain at least one word; comparing each of the at least one word with the words in a regular-expression template to obtain the words that match the template; and calculating the ratio of the character count of the matched words to the total character count of the template, and determining the ratio as the third sub-confidence.
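The matching step of claim 6 can be sketched as below. The exact form of the regular template is not fixed by the patent; a list of regex patterns is a hypothetical representation, and the OCR step producing the input text is omitted.

```python
import re

def third_sub_confidence(ocr_text, template_patterns):
    # Segment the recognized text into words, keep those that match any
    # pattern in the template, and divide their character count by the
    # template's total character count (claim 6).
    words = re.findall(r"\w+", ocr_text)
    matched = [w for w in words
               if any(re.fullmatch(p, w) for p in template_patterns)]
    total = sum(len(p) for p in template_patterns)
    return sum(len(w) for w in matched) / total if total else 0.0
```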
- 7. The method of claim 1, wherein obtaining the second confidence from the first sub-confidence, the second sub-confidence, and the third sub-confidence comprises: weighting the first, second, and third sub-confidences by respective coefficients and combining them to obtain the second confidence.
- 8. The method of claim 1, wherein determining whether the picture to be processed is a certificate according to the first confidence and the second confidence comprises: determining that the picture to be processed is a certificate photo when both the first confidence and the second confidence fall within a target confidence interval; and otherwise determining that the picture to be processed is not a certificate.
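Claims 7 and 8 together can be sketched as follows. The weight values and the target confidence interval are hypothetical; the patent leaves both open.

```python
def second_confidence(s1, s2, s3, weights=(0.4, 0.3, 0.3)):
    # Claim 7: weight the three sub-confidences and sum them.
    w1, w2, w3 = weights
    return w1 * s1 + w2 * s2 + w3 * s3

def is_certificate(conf1, conf2, interval=(0.8, 1.0)):
    # Claim 8: the picture is a certificate photo only when both the
    # first and second confidences fall inside the target interval.
    lo, hi = interval
    return lo <= conf1 <= hi and lo <= conf2 <= hi
```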
- 9. A certificate category verification device, comprising: an acquisition unit configured to acquire a picture to be processed; and a processing unit configured to perform face recognition on the picture to be processed to obtain the number of faces; the processing unit is further configured to input the picture to be processed into a certificate classification model to obtain a first confidence when the number of faces equals a first threshold; the processing unit is further configured to perform background segmentation on the picture to be processed to obtain a first sub-confidence, which comprises: performing background segmentation on the picture to be processed to obtain a sub-background picture, performing grayscale processing on the sub-background picture to obtain a corresponding grayscale picture, calculating the mean of the pixel values in the grayscale picture, counting the pixels in the grayscale picture whose values are greater than or equal to a first grayscale threshold and less than or equal to a second grayscale threshold, wherein the first grayscale threshold is less than the mean and the second grayscale threshold is greater than the mean, calculating the ratio of that pixel count to the total number of pixels in the grayscale picture, and taking the ratio as the first sub-confidence; the processing unit is further configured to extract facial feature points from the picture to be processed to obtain a second sub-confidence; the processing unit is further configured to perform text recognition on the picture to be processed to obtain a third sub-confidence; the processing unit is further configured to obtain a second confidence from the first, second, and third sub-confidences; and the processing unit is further configured to determine whether the picture to be processed is a certificate according to the first confidence and the second confidence, wherein the picture to be processed is determined to be a certificate when both the first confidence and the second confidence fall within a target confidence interval, and otherwise is determined not to be a certificate.
- 10. An electronic device comprising a processor and a memory, the processor being coupled to the memory, the memory being for storing a computer program, the processor being for executing the computer program stored in the memory to cause the electronic device to perform the method of any one of claims 1-8.
- 11. A computer readable storage medium, characterized in that the computer readable storage medium stores a computer program which is executed by a processor to implement the method of any one of claims 1-8.
Description
Certificate classification verification method and device, electronic equipment and storage medium

Technical Field

The present application relates to the field of image processing, and in particular to a method, an apparatus, an electronic device, and a storage medium for verifying a certificate category.

Background

Currently, a user uploads various items of personal information, including pictures, before applying for a loan. To verify whether an uploaded picture is a certificate, the picture must be checked in several respects. The traditional approach uses a neural network model and decides whether the uploaded picture is a certificate from the confidence the model outputs. However, a neural network model can verify certificate photos accurately only in specific scenarios, for example only for high-quality uploads. It is therefore difficult to guarantee accurate verification across all scenarios with a neural network alone, which increases loan risk.

Disclosure of Invention

In view of these problems, the present application provides a certificate category verification method, apparatus, electronic device, and storage medium that jointly verify whether a picture is a certificate through several methods, improving verification accuracy so that pictures uploaded in any scenario can be verified accurately and loan risk is reduced.
To achieve the above object, a first aspect of an embodiment of the present application provides a certificate category verification method, including: acquiring a picture to be processed; performing face recognition on the picture to be processed to obtain the number of faces; when the number of faces equals a first threshold, classifying the picture to be processed to obtain a first confidence; performing background segmentation on the picture to be processed to obtain a first sub-confidence; extracting facial feature points from the picture to be processed to obtain a second sub-confidence; performing text recognition on the picture to be processed to obtain a third sub-confidence; obtaining a second confidence from the first, second, and third sub-confidences; and determining whether the picture to be processed is a certificate according to the first confidence and the second confidence. With reference to the first aspect, in a possible implementation, classifying the picture to be processed to obtain the first confidence further includes: extracting features from the picture to be processed to obtain initial features, wherein the initial features comprise k sub-features; performing attention processing on the target feature map to obtain a weight coefficient for each sub-feature; weighting the k sub-features by their weight coefficients to obtain a target feature; and obtaining the first confidence from the target feature.
With reference to the first aspect, in a possible implementation, extracting features from the picture to be processed to obtain the initial features includes: acquiring pixel features of the picture in the RGB color space to obtain x sub-features; acquiring color features of the picture in the HSV color space to obtain y sub-features; performing a wavelet transform on the picture to obtain z sub-features; and concatenating the x, y, and z sub-features to obtain the initial features, wherein k = x + y + z. With reference to the first aspect, in a possible implementation, the classification of the picture to be processed is implemented with a certificate photo classification model trained by the following steps: acquiring a training image; acquiring a plurality of spectrograms of the training image; inputting the training image into an initial model to obtain a plurality of feature maps, wherein the spectrograms correspond one-to-one with the feature maps and any spectrogram has the same size as its corresponding feature map; calculating the mean square error between each spectrogram and its feature map to obtain a plurality of mean square errors; taking the average value of the plurality of mean square errors as a first loss