CN-121982760-A - Anti-cheating method integrating light analysis and background scene collaborative verification
Abstract
The invention relates to an anti-cheating method integrating light analysis and background scene collaborative verification, in the technical field of computer vision and biometric recognition. The method comprises: acquiring attendance image data and performing liveness feature extraction and verification while also extracting and analyzing blue-light features and reflective-deformation features, with a feature weight distribution mechanism introduced to generate an anti-counterfeiting liveness verification conclusion; constructing an image basic-validity verification mechanism that jointly extracts and analyzes face occupation ratio and background integrity and synchronously extracts the spatio-temporal distribution features of background light to generate an image basic-validity assessment conclusion; extracting background scene features, performing consistency verification, and constructing an attendance-scene light collaborative depth verification model to generate a scene adaptation analysis report; and presetting a historical-image duplicate-screening database, comparing the similarity of the current and historical attendance image data, and combining a timestamp association verification mechanism to generate a duplicate-acquisition screening result. The method improves the accuracy and reliability of attendance anti-cheating.
Inventors
- WANG NAN
- KANG JIAN
Assignees
- 上海紫鸾网络科技有限公司 (Shanghai Ziluan Network Technology Co., Ltd.)
Dates
- Publication Date
- 2026-05-05
- Application Date
- 2026-01-22
Claims (12)
- 1. An anti-cheating method integrating light analysis and background scene collaborative verification, characterized by comprising the following steps: S1, acquiring attendance image data, performing liveness feature extraction and verification while extracting and analyzing blue-light features and reflective-deformation features, and introducing a feature weight distribution mechanism to fuse the liveness verification result with the blue-light and reflective-deformation analysis results, generating an anti-counterfeiting liveness verification conclusion; S2, constructing an image basic-validity verification mechanism, jointly extracting and analyzing face occupation ratio and background integrity from the attendance image data, synchronously extracting the spatio-temporal distribution features of background light, matching those features against the ambient-light reference information for the attendance area and time period, and generating an image basic-validity assessment conclusion by applying a preset verification deviation threshold; S3, presetting a scene adaptation verification benchmark, extracting background scene features from the attendance image data, verifying their consistency with the benchmark scene features of the preset attendance scene, constructing an attendance-scene light collaborative depth verification model, performing scene depth analysis on the consistency verification results, and generating a scene adaptation analysis report; and S4, presetting a historical-image duplicate-screening database, comparing the similarity of the current attendance image data with historical attendance image data, and cross-checking the timeliness of the current attendance image, based on the similarity comparison result, with a timestamp association verification mechanism, generating a duplicate-acquisition screening result.
- 2. The method of claim 1, wherein performing liveness feature extraction and verification comprises: locating the liveness feature regions in the attendance image data; synchronously extracting dynamic liveness features, namely facial micro-motion correlation features, and static liveness features, namely facial texture and facial contour features; applying noise-reduction preprocessing and feature enhancement to the extracted dynamic and static liveness features; comparing the preprocessed liveness features dimension by dimension against a preset liveness feature reference library; and calculating the feature matching degree.
- 3. The method of claim 1, wherein extracting and analyzing the blue-light features and reflective-deformation features comprises: applying noise-reduction and contrast-enhancement preprocessing to the attendance image data; locating the blue-light distribution ranges over the face and surrounding areas and extracting the intensity gradients and distribution densities of the blue light; synchronously identifying highlight reflective areas in the image and extracting their area occupation ratios, brightness peaks, and edge blur; detecting deformation offsets of the face and background contours; comparing the extracted blue-light features against preset normal-environment blue-light references; matching the reflective-deformation features against preset real-scene reflective-deformation threshold ranges; and outputting the analysis results of the blue-light and reflective-deformation features.
- 4. The method of claim 1, wherein the feature weight distribution mechanism comprises: presetting base weight proportions for the liveness verification result, the blue-light feature analysis result, and the reflective-deformation analysis result; constructing a scene weight adjustment model; acquiring in real time the factors of the current attendance scene that affect feature effectiveness and dynamically correcting the base weight proportions; evaluating the extraction completeness and analysis stability of each feature; and fusing the corrected feature results by a weighted-summation algorithm to generate the anti-counterfeiting liveness verification conclusion.
- 5. The method of claim 4, wherein constructing the scene weight adjustment model comprises: obtaining the correlation factors that affect feature effectiveness under different attendance scenes; recording the optimal weight distribution scheme for the features in each scene to generate a scene-weight association sample set; establishing, from the scene-weight association sample set, a correspondence rule between the correlation factors and the feature weight proportions; and validating the correspondence rule against the scene-weight association sample set.
- 6. The method of claim 1, wherein constructing the image basic-validity verification mechanism comprises: obtaining qualified and unqualified attendance image samples and organizing them into an image-validity verification sample library; summarizing the ambient-light reference information for different time periods into a reference information library; deriving a qualified range for face occupation ratio and a judgment standard for background integrity; setting, in combination with the reference information library, the matching rules and verification deviation thresholds for the spatio-temporal distribution features of background light; integrating these into a collaborative verification logic covering face occupation ratio, background integrity, and light features; testing and adjusting the matching rules and deviation thresholds with the sample library data; and continuously optimizing them with subsequent actual attendance data.
- 7. The method of claim 1, wherein generating the image basic-validity assessment conclusion comprises: applying the image basic-validity verification mechanism to the attendance image data to jointly extract and analyze face occupation ratio and background integrity while synchronously extracting the spatio-temporal distribution features of background light; matching those features against the ambient-light reference information for the attendance area and time period; recording the pass state and deviation data of each verification item; judging the verification results comprehensively against the preset verification deviation threshold; rechecking and confirming the judgment; and generating the image basic-validity assessment conclusion.
- 8. The method of claim 1, wherein constructing the attendance-scene light collaborative depth verification model comprises: collecting the benchmark scene features of the preset attendance scene, the background scene features of different attendance images, the corresponding consistency verification results, and standard scene-depth-analysis conclusions into a base training sample set; establishing, from the base sample set, an association judgment rule between consistency verification results and scene depth analysis; and verifying and optimizing the association judgment rule against the base sample set.
- 9. The method of claim 1, wherein generating the scene adaptation analysis report comprises: obtaining the background scene feature extraction result of the attendance image, the consistency verification result against the preset scene adaptation benchmark, and the scene depth analysis result output by the attendance-scene light collaborative depth verification model; analyzing the scene type of the current attendance image, its adaptation level to the benchmark scene, and the core scene-matching conclusion from the depth analysis; and integrating these into the scene adaptation analysis report.
- 10. The method of claim 1, wherein comparing the current attendance image data with the historical attendance image data comprises: retrieving the historical attendance image data associated with the current attendance image from the historical-image duplicate-screening database; extracting the core features of the current and historical attendance image data; performing a multidimensional comparison and calculating the feature matching degree; and integrating these into the similarity comparison result.
- 11. The method of claim 1, wherein the timestamp association verification mechanism comprises: obtaining the generation timestamp and transmission timestamp of the current attendance image data and retrieving the corresponding timestamps of the historical attendance image data from the historical-image duplicate-screening database; comparing the current and historical timestamps in temporal order; establishing a collaborative verification logic between the similarity comparison result and the timestamp comparison result; setting timestamp-validity judgment standards according to the similarity level between the current and historical attendance image data; and adding timestamp-anomaly handling rules that identify and flag missing, tampered, and sequence-contradictory timestamps.
- 12. The method of claim 1, wherein generating the duplicate-acquisition screening result comprises: obtaining the similarity comparison result between the current and historical attendance image data together with the timestamp association verification result; combining the two results to reach a judgment; and assembling the judgment conclusion, the similarity, and the core timestamp evidence into the duplicate-acquisition screening result.
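As an illustration of the blue-light and reflective-deformation screening described in claim 3, the check can be sketched as threshold comparisons against preset real-scene references. All feature names and numeric thresholds below are assumptions for illustration; the patent does not specify concrete values.

```python
# Sketch of the claim 3 screening step: compare extracted light features
# against preset "normal environment" references. Thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class LightFeatures:
    blue_intensity_gradient: float   # mean gradient of blue-channel intensity
    blue_density: float              # fraction of pixels in the blue-light range
    glare_area_ratio: float          # highlight reflective area / image area
    glare_peak_brightness: float     # 0..255 peak brightness in reflective areas
    contour_offset: float            # face/background contour offset, in pixels

# Assumed reference thresholds (illustrative, not from the patent).
BLUE_DENSITY_MAX = 0.15      # phone screens typically push this much higher
GLARE_AREA_MAX = 0.05
CONTOUR_OFFSET_MAX = 3.0

def analyze_light(f: LightFeatures) -> dict:
    """Flag features that fall outside the preset real-scene ranges."""
    flags = {
        "blue_light_abnormal": f.blue_density > BLUE_DENSITY_MAX,
        "glare_abnormal": f.glare_area_ratio > GLARE_AREA_MAX
                          and f.glare_peak_brightness > 240,
        "deformation_abnormal": f.contour_offset > CONTOUR_OFFSET_MAX,
    }
    flags["suspect"] = any(flags.values())
    return flags
```

An image re-shot from a phone screen would typically trip several flags at once (high blue-light density, a bright glare patch, contour deformation), while a normally lit live capture trips none.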
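The feature weight distribution mechanism of claim 4 can be sketched as a weighted summation whose base weights are corrected by per-scene reliability factors and then renormalized. The base weights, factor names, and pass threshold below are illustrative assumptions, not values from the patent.

```python
# Minimal sketch of claim 4's fusion: scene-corrected weighted summation of the
# liveness, blue-light, and reflective-deformation sub-results.
# All numbers are hypothetical.

BASE_WEIGHTS = {"liveness": 0.5, "blue_light": 0.3, "reflection": 0.2}

def adjust_weights(base: dict, scene_factors: dict) -> dict:
    """Scale each base weight by a scene reliability factor, then renormalize
    so the corrected weights still sum to 1."""
    raw = {k: base[k] * scene_factors.get(k, 1.0) for k in base}
    total = sum(raw.values())
    return {k: v / total for k, v in raw.items()}

def fuse(scores: dict, scene_factors: dict) -> tuple:
    """scores: per-check confidences in [0, 1].
    Returns the fused score and a pass/fail verdict (threshold assumed)."""
    w = adjust_weights(BASE_WEIGHTS, scene_factors)
    fused = sum(w[k] * scores[k] for k in w)
    return fused, fused >= 0.6  # hypothetical acceptance threshold
```

For example, in bright outdoor daylight the blue-light check carries less information, so a scene factor such as `{"blue_light": 0.2}` shifts weight toward the liveness and reflection checks, which is the dynamic correction claim 4 describes.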
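The duplicate-screening logic of claims 10 through 12 can be sketched as a similarity search over the historical database followed by a timestamp cross-check. The cosine similarity metric, the feature-vector representation, and the 0.98 threshold are assumptions chosen for illustration; the claims specify only "multidimensional comparison" and "feature matching degree".

```python
# Sketch of claims 10-12: compare the current image's feature vector against
# the historical database, then cross-check timestamps. Metric and thresholds
# are hypothetical.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def screen_duplicate(current_vec, current_ts, history):
    """history: list of (feature_vector, timestamp) pairs from the preset
    duplicate-screening database. Flags likely image reuse when a
    near-identical historical image exists with an earlier timestamp."""
    best_sim, best_ts = 0.0, None
    for vec, ts in history:
        sim = cosine_similarity(current_vec, vec)
        if sim > best_sim:
            best_sim, best_ts = sim, ts
    duplicate = best_sim >= 0.98          # hypothetical similarity threshold
    ts_conflict = duplicate and best_ts is not None and current_ts > best_ts
    return {"similarity": best_sim, "duplicate": duplicate,
            "timestamp_conflict": ts_conflict}
```

This mirrors the collaborative logic of claim 11: a high similarity level triggers the stricter timestamp-validity check, so a reused historical image resubmitted with a newer timestamp is flagged rather than accepted.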
Description
Anti-cheating method integrating light analysis and background scene collaborative verification
Technical Field
The invention belongs to the technical field of computer vision and biometric recognition, and particularly relates to an anti-cheating method integrating light analysis and background scene collaborative verification.
Background
With the rapid development of biometric recognition technology, face recognition has been widely applied to attendance management thanks to its non-contact interaction, high recognition efficiency, and broad applicability. In densely populated settings such as enterprises, schools, and campuses, face recognition attendance systems are gradually replacing traditional punch cards and fingerprint recognition, substantially improving the automation and convenience of attendance management and becoming core infrastructure for a standardized attendance order. In practice, however, various cheating behaviors occur frequently and seriously undermine the authenticity of attendance data and the fairness of management. Common cheating means include using another person's face photo, playing a recorded video in place of in-person attendance, simulating a real face with a fake 3D face model, submitting images shot in non-designated attendance scenes, and reusing historical attendance images with falsified timestamps. Such behaviors not only distort attendance data but can also cause management problems such as payroll accounting errors and disrupted work order, imposing strict requirements on the anti-cheating capability of attendance systems.
To address these problems, existing anti-cheating schemes are built around liveness detection technology, distinguishing real faces from fake ones mainly by extracting dynamic or static facial features. However, the prior art has obvious defects and falls short of all-round anti-cheating requirements, specifically:
- Existing schemes rely on a single dynamic or static facial-feature check and provide no dedicated verification mechanism for ambient-light interference; under conditions such as blue-light fill from a phone screen or reflective occlusion during phone re-shooting, facial feature extraction is easily distorted, and cheating indicators such as contour deformation and feature blurring caused by the interference cannot be effectively identified.
- Where multiple feature checks are involved, a fixed-weight superposition is used that ignores how scenes such as strong light, weak light, indoor, and outdoor affect feature effectiveness, so verification accuracy deviates greatly in complex scenes and adaptive, precise judgment is impossible.
- Image validity verification is incomplete: only face parameters are considered, while background-integrity checks and the spatio-temporal consistency of ambient light are neglected, so forgeries such as cropped backgrounds and spliced images cannot be identified, and fabricated attendance images from outside the attendance time period or area cannot be distinguished.
- Scene suitability is poor: no benchmark feature library of the designated attendance scene is established, so the consistency of the attendance image background with the designated scene cannot be checked, and cross-scene cheating, such as submitting images shot at home or on outdoor roadsides, is hard to prevent.
- Duplicate-image screening and real-time verification are insufficient: lacking a complete historical-image comparison mechanism, simple timestamp recording alone cannot effectively identify timestamp tampering or reuse of historical images, and no collaborative logic between similarity comparison and timestamp verification is established, so real-time judgment accuracy is low.
In sum, due to incomplete verification dimensions, poor scene suitability, and weak interference resistance, existing face recognition attendance anti-cheating technology struggles to resist the full range of covert cheating behaviors and cannot meet the high demands of attendance management for data authenticity and reliability. Developing a high-precision anti-cheating method that integrates multidimensional feature analysis with collaborative verification of background scenery and ambient light is therefore a pressing technical problem in the field of face recognition attendance.
Disclosure