
CN-122027852-A - Intelligent blocking method for video screen capturing behavior based on AI multi-mode characteristics

CN 122027852 A

Abstract

The application provides an intelligent blocking method for video screen capturing behavior based on AI multi-mode features. Abnormal access data are captured through a bottom-layer interface of the monitoring system, and noise signals are filtered with a preset threshold value to obtain a preliminary screen capturing behavior feature set. The obtained features are then classified, and a decision tree algorithm determines the type of the current capturing path, namely frame buffer reading or decoding output interception, yielding a path classification result. According to a matched tool mode identifier, an available means list for the corresponding path is acquired from a pre-established intervention means library, and a conflict-free means combination sequence is determined from the list. Finally, the intervention means are dynamically superposed onto the video decoding and rendering flow according to the obtained stable intervention intensity level, thereby blocking the capturing path.
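The first stage the abstract describes is threshold-based filtering of low-level access events followed by statistical smoothing. A minimal sketch of that idea in Python; the event fields, anomaly scores, threshold value, and window size are illustrative assumptions, not details from the patent:

```python
# Hypothetical sketch: filter access events against a preset anomaly
# threshold, then smooth the surviving score series with a trailing
# moving average (a simple "statistical method" stand-in).

def filter_access_events(events, threshold=0.6):
    """Keep events whose anomaly score exceeds the preset threshold."""
    return [e for e in events if e["score"] > threshold]

def moving_average(values, window=3):
    """Trailing moving average over up to `window` preceding values."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        out.append(sum(values[lo:i + 1]) / (i + 1 - lo))
    return out

events = [{"t": i, "score": s}
          for i, s in enumerate([0.1, 0.9, 0.8, 0.2, 0.95])]
kept = filter_access_events(events)
smoothed = moving_average([e["score"] for e in kept])
```

Only the three high-score events survive the filter, and the smoothed series damps the remaining fluctuation before any time-window analysis is attempted.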

Inventors

  • WANG HONGJIAN

Assignees

  • 金华市灵匠信息科技有限公司

Dates

Publication Date
2026-05-12
Application Date
2026-03-21

Claims (9)

  1. An intelligent blocking method for video screen capturing behavior based on AI multi-mode characteristics, characterized by comprising the following steps: capturing abnormal access data through a bottom-layer interface of a monitoring system, and filtering noise signals with a preset threshold value to obtain a preliminary screen capturing behavior feature set; performing secondary cleaning of noise signals in the preliminary screen capturing behavior feature set, and smoothing the data with a statistical method to obtain denoised access data records; if abnormal fluctuation exists in the denoised access data records, segmenting the data by time-series analysis to determine a potential screen capturing behavior time window; obtaining corresponding behavior feature data for the potential screen capturing behavior time window, and classifying the feature data with a support vector machine algorithm to judge whether the data belong to real screen capturing behavior; if the classification result indicates real screen capturing behavior, extracting key behavior patterns from the feature data to obtain a specific screen capturing behavior feature set; comparing the specific screen capturing behavior feature set against historical data records to determine the sources and modes of the abnormal behavior and generate a behavior detection result; grouping the abnormal behaviors with a clustering method according to the behavior detection result to obtain an abnormal behavior classification set; applying, according to the abnormal behavior classification set, a decision tree algorithm to classify the specific screen capturing behavior feature set, determining the current capturing path type, whether frame buffer reading or decoding output interception, to obtain a path classification result; and if the path classification result indicates a frame buffer reading type, extracting relevant access right data from the path classification result, grouping the relevant access right data with a clustering algorithm, and judging whether the grouped data match a known screen capturing tool mode to obtain a matched tool mode identifier.
  2. The method of claim 1, further comprising: acquiring an available means list for the corresponding path in a pre-established intervention means library according to the matched tool mode identifier, and determining a conflict-free means combination sequence from the list; calculating, for each means in the determined means combination sequence, a compatibility score with the path classification result, and, if the compatibility score is higher than a preset threshold, adjusting the execution order of the means in the sequence to obtain an optimized sequence; using the optimized sequence to inject corresponding interventions, including noise addition or address rearrangement, into the current capturing path, judging the stability of the system rendering pipeline after injection, and obtaining a stable intervention intensity level; and dynamically superposing the intervention means into the video decoding and rendering flow according to the obtained stable intervention intensity level, so as to block the capturing path.
  3. The method of claim 2, wherein applying a decision tree algorithm to classify the specific screen capturing behavior feature set and determine the current capturing path type comprises: performing preliminary extraction of key information from the specific screen capturing behavior feature set to obtain behavior pattern data; classifying the specific screen capturing behavior feature set with a decision tree algorithm according to the behavior pattern data, judging whether the path type belongs to frame buffer reading, and obtaining a first classification result; if the first classification result points to frame buffer reading, comparing the data against a preset reading mode library according to the reading mode, obtaining matching information, and determining that the path type is frame buffer reading; if the first classification result does not point to frame buffer reading, performing secondary analysis of the data, judging whether the path type belongs to decoding output interception, and obtaining a second classification result; extracting, according to the second classification result, features of the decoding output interception method, acquiring behavior detail data of the interception process, and determining a classification label for the interception method; and comprehensively comparing the path types of the screen capturing behaviors through the combination of the classification label and behavior analysis to obtain the path classification result.
  4. The method of claim 2, wherein, if the path classification result indicates a frame buffer reading type, extracting relevant access right data from the path classification result, grouping the relevant access right data with a clustering algorithm, and judging whether the grouped data match a known screen capturing tool mode comprises: acquiring the records indicated as frame buffer reading type in the path classification result; extracting the corresponding access right data from the path classification result; grouping the access right data with a clustering algorithm to obtain a plurality of grouping sets; acquiring all known mode features from a pre-established screen capturing tool mode library; comparing each grouping set with the known mode features one by one, and, if the structure of a grouping set is consistent with a known mode feature, determining that the match succeeds; when the match succeeds, extracting the tool mode identifier of the known mode; and taking the tool mode identifier as the output result.
  5. The method of claim 2, wherein acquiring the available means list for the corresponding path in the pre-established intervention means library according to the matched tool mode identifier, and determining the conflict-free means combination sequence from the list, comprises: matching the tool mode identifier against the pre-established intervention means library to obtain the available means list under the corresponding path; judging whether a conflict exists between any two means according to the execution conditions and restriction conditions of each means in the available means list; arranging the means marked as conflict-free according to a priority ordering rule to obtain a preliminary means sequence; judging whether the connection between adjacent means in the preliminary means sequence is complete according to their connection attributes; and repeatedly performing the connection judgment until all adjacent pairs in the preliminary means sequence satisfy the connection condition, thereby obtaining the conflict-free means combination sequence.
  6. The method of claim 2, wherein calculating, for each means in the determined means combination sequence, a compatibility score with the path classification result, and, if the compatibility score is higher than a preset threshold, adjusting the execution order of the means in the sequence, comprises: calculating, according to the initial position of each means in the means combination sequence, a relevance score between each means and the path classification result by a path classification method to obtain the compatibility score; if the compatibility score is higher than the preset threshold, adjusting the execution order of the means to obtain temporarily adjusted sequencing logic; performing traversal analysis on the temporarily adjusted sequencing logic to obtain candidate schemes for the optimized sequence; and comparing the compatibility scores of the candidate schemes through a score evaluation mechanism to obtain the optimized sequence.
  7. The method of claim 2, wherein using the obtained optimized sequence to inject corresponding interventions, including noise addition or address rearrangement, into the current capturing path and judging the stability of the system rendering pipeline after injection comprises: performing, according to the optimized sequence, a noise addition operation on core nodes of the capturing path to obtain path data with noise intervention; evaluating the influence of the noise addition on the rendering pipeline in combination with the rendering performance data of the system to obtain operation parameters; if the operation parameters exceed a preset range, dynamically adjusting the intervention intensity to obtain an adjusted intervention scheme; analyzing, for the adjusted intervention scheme, the influence of the address order variation on the rendering pipeline to obtain stability data; and if the stability data do not reach a preset standard, performing secondary optimization of the address order to obtain a path intervention combination.
  8. The method of claim 2, wherein dynamically superposing the intervention means into the video decoding and rendering flow according to the obtained stable intervention intensity level comprises: determining, for the data stream in the video decoding process, configuration parameters of the intervention means according to the stable intervention intensity level; adapting the intervention means to the video decoding module to obtain an intervention strategy; superposing the intervention means into the rendering flow through the intervention strategy, and monitoring key nodes of the capturing path in real time to obtain an intervention effect; and if the intervention effect does not reach a preset threshold, performing a secondary intervention on the key nodes of the capturing path to obtain a supplementary blocking result.
  9. The method of claim 2, wherein recording the classified data with a storage mechanism for the generated path classification result comprises: storing the path classification result in association with the corresponding behavior pattern data; generating structured classification archive data; and persistently preserving the structured classification archive data.
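Claim 1 locates potential screen-capture time windows by segmenting the denoised access record wherever abnormal fluctuation appears. One simple reading of that step, sketched below: flag samples that deviate from the series mean by more than k standard deviations, then merge nearby flags into windows. The deviation factor k and the merging gap are illustrative assumptions, not values from the patent:

```python
from statistics import mean, stdev

def candidate_windows(series, k=1.5, gap=1):
    """Return (start, end) index windows of abnormal fluctuation."""
    mu, sigma = mean(series), stdev(series)
    flagged = [i for i, v in enumerate(series) if abs(v - mu) > k * sigma]
    windows = []
    for i in flagged:
        if windows and i - windows[-1][1] <= gap:
            # Close enough to the previous window: extend it.
            windows[-1] = (windows[-1][0], i)
        else:
            windows.append((i, i))
    return windows

# A burst of anomalous access rates at indices 3-4 becomes one window.
windows = candidate_windows([1, 1, 1, 10, 10, 1, 1, 1, 1, 1])
```

Each returned window would then be handed to the claim's SVM classifier to decide whether it represents real screen-capture behavior.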
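Claims 2, 5, and 6 build a conflict-free sequence of intervention means and then reorder it by a compatibility score against the path classification result. A minimal sketch of both steps; the means names, conflict pairs, score table, and threshold are all hypothetical placeholders:

```python
# Hypothetical conflict table: unordered pairs that cannot coexist.
CONFLICTS = {("noise_add", "frame_blur"), ("frame_blur", "noise_add")}

def conflict_free(means):
    """Greedily keep means that do not conflict with any already kept."""
    kept = []
    for m in means:
        if all((m, k) not in CONFLICTS for k in kept):
            kept.append(m)
    return kept

def order_by_compatibility(means, scores, threshold=0.5):
    """Move means scoring above the threshold to the front,
    highest compatibility first; the rest keep their order."""
    strong = sorted((m for m in means if scores.get(m, 0) > threshold),
                    key=lambda m: -scores[m])
    weak = [m for m in means if scores.get(m, 0) <= threshold]
    return strong + weak

means = conflict_free(["noise_add", "frame_blur", "addr_shuffle"])
scores = {"noise_add": 0.4, "addr_shuffle": 0.9}
sequence = order_by_compatibility(means, scores)
```

Here "frame_blur" is dropped for conflicting with "noise_add", and "addr_shuffle" is promoted ahead of "noise_add" by its higher compatibility score, mirroring the reordering described in claim 6.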

Description

Intelligent blocking method for video screen capturing behavior based on AI multi-mode characteristics

Technical Field

The invention relates to the field of information technology, and in particular to an intelligent blocking method for video screen capturing behavior based on AI multi-mode characteristics.

Background

In the field of digital content protection, securing the video rendering process is essential. As screen capturing behavior grows increasingly sophisticated, protecting video data from illegal acquisition has become a core concern of content creators and platforms. Research in this area bears not only on user privacy and copyright protection but also directly affects the sustainable development of the digital media industry. Current technical means, however, often struggle to cope with a rapidly changing threat environment, and a more adaptive protection mechanism is needed to fill the gap. Existing methods frequently lack flexible response capabilities when facing the dynamic countermeasures of screen capture tools. Many schemes rely too heavily on preset rules or fixed protection strategies defined at design time and cannot adjust in real time to the strength and characteristics of the actual threat. As a result, when confronted with new or modified capture tools, such safeguards often fail to respond effectively in time, and the video content is captured in its entirety at a critical moment. A further technical difficulty is achieving dynamic balancing of the intervention force during protection. Too weak an intervention may fail to block the capturing behavior, while too strong an intervention may degrade the viewing experience of normal users and even cause excessive consumption of system resources.
More complicated still, adjustment of the intervention force must track how quickly the threat behavior adapts; if the adversary's response cannot be perceived in time and the strategy adjusted accordingly, protection may fail. For example, suppose the system detects during video playback that a suspicious process is trying to access video memory. If the intervention stays at a simple buffer refresh and the protection level is never raised in response to the frequency and pattern of the adversary's continued attempts, the complete video frames may ultimately be extracted. Therefore, how to dynamically adjust the intervention force according to the real-time variation of the threat once screen capturing behavior is detected, and how to find the best balance between resource consumption and protection effect, are the key problems to be solved. Solving them requires not only technical innovation but also verification of feasibility and stability in actual business scenarios.
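The escalation dilemma described above can be sketched as a simple policy that maps the frequency of recent suspicious attempts to a rising intervention level, instead of staying at a fixed buffer refresh. The level names, window length, and escalation rule are illustrative assumptions only:

```python
# Hypothetical ladder of intervention levels, weakest to strongest.
LEVELS = ["buffer_refresh", "noise_add", "address_shuffle", "block_process"]

def intervention_level(attempt_times, now, window=10.0, max_level=3):
    """Escalate one level per repeated attempt seen in the sliding
    window; return None when no recent attempts are observed."""
    recent = [t for t in attempt_times if now - t <= window]
    if not recent:
        return None
    return LEVELS[min(len(recent) - 1, max_level)]
```

A single probe draws only a buffer refresh, while three attempts within the window already escalate to address shuffling, and sustained probing is capped at the strongest (and most resource-costly) level.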
Disclosure of Invention

The invention provides an intelligent blocking method for video screen capturing behavior based on AI multi-mode characteristics, or a screen capturing behavior detection and intervention method based on abnormal access data, which mainly comprises the following steps: capturing abnormal access data through a bottom-layer interface of a monitoring system, and filtering noise signals with a preset threshold value to obtain a preliminary screen capturing behavior feature set; performing secondary cleaning of noise signals in the preliminary feature set, and smoothing the data with a statistical method to obtain denoised access data records; if abnormal fluctuation exists in the denoised access data records, segmenting the data by time-series analysis to determine a potential screen capturing behavior time window; obtaining corresponding behavior feature data for that time window, and classifying the feature data with a support vector machine algorithm to judge whether they belong to real screen capturing behavior; if so, extracting key behavior patterns from the feature data to obtain a specific screen capturing behavior feature set; comparing the specific feature set against historical data records to determine the sources and modes of the abnormal behavior and generate a behavior detection result; grouping the abnormal behaviors with a clustering method according to the behavior detection result to obtain an abnormal behavior classification set; and applying, according to the abnormal behavior classification set, a decision tree algorithm to classify the specific screen capturing behavior feature set, determining the current capturing path type, so that a path classification result is obtained through frame buffer reading or decoding output interception.
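The path classification step in the disclosure first tests for frame buffer reading and, failing that, for decoding output interception. A hand-rolled two-level decision rule can stand in for the trained decision tree; the feature names and the split threshold below are illustrative assumptions, not features from the patent:

```python
def classify_capture_path(features):
    """Two-level stand-in for the decision tree described in the
    disclosure: first split on frame-buffer read activity, then on
    the presence of a decode-output hook."""
    if features.get("fb_read_rate", 0.0) > 0.5:   # first split
        return "frame_buffer_read"
    if features.get("decode_hook", False):         # second split
        return "decode_output_intercept"
    return "unknown"
```

In practice the patent's method would fit such splits from the specific screen capturing behavior feature set; this sketch only shows the branching structure the disclosure describes.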