CN-121997102-A - False information detection method, device and storage medium based on multi-agent view aggregation
Abstract
The invention relates to the technical field of information content security and intelligent detection, and discloses a false information detection method, device and storage medium based on multi-agent view aggregation. The method comprises: obtaining one or more network information instances and constructing a multi-view feature set for each instance; and constructing a multi-agent system comprising a plurality of audit agents, a plurality of coordination agents and a decision agent, wherein each audit agent analyzes the multi-view feature set from its preset view to obtain an audit decision for that view, each coordination agent performs view-aware aggregation of the audit decisions for its preset view to generate an intermediate coordination decision, and the decision agent generates a final decision result from the intermediate coordination decisions.
Inventors
- WANG ZONGWEI
- JIANG FENG
- GAO MIN
- BAI YIBING
- FU CUN
- ZHOU WEI
- WEN JUNHAO
Assignees
- Chongqing University (重庆大学)
Dates
- Publication Date
- 2026-05-08
- Application Date
- 2026-02-02
Claims (10)
- 1. A false information detection method based on multi-agent view aggregation, characterized by comprising the following steps: S1, acquiring one or more network information instances and constructing a multi-view feature set for each network information instance; S2, constructing a multi-agent system comprising a plurality of audit agents, a plurality of coordination agents and a decision agent, wherein each audit agent analyzes the multi-view feature set from a preset view to obtain an audit decision for that view, each coordination agent performs view-aware aggregation of the audit decisions for its preset view to generate an intermediate coordination decision, and the decision agent generates a decision result from the intermediate coordination decisions; S3, training the multi-agent system on the multi-view feature sets of the one or more network information instances; S4, evaluating an information instance to be tested with the trained multi-agent system to obtain a decision result.
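The S1-S4 flow of claim 1 can be sketched as a three-layer pipeline. This is a minimal illustrative mock, not the patented implementation: the view names ("text", "style"), the toy keyword agents, and the majority-vote aggregation are all hypothetical stand-ins for the LLM-driven agents described in the claims.

```python
# Hypothetical sketch of the claim-1 pipeline; all names and toy agents
# are illustrative, not taken from the patent.

def build_multi_view_features(instance):
    # S1: decompose one network information instance into per-view features
    return {"text": instance.get("text", ""), "style": instance.get("style", "")}

def detect(instance, audit_agents, coordination_agents, decision_agent):
    features = build_multi_view_features(instance)                  # S1
    audits = {view: agent(features[view])                           # audit layer
              for view, agent in audit_agents.items()}
    intermediate = [c(audits) for c in coordination_agents]         # coordination layer
    return decision_agent(intermediate)                             # decision layer (S4)

# Toy per-view audit agents: each returns a "fake"/"real" verdict.
audit_agents = {
    "text": lambda f: "fake" if "miracle cure" in f else "real",
    "style": lambda f: "fake" if f == "clickbait" else "real",
}
# One coordination agent: majority vote over the audit verdicts.
coordination_agents = [
    lambda audits: max(set(audits.values()), key=list(audits.values()).count)
]
# Decision agent: majority vote over the intermediate coordination decisions.
decision_agent = lambda inter: max(set(inter), key=inter.count)
```

In the patented method each layer would be an LLM agent with its own role portrait and memories (claim 3); the point here is only the dataflow from per-view audit decisions through view-aware aggregation to the final decision.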
- 2. The false information detection method according to claim 1, wherein step S1 further comprises decomposing the multi-view feature set of each network information instance according to the preset views to obtain feature subsets, the feature subsets covering complementary cue dimensions, and each audit agent generating the audit decision of its preset view from the corresponding feature subset.
- 3. The false information detection method according to claim 1, further comprising initializing the multi-agent system in step S2: an audit role portrait, an audit experience memory and an audit action memory are set for each audit agent, wherein the audit role portrait constrains the preset view of the agent's input and the format of its audit decision output, the audit experience memory stores the audit agent's identification rules, and the audit action memory records the audit agent's historical audit decisions; a coordination role portrait, a coordination confidence memory and a coordination action memory are set for each coordination agent, wherein the coordination role portrait stores the coordination agent's role attributes and behavior characteristics, the coordination confidence memory stores the coordination agent's empirical confidence information, and the coordination action memory records the coordination agent's historical intermediate coordination decisions; a decision role portrait, a decision experience memory, a decision confidence memory and a decision action memory are set for the decision agent, wherein the decision role portrait stores the decision agent's role attributes and decision characteristics, the decision experience memory stores the decision agent's experience information, the decision confidence memory stores the decision agent's confidence information, and the decision action memory records the decision agent's historical decision results.
- 4. The false information detection method according to claim 3, further comprising adaptively optimizing the multi-agent system in step S3: A1, calculating a composite score for each audit agent and pruning audit agents according to the composite score; A2, updating any audit agent, coordination agent or decision agent that produced an erroneous decision.
- 5. The false information detection method according to claim 4, wherein in step A1 a composite score is calculated for each audit agent, and an audit agent is removed if its composite score is smaller than a score threshold, the composite score being calculated as: S_i = Acc - λ · (1 / |A_{-i}|) · Σ_{j ∈ A_{-i}} cos(p_i, p_j), where S_i is the composite score of the i-th audit agent, Acc is the accuracy of the current multi-agent architecture, A is the set of audit agents subordinate to the coordination agent, λ is the penalty coefficient, cos(p_i, p_j) is the cosine similarity between the prediction vector p_i of the i-th audit agent and the prediction vector p_j of the j-th audit agent, A_{-i} is the set of the coordination agent's subordinate audit agents with the i-th audit agent removed, and |A_{-i}| is the number of audit agents in that set.
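The composite score of claim 5 rewards accuracy and penalizes redundancy among audit agents. A minimal sketch under the stated reading (function and parameter names are illustrative; the original symbols were not preserved in this text):

```python
# Hypothetical sketch of the claim-5 composite score:
# architecture accuracy minus a penalty proportional to the mean cosine
# similarity between agent i's prediction vector and the other agents'.
import math

def composite_score(i, preds, accuracy, penalty):
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.hypot(*a) * math.hypot(*b))  # cosine similarity
    others = [j for j in range(len(preds)) if j != i]
    redundancy = sum(cos(preds[i], preds[j]) for j in others) / len(others)
    return accuracy - penalty * redundancy
```

An agent whose predictions duplicate its peers' scores lower and becomes a pruning candidate, which is how the formula preserves view diversity.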
- 6. The false information detection method of claim 4, wherein in step A2 the audit experience memory is updated as: M_a ← Φ(x, P_a, M_a, H_a, d_a, r_a, y), where M_a on the left is the updated audit experience memory, x is the multi-view feature currently under detection, ← is the assignment operation, P_a is the audit role portrait, M_a on the right is the audit experience memory before the update, H_a is the audit action memory, d_a is the audit decision made by the audit agent in the current inference round, r_a is the abstract of the reasoning process information and decision basis related to that audit decision, y is the final supervision label, and Φ is the experience self-reflection update operator; the decision experience memory is updated as: M_d ← Φ(P_d, M_d, H_d, d_t, r_t), where M_d on the left is the updated decision experience memory, P_d is the decision role portrait, M_d on the right is the decision experience memory before the update, H_d is the decision action memory, d_t is the decision result formed by the decision agent in the current inference round, and r_t is the abstract of the reasoning process information and decision basis related to the decision result; the coordination confidence memory is updated as: C_c ← Ψ(C_c, H_c, P_c, d_c, r_c), where C_c on the left is the updated coordination confidence memory, Ψ is the confidence self-reflection update operator, C_c on the right is the coordination confidence memory before the update, H_c is the coordination action memory, P_c is the coordination role portrait, d_c is the intermediate coordination decision made by the coordination agent in the current reasoning round, and r_c is the abstract of the reasoning process information and decision basis related to that intermediate coordination decision; the decision confidence memory is updated as: C_d ← Ψ(C_d, H_c), where C_d on the left is the updated decision confidence memory, Ψ is the confidence self-reflection update operator, C_d on the right is the decision confidence memory before the update, and H_c is the coordination action memory.
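The shared shape of the claim-6 updates is a self-reflection operator that folds the current round's decision, its rationale, and the supervision label back into memory. A minimal sketch of one such update (the function and its rule format are illustrative; in the patent the operator is realized by an LLM agent, not string formatting):

```python
# Hypothetical sketch of an experience self-reflection update (claim 6, step A2):
# when the round's decision contradicts the final supervision label, distil a
# corrective rule into the experience memory; otherwise leave it unchanged.

def reflect_update(memory, role_portrait, decision, rationale, true_label):
    if decision != true_label:
        rule = (f"[{role_portrait}] predicted '{decision}' but the label was "
                f"'{true_label}'; basis at fault: {rationale}")
        return memory + [rule]   # append a corrective identification rule
    return memory                # correct decisions trigger no update
```

The confidence-memory updates follow the same pattern but adjust per-agent confidence weights instead of appending rules.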
- 7. The false information detection method according to claim 1, wherein in step S4 confidence-guided routing is performed when evaluating the information instance to be tested, the decision agent selecting coordination agents for evaluation according to their confidence weights: when more than one audit agent and more than one coordination agent activated by the confidence-guided routing reach a consistent conclusion, and the margin of the consistent conclusion reaches a margin threshold, the decision result is output and reasoning terminates; otherwise, the remaining audit agents and coordination agents are activated step by step in descending order of confidence, and the current decision and margin are recalculated until the margin reaches the margin threshold or the number of activated audit agents and coordination agents reaches a preset maximum, after which the decision result is output.
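The early-exit loop of claim 7 can be sketched as follows. This is a simplified mock (names, the confidence-weighted vote, and the margin formula are assumptions; the patent computes the decision via the operator of claim 8):

```python
# Hypothetical sketch of confidence-guided routing (claim 7): agents are
# (confidence, verdict) pairs, activated in descending confidence order;
# reasoning stops early once at least two activated agents yield a
# sufficient vote margin, or the activation budget is exhausted.

def confidence_routed_verdict(agents, margin_threshold, max_active):
    ranked = sorted(agents, key=lambda a: a[0], reverse=True)
    votes = {"fake": 0.0, "real": 0.0}
    margin = 0.0
    for k, (conf, verdict) in enumerate(ranked, start=1):
        votes[verdict] += conf                              # confidence-weighted vote
        total = votes["fake"] + votes["real"]
        margin = abs(votes["fake"] - votes["real"]) / total # normalized margin
        if (k >= 2 and margin >= margin_threshold) or k >= max_active:
            break                                           # early exit / stop condition
    label = "fake" if votes["fake"] >= votes["real"] else "real"
    return label, margin
```

When the two most confident agents already agree decisively, the remaining agents are never activated, which is the efficiency gain the claim targets.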
- 8. The false information detection method of claim 7, wherein the confidence-guided routing is performed according to: (ŷ, r) = Θ({(d_k, r_k) | k ∈ K}, P_d, M_d, C_d, w, τ), where ŷ is the decision result, r is the decision reason associated with the decision result, d_k is the detection result of agent k, r_k is the detection reason of agent k, K is the set of currently activated agents, Θ is the inference operator that takes the confidence weights into account, P_d is the decision role portrait, M_d is the decision experience memory, C_d is the decision confidence memory, w denotes the confidence weights in the coordination confidence memories, and τ is the margin threshold.
- 9. A false information detection device based on multi-agent view aggregation, comprising a memory, a processor and a computer program stored on the memory, characterized in that the processor, when executing the computer program, implements the steps of the false information detection method based on multi-agent view aggregation according to any one of claims 1-8.
- 10. A computer readable storage medium on which a computer program is stored, characterized in that the false information detection method based on multi-agent view aggregation according to any one of claims 1-8 is implemented when the computer program is executed by one or more processors.
Description
False information detection method, device and storage medium based on multi-agent view aggregation
Technical Field
The invention relates to the technical field of information content security and intelligent detection, in particular to a false information detection method, device and storage medium based on multi-agent view aggregation.
Background
With the rapid development of social media and online content platforms, user-generated content has become huge in scale, fast-spreading and diverse in form. At the same time, false information is generated and spread on these platforms in many forms, such as manipulation by anomalous or fake accounts, misleading merchandise reviews, and forged or tampered news content. False information not only misleads public cognition but also undermines the content ecosystem and the basis of trust on a platform, causing significant harm in scenarios such as e-commerce transactions, public events and opinion propagation. To suppress the spread of false information, platforms initially relied mainly on manual auditing for screening and disposal. However, as the volume of content grows exponentially, manual auditing alone can no longer meet both real-time and coverage requirements. The academic and industrial communities have therefore proposed a number of automatic detection methods based on statistical learning and deep learning to identify false information and improve auditing efficiency. However, existing deep learning models often suffer from opaque decision processes and insufficient interpretability, so a large number of samples still need to be checked manually in high-risk scenarios, increasing the auditing burden and bringing risks of harmful content being retained or missed.
In recent years, large language models, with their strong context understanding and reasoning capabilities, have gradually been introduced into content auditing and false information detection pipelines to output more interpretable decision bases. On this basis, multi-agent systems driven by large language models are considered capable of improving performance on complex tasks through collaborative reasoning and collective intelligence, and multi-agent schemes can formally aggregate multiple views, providing a new technical path for false information detection. Existing false information detection methods fall mainly into three categories. Content-based detection models perform binary classification using content features such as text semantics, emotional tendency and writing style; they depend on large amounts of labeled data and are limited in their ability to retain, amplify and explain fine-grained anomalous cues. Detection models based on social context or interaction structure rely on structural signals such as comments, reposts, user relations and propagation networks; they can improve performance in some scenarios but introduce additional data acquisition and modeling costs, and their effectiveness is unstable when interaction information is missing or under cold-start conditions. Single- or multi-agent detection models based on large language models have advantages in reasoning and interpretation, but in false information detection they are easily submerged by dominant signals and limited by viewpoint homogenization and high coordination costs, and there is still no systematic scheme that simultaneously addresses anomalous cue amplification, viewpoint diversity preservation, structured aggregation and adaptive efficiency optimization.
More specifically, the technical problems of existing large language model multi-agent systems for false information detection include: (1) Information flooding. False information cues are typically sparse, fragmented and weak, while genuine information patterns are richer and dominant. When each agent in an existing multi-agent design accesses the full input context in pursuit of comprehensive analysis, the agents are easily pulled toward the dominant normal signals and output overly benign judgments, ignoring subtle but critical anomalous evidence. Furthermore, information exchange between agents may mutually amplify judgment bias, so that scarce but high-risk anomalous evidence is overwhelmed in the interaction, ultimately leaving false information even more deeply masked. (2) Lack of a multi-agent structure combining view diversity with structured aggregation. False information is multifaceted and strongly context-dependent, and a single view can hardly capture its manifestations stably. Existing multi-agent topologies focus on the communication structure (such as full connection, layering or sequential mutual evaluation), but the mechanism of 'diff