CN-122027358-A - Trust management device and method based on security fuse
Abstract
The application discloses a security fuse-based trust management device and method. The device comprises a machine learning trust manager agent, a deterministic trust manager agent, and a protection manager. By monitoring the running state of the machine learning trust manager agent in real time, the protection manager can dynamically and seamlessly switch the system's trust evaluation task to the deterministic trust manager agent when an abnormality occurs or an external alarm is received. This solves the problem that traditional machine learning trust management mechanisms lack dynamic safety protection at runtime, realizes real-time detection and automatic fault isolation of abnormal model behavior, and significantly improves the reliability, security, and rapid recovery capability of network trust management.
Inventors
- ZHANG NANXIN
- WU HUAIGU
- ZHANG ZIJIAN
Assignees
- 天府绛溪实验室
Dates
- Publication Date
- 2026-05-12
- Application Date
- 2026-04-10
Claims (10)
- 1. A security fuse-based trust management apparatus disposed in a software defined network controller, comprising: a machine learning trust manager agent for performing a network trust evaluation based on a machine learning algorithm; a deterministic trust manager agent for performing a network trust evaluation based on a non-machine learning algorithm; and a protection manager, respectively connected with the machine learning trust manager agent and the deterministic trust manager agent, for monitoring the running state of the machine learning trust manager agent to acquire evaluation indexes, and for dynamically switching the trust evaluation task from the machine learning trust manager agent to the deterministic trust manager agent when the machine learning trust manager agent is determined to be abnormal based on the evaluation indexes or an external alarm.
- 2. The trust management apparatus of claim 1, wherein the protection manager comprises: a trust controller agent for monitoring the running state of the machine learning trust manager agent, collecting the evaluation indexes, and generating an internal alarm when an abnormality is identified; and a selector agent for receiving the internal alarm or the external alarm and performing activation or deactivation operations on the machine learning trust manager agent and the deterministic trust manager agent.
- 3. The trust management apparatus of claim 1, wherein the evaluation indexes comprise: a detection error rate for evaluating trust decision accuracy, a decision delay for evaluating real-time performance, a client fairness for evaluating resource allocation fairness, and a service level agreement compliance for evaluating whether a client violates its subscribed service level agreement.
- 4. A trust management apparatus as claimed in claim 3, wherein the trust controller agent is further configured to: calculate a decision score of the machine learning trust manager agent by weighted summation over the evaluation indexes, S = Σ_{i=1}^{n} w_i · m_i(c_i), where m_i is the i-th evaluation index, w_i is its weight coefficient, n is the number of evaluation indexes, and c_i is the evaluation context of the i-th evaluation instance; and generate an internal alarm when the decision score falls below a preset threshold.
- 5. The trust management apparatus of claim 2, wherein the selector agent performing activation or deactivation operations on the machine learning trust manager agent and the deterministic trust manager agent comprises: when the machine learning trust manager agent exhibits abuse or a performance fault, the selector agent deactivates the machine learning trust manager agent and activates the deterministic trust manager agent, wherein the performance fault comprises a decision delay exceeding a threshold, an algorithm crash or running interruption, or a client fairness deviation exceeding a preset range; and when the abnormal state of the machine learning trust manager agent is cleared, the selector agent reactivates the machine learning trust manager agent and deactivates the deterministic trust manager agent.
- 6. The trust management apparatus of claim 1, further comprising a hardware support architecture comprising: a processor for performing the functions of the machine learning trust manager agent, the deterministic trust manager agent, and the protection manager; a memory connected with the processor for storing machine learning model parameters, a deterministic rule base, and the evaluation indexes; and a communication interface connected with the processor for data interaction with network devices.
- 7. A trust management apparatus as claimed in claim 6, wherein the memory comprises a first memory partition for storing machine learning model parameters and a second memory partition for storing a deterministic rule base.
- 8. A security fuse-based trust management method, applied to the trust management apparatus of any one of claims 1-7, comprising: monitoring the running state of the machine learning trust manager agent through the protection manager and acquiring evaluation indexes; judging whether the machine learning trust manager agent is abnormal based on the evaluation indexes or a received external alarm; and, when it is determined that an abnormality has occurred, dynamically switching the trust evaluation task from the machine learning trust manager agent to the deterministic trust manager agent through the protection manager.
- 9. A trust management method as described in claim 8, wherein the step of judging whether the machine learning trust manager agent is abnormal comprises: analyzing the evaluation indexes, and determining that an abnormality has occurred when an evaluation index exceeds its corresponding threshold.
- 10. A trust management method as defined in claim 8, further comprising: after switching to the deterministic trust manager agent, continuously monitoring the state of the machine learning trust manager agent; and switching the trust evaluation task back to the machine learning trust manager agent after confirming that the abnormal state is cleared and operation is stable.
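The fail-over and recovery behavior described in claims 1, 2, 5, and 10 can be sketched as a small state machine. This is an illustrative sketch only, not the patented implementation: the class and field names, the specific threshold values, and the use of a single `step` method combining the trust controller agent and selector agent roles are all assumptions made for clarity.

```python
from dataclasses import dataclass

@dataclass
class Metrics:
    """Evaluation indexes named in claim 3 (field names are illustrative)."""
    detection_error_rate: float   # trust decision accuracy
    decision_delay_ms: float      # real-time performance
    fairness_deviation: float     # resource allocation fairness
    sla_violation: bool           # service level agreement compliance

class SecurityFuse:
    """Protection manager sketch: monitors the ML trust manager agent and
    switches the trust evaluation task to the deterministic agent on an
    abnormality or external alarm. Thresholds are hypothetical."""

    def __init__(self, delay_threshold_ms=100.0, fairness_threshold=0.2):
        self.delay_threshold_ms = delay_threshold_ms
        self.fairness_threshold = fairness_threshold
        self.active = "ml"  # start with the machine learning trust manager agent

    def is_abnormal(self, m: Metrics, external_alarm: bool = False) -> bool:
        # Trust controller agent role: flag a performance fault from the
        # evaluation indexes; an external alarm also counts as abnormal.
        return (external_alarm
                or m.decision_delay_ms > self.delay_threshold_ms
                or m.fairness_deviation > self.fairness_threshold
                or m.sla_violation)

    def step(self, m: Metrics, external_alarm: bool = False) -> str:
        # Selector agent role: deactivate/reactivate the two agents.
        if self.active == "ml" and self.is_abnormal(m, external_alarm):
            self.active = "deterministic"   # fail over (claim 5)
        elif self.active == "deterministic" and not self.is_abnormal(m):
            self.active = "ml"              # recover once cleared (claim 10)
        return self.active
```

For example, a decision delay above the hypothetical 100 ms threshold switches the task to the deterministic agent, and a subsequent normal reading switches it back.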
Description
Trust management device and method based on security fuse

Technical Field

The application relates to the technical field of the Internet of Things, and in particular to a trust management device and method based on a security fuse.

Background

In recent years, machine learning systems have been widely deployed in various Internet Protocol (IP) network environments, covering important fields such as the Internet of Things (IoT), the Internet of Vehicles, the industrial Internet, and edge computing. These systems typically rely on trained models to classify, predict, or control sensory data, network events, or user behavior, thereby enhancing the automation level and intelligence of the system. However, as machine learning is applied ever more deeply to critical tasks, the problem of trust assurance at runtime has become increasingly prominent. Existing machine learning trust mechanisms mostly adopt a static design, lack the capability to dynamically monitor the model's running state in an actual deployment environment and act on that feedback, and therefore struggle to identify and respond to abnormal model behavior at runtime in a timely manner. Current technical means mainly comprise the following: first, attempting to ensure that the data input to a model has a certain reliability by presetting an input data whitelist, implementing access control, or setting static rules; second, introducing a robust optimization strategy in the model training stage to enhance the model's resistance to adversarial samples; and third, scheduling models or collecting and analyzing abnormal logs in a unified way by means of a centralized control platform. However, these methods have significant limitations. First, existing solutions have difficulty achieving real-time detection of anomalies at model runtime.
Once the model degrades due to environmental disturbance, drift in the input data distribution, or an attack such as data poisoning, the system cannot discover this in time, the abnormal model keeps running, and cascading erroneous decisions result. Second, such systems lack awareness of the system context and network environment and cannot dynamically adjust the credibility evaluation of the model according to its real-time running state, so the trust evaluation becomes disconnected from the concrete running scenario. Third, once a model is determined to be untrustworthy, the system generally lacks an effective automatic coping mechanism; for example, the output of the untrusted model may still be passed on to downstream components, resulting in error propagation that affects overall system safety. Fourth, relying on a centralized architecture for trust management easily causes scalability bottlenecks and single-point-of-failure risks, and is ill suited to distributed, highly dynamic network environments. Finally, existing mechanisms generally cannot construct a complete and traceable evidence chain of abnormal model behavior, which increases the difficulty of operation-and-maintenance investigation and places greater pressure on the system during compliance audits. In summary, existing trust management methods have significant shortcomings in real-time performance, self-adaptability, and autonomous response capability, and have difficulty meeting the strict requirements of modern distributed machine learning systems for runtime safety, autonomous model isolation, and rapid fault recovery.
Disclosure of Invention

The application aims to overcome the defects of the prior art and provides a trust management device and method based on a security fuse. By introducing a dynamic collaborative architecture consisting of a machine learning trust manager agent, a deterministic trust manager agent, and a protection manager, it solves the problem of machine learning model trust failure caused by the lack of a runtime monitoring and quick-response mechanism in the prior art. The aim of the application is achieved by the following technical scheme. In a first aspect, the application proposes a security fuse-based trust management apparatus, the apparatus being provided in a software defined network controller and comprising: a machine learning trust manager agent for performing a network trust evaluation based on a machine learning algorithm; a deterministic trust manager agent for performing a network trust evaluation based on a non-machine learning algorithm; and a protection manager, respectively connected with the machine learning trust manager agent and the deterministic trust manager agent, for monitoring the running state of the machine learning trust manager agent to acquire evaluation indexes, and for dynamically switching the trust evaluation task from the machine learning trust manager agent to the deterministic trust manager agent when the machine learning trust manager agent is determined to be abnormal based on the evaluation indexes or an external alarm.
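The weighted-sum decision score of claim 4 can be sketched as follows. This is a minimal illustration, assuming the evaluation indexes m_i have been normalized to comparable scales; the function names, the normalization assumption, and the 0.6 alarm threshold are hypothetical, not taken from the patent.

```python
def decision_score(indexes, weights):
    """Weighted sum S = sum_i w_i * m_i over the evaluation indexes
    (claim 4). indexes holds the normalized index values m_i and
    weights the corresponding coefficients w_i."""
    assert len(indexes) == len(weights)
    return sum(w * m for w, m in zip(weights, indexes))

def internal_alarm(indexes, weights, threshold=0.6):
    """Generate an internal alarm when the decision score falls below
    a preset threshold (the 0.6 default is illustrative only)."""
    return decision_score(indexes, weights) < threshold
```

With weights (0.6, 0.4) and index values (1.0, 0.5), for instance, the score is 0.6·1.0 + 0.4·0.5 = 0.8, which is above the illustrative threshold, so no internal alarm is raised.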