
CN-122019716-A - Tensor-guided rule learning-based interpretable knowledge graph reasoning method and device


Abstract

The invention provides an interpretable knowledge graph reasoning method and device based on tensor-guided rule learning, in the technical field of knowledge graph reasoning. The method comprises: obtaining an input knowledge graph and representing it with triples; extracting chain rules from the knowledge graph and screening activation rules; introducing a tensor Tucker operator to jointly model the triples and rule chains and compute a knowledge graph embedding score; training with a staged joint training strategy that first generates chain-level interpretation factors through intra-chain rule information aggregation, and then outputs rule scores through inter-chain rule information aggregation based on those factors; and weighting and fusing the knowledge graph embedding score and the rule score into a comprehensive score from which the reasoning result is judged. By aggregating intra-chain and inter-chain rule information through tensor operations, the invention generates traceable interpretation factors and enhances the interpretability and robustness of the model while improving reasoning precision.

Inventors

  • LIU JIANWU

Assignees

  • Putian University (莆田学院)

Dates

Publication Date
2026-05-12
Application Date
2026-01-23

Claims (9)

  1. An interpretable knowledge graph reasoning method based on tensor-guided rule learning, characterized by comprising the following steps: S1, acquiring an input knowledge graph and representing it with triples comprising entities and relations; S2, extracting chain rules from the knowledge graph to generate a rule chain set, and screening out activation rules with non-zero support through instance verification to obtain an activation rule subset; S3, introducing a tensor Tucker operator, jointly modeling the triples and the rule chains in the activation rule subset, and calculating a knowledge graph embedding score by fusing the embedding representations of entities, relations and rule chains; S4, training with a staged joint training strategy based on the activation rule subset, namely first mining the multi-hop semantic dependencies inside a rule chain with the tensor Tucker operator to generate a chain-level interpretation factor, and then, based on the chain-level interpretation factor, capturing the interactions among different rules with a deep neural network through inter-chain rule information aggregation to output a rule score; and S5, weighting and fusing the knowledge graph embedding score and the rule score to obtain a comprehensive score, and judging the result of knowledge graph reasoning from it.
  2. The interpretable knowledge graph reasoning method based on tensor-guided rule learning according to claim 1, wherein S2 is specifically: when extracting chain rules from the knowledge graph, all candidate rule chains conforming to logic rules are induced from the triples through an expectation-maximization algorithm to form a rule chain set; the logic rules adopt a first-order logic representation and conform in form to a rule-body/rule-head structure; the expectation-maximization algorithm screens out rules with non-zero support by calculating the support of each candidate rule chain, retaining high-quality logic rules until rule quality converges, to obtain the rule chain set; when screening activation rules, the entity variables of the rule chain set are replaced with real entities in the knowledge graph to generate rule instances; based on a rule instance, it is checked whether the instance's rule-body triples exist in the knowledge graph; if so, the rule chain is an activation rule, yielding the activation rule subset.
  3. The interpretable knowledge graph reasoning method based on tensor-guided rule learning according to claim 2, wherein S3 is specifically: first, based on the triples, entities and relations are mapped into a polar coordinate space and divided into a modal feature and a phase feature, the modal feature representing entities at different hierarchy levels; then, based on the activation rule subset, a tensor Tucker operator is introduced to interactively fuse the feature embeddings of entities and relations with the activation rule chain embeddings, performing joint modeling of triples and rule chains; then the entity-relation distance is calculated for each of the two feature types, modal and phase, as follows: d_m(h,t) = ||h_m ∘ r_m − t_m||_2; d_p(h,t) = ||h_p + r_p − t_p||_1; where d_m(h,t) is the entity-relation distance of the modal feature; h_m, r_m and t_m are the head entity, relation and tail entity embeddings of the modal feature respectively; ∘ is the Hadamard product; ||·||_2 is the L2 norm; d_p(h,t) is the entity-relation distance of the phase feature; h_p, r_p and t_p are the head entity, relation and tail entity embeddings of the phase feature respectively; ||·||_1 is the L1 norm; the entity-relation distances of the two feature types are fused to construct a triple comprehensive distance: d(h,t) = d_m(h,t) + λ·d_p(h,t); where d(h,t) is the triple comprehensive distance and λ is a learnable weight parameter; and a triple score is calculated from the triple comprehensive distance: s(h,r,t) = γ − d(h,t); where s(h,r,t) is the triple score, i.e. the knowledge graph embedding score, and γ is a triple boundary parameter used to constrain the score to a reasonable range.
  4. The interpretable knowledge graph reasoning method based on tensor-guided rule learning according to claim 3, wherein introducing the tensor Tucker operator to interactively fuse the feature embeddings of entities and relations with the activation rule chain embeddings, performing joint modeling of triples and rule chains, is specifically: based on the activation rule subset, the embedding c_i of the i-th activation rule chain is spliced with the relation embeddings r_{i,1}, …, r_{i,m} of its rule body to obtain a feature sequence, which is input into an RNN to generate a predicted rule head, and the difference between the predicted rule head and the actual rule head is constrained by a distance function: D_i = ||RNN(c_i, r_{i,1}, …, r_{i,m}) − r_i^h||; where D_i is the distance function; c_i is the embedded representation of the i-th activation rule chain; r_{i,j} is the embedding of the j-th relation in the rule body of the i-th activation rule chain; r_i^h is the embedding of the actual rule-head relation of the i-th activation rule chain; RNN is a recurrent neural network used to capture the sequential dependence of the relations in the rule body; ||·|| is the vector norm; based on the distance function, a score function of the rule chain is calculated to quantify its semantic consistency: s(c_i) = γ_r − D_i; where s(c_i) is the score of activation rule chain c_i, higher values representing more reliable rule-chain logic, and γ_r is a rule boundary parameter used to constrain the rule chain score range; a joint loss function is constructed to constrain the embedding representations of triples and rule chains: L = L_t + β·L_r; L_r = −Σ_{c_i} log σ(s(c_i)) − Σ_{c_i′∈N_r} log σ(−s(c_i′)); where L is the joint loss function; L_t is the triple loss constructed from the knowledge graph embedding scores over the triple set; the rule chain loss L_r is taken over the activation rule subset; β is a balance weight; σ is the sigmoid activation function; N_r is the rule chain negative sample set; c_i′ is a rule chain negative sample, obtained by replacing the rule-head relation embedding corresponding to the rule chain.
  5. The interpretable knowledge graph reasoning method based on tensor-guided rule learning according to claim 3, wherein mining the multi-hop semantic dependencies inside a rule chain through intra-chain rule information aggregation with the tensor Tucker operator to generate a chain-level interpretation factor is specifically: the embedding matrices of the triple are extracted and, within an inverse tensor Tucker decomposition framework, a forward tensor mode-product operation is performed between the embedding matrices and a core tensor to generate a single-hop rule interpretation factor: φ_{c_i}(h,r,t) = W ×₁ E_h ×₂ E_r ×₃ E_t; where φ_{c_i}(h,r,t) denotes the triple score under the logical constraint of activation rule chain c_i, i.e. the single-hop rule interpretation factor; ×ₙ is the tensor mode product (the tensor Tucker operator); E_h, E_r and E_t are the head entity, relation and tail entity embedding matrices respectively; W is the core tensor; a low-rank core tensor is adopted to express the parameter tensor, compressing the parameter scale while retaining the core semantic information, to obtain a compact single-hop interpretation factor: W ≈ Σ_{k=1}^{R} a_k ⊗ b_k ⊗ q_k; where R is the low-rank approximation hyperparameter; a_k, b_k and q_k are low-rank decomposition vectors; ⊗ is the tensor product; substituting the decomposition together with the rule-head relation embedding r_{c_i} of activation rule chain c_i yields the compact single-hop interpretation factor φ̂_{c_i}(h,r,t), with T denoting transposition; based on the compact single-hop interpretation factors, the slice information of each single-hop relation is extracted, the knowledge of the different paths of the activation rule chain is aggregated, and the chain-level interpretation factor of the activation rule chain is obtained after Softmax normalization: α_i = Softmax(Π_{j=1}^{|c_i|} φ̂_{c_i}(h_j, r_j, t_j)); where α_i is the chain-level interpretation factor of the i-th activation rule chain; Softmax is the normalized exponential function; Π is the product operator; |c_i| is the length of the i-th activation rule chain c_i; h_j is a head entity, t_j a tail entity and r_j a relation; φ̂_{c_i}(h_j, r_j, t_j) denotes the triple score of each single-hop relation in activation rule chain c_i, i.e. φ̂ in slice form.
  6. The interpretable knowledge graph reasoning method based on tensor-guided rule learning according to claim 5, wherein capturing the interactions among different rules through inter-chain rule information aggregation with a deep neural network and outputting a rule score is specifically: using a breadth-first search strategy, all feasible paths in the knowledge graph matching the rule-head relations of the rule chains in the activation rule subset are traversed starting from the head entity, the candidate tail entities corresponding to valid paths are screened out, and a candidate tail entity set is generated; for each candidate tail entity in the set, the corresponding chain-level interpretation factors and embedded features are aggregated, and a multi-layer perceptron calculates the rule score of the candidate tail entity with respect to the activation rule chains: s_r(t′) = (1/N) Σ_{i=1}^{N} MLP(LayerNorm([α_i ; e_h ; r_{c_i} ; e_{t′} ; e_c])); where s_r(t′) is the rule score of candidate tail entity t′ with respect to the i-th activation rule chain; h is the head entity; r_{c_i} is the rule-head relation embedding of activation rule chain c_i; t′ is the candidate tail entity; MLP is a multi-layer perceptron; LayerNorm is layer normalization; N is the total number of activation rule chains participating in matching the current candidate entity; α_i is the chain-level interpretation factor of the i-th activation rule chain; e_h is the semantic embedding vector of h; e_{t′} is the semantic embedding vector of t′; e_c is the global semantic embedding vector of the rule chain; [ ; ] denotes feature concatenation.
  7. The interpretable knowledge graph reasoning method based on tensor-guided rule learning according to claim 6, wherein the formula of the comprehensive score is: s(t′) = α·s_e(t′) + (1 − α)·s_r(t′); where s(t′) is the comprehensive score; t′ is a candidate tail entity; s_e(t′) is the knowledge graph embedding score; s_r(t′) is the rule score; α is a scoring weight.
  8. The interpretable knowledge graph reasoning method based on tensor-guided rule learning according to claim 7, wherein, when judging the result of knowledge graph reasoning from the comprehensive score: first, the comprehensive score is converted into the true probability of each candidate entity through a softmax function: p(t′) = Softmax(s(t′)); where p(t′) is the true probability of candidate tail entity t′ and Softmax is the normalized exponential function; then the candidate tail entity with the highest probability is selected as the reasoning result, completing the knowledge graph link-prediction reasoning task.
  9. An interpretable knowledge graph reasoning apparatus based on tensor-guided rule learning, implementing the interpretable knowledge graph reasoning method based on tensor-guided rule learning of any one of claims 1-8, comprising: an acquisition unit for acquiring an input knowledge graph and representing it with triples comprising entities and relations; a chain rule mining unit for extracting chain rules from the knowledge graph to generate a rule chain set, and screening out activation rules with non-zero support through instance verification to obtain an activation rule subset; a joint modeling unit for introducing a tensor Tucker operator, jointly modeling the triples and the rule chains in the activation rule subset, and calculating a knowledge graph embedding score by fusing the embedding representations of entities, relations and rule chains; a staged joint training unit for training with a staged joint training strategy based on the activation rule subset, namely first mining the multi-hop semantic dependencies inside a rule chain with the tensor Tucker operator to generate a chain-level interpretation factor, and then, based on the chain-level interpretation factor, capturing the interactions among different rules with a deep neural network through inter-chain rule information aggregation to output a rule score; and a reasoning unit for weighting and fusing the knowledge graph embedding score and the rule score to obtain a comprehensive score and judging the result of knowledge graph reasoning.
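The rule grounding and support screening of claim 2 (step S2) can be sketched compactly. The sketch below is an illustrative reading, not the patent's implementation: it assumes the knowledge graph is a set of (head, relation, tail) triples and a chain rule is a body relation sequence plus a head relation; all function names and the data layout are mine.

```python
# Illustrative sketch of claim 2's rule grounding, support counting and
# activation-rule screening over a triple-set knowledge graph.

def groundings(kg, body):
    """Entity chains (e0, ..., ek) whose consecutive hops satisfy the
    body relations, i.e. (e_j, body[j], e_{j+1}) is in the KG."""
    paths = [[h, t] for (h, r, t) in kg if r == body[0]]
    for rel in body[1:]:
        paths = [p + [t] for p in paths
                 for (h, r, t) in kg if r == rel and h == p[-1]]
    return paths

def support(kg, body, head_rel):
    """Groundings whose rule-head triple also holds (the EM-style
    non-zero-support screening of candidate rule chains)."""
    return sum((p[0], head_rel, p[-1]) in kg for p in groundings(kg, body))

def activation_subset(kg, rules):
    """Claim 2's activation test: a rule chain is activated when at least
    one instance of its rule body exists in the knowledge graph."""
    return [(body, head) for (body, head) in rules if groundings(kg, body)]
```

For example, with the body r1 → r2 and head rh, the chain a -r1-> b -r2-> c is a grounding, and the rule has support 1 if (a, rh, c) is also in the graph.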
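Claim 3's two-part polar-coordinate distance reads like a HAKE-style score: a modulus (modal) part combined with the Hadamard product and L2 norm, and a phase part with the L1 norm. The sketch below assumes that reading; the sine wrapping of the phase difference, and the values of the learnable weight `lam` and boundary `gamma`, are illustrative assumptions, not fixed by the claim.

```python
import numpy as np

def modal_distance(h_m, r_m, t_m):
    # d_m(h,t) = || h_m ∘ r_m − t_m ||_2  (Hadamard product, L2 norm)
    return np.linalg.norm(h_m * r_m - t_m, ord=2)

def phase_distance(h_p, r_p, t_p):
    # d_p(h,t) via the L1 norm of the phase mismatch; the sin((·)/2)
    # wrapping is a HAKE-style assumption (phases live on a circle).
    return np.abs(np.sin((h_p + r_p - t_p) / 2)).sum()

def triple_score(h_m, r_m, t_m, h_p, r_p, t_p, lam=0.5, gamma=6.0):
    # comprehensive distance d = d_m + lam * d_p; score s = gamma − d
    d = modal_distance(h_m, r_m, t_m) + lam * phase_distance(h_p, r_p, t_p)
    return gamma - d  # the knowledge graph embedding score
```

A triple whose modulus product matches the tail exactly and whose phases cancel attains the maximum score gamma.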
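Claim 4's rule-chain consistency check, run a recurrent network over the chain embedding and the body relation embeddings, compare the final state to the actual head-relation embedding, and convert the distance into a margin score, can be sketched with a minimal hand-rolled RNN. The weight matrices and the margin `gamma_r` here are toy stand-ins for trained parameters.

```python
import numpy as np

def rnn_encode(seq, W_h, W_x):
    """Tiny tanh RNN: captures the sequential dependence of the rule body."""
    h = np.zeros(W_h.shape[0])
    for x in seq:
        h = np.tanh(W_h @ h + W_x @ x)
    return h

def rule_chain_score(chain_emb, body_rels, head_rel, W_h, W_x, gamma_r=2.0):
    """s(c_i) = gamma_r − ||RNN(c_i, r_1, ..., r_m) − r_head||."""
    pred_head = rnn_encode([chain_emb] + body_rels, W_h, W_x)
    dist = np.linalg.norm(pred_head - head_rel)  # D_i
    return gamma_r - dist  # higher = more semantically consistent chain
```

When the predicted head coincides with the actual head-relation embedding, the distance vanishes and the chain attains the maximum score gamma_r.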
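Claim 5's single-hop factor is a TuckER-style contraction of a core tensor with head, relation and tail embeddings, and its chain-level factor aggregates the single-hop scores along a chain before Softmax normalization. The sketch below assumes that reading with vector (single-row) embeddings; taking the Softmax across candidate chains is my illustrative choice.

```python
import numpy as np

def single_hop_factor(W, h, r, t):
    # phi(h, r, t) = W ×1 h ×2 r ×3 t  (tensor mode products, TuckER-style)
    return np.einsum('ijk,i,j,k->', W, h, r, t)

def chain_level_factors(W, chains):
    """chains: list of rule chains, each a list of (h, r, t) embedding
    triples. Product of single-hop factors per chain, then Softmax."""
    raw = np.array([np.prod([single_hop_factor(W, h, r, t)
                             for (h, r, t) in chain]) for chain in chains])
    e = np.exp(raw - raw.max())          # numerically stable Softmax
    return e / e.sum()                   # chain-level interpretation factors
```

The factors are non-negative and sum to one, so they can be read directly as the relative contribution of each activation rule chain, which is what makes them traceable interpretation factors.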
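The breadth-first candidate search of claim 6 can be sketched as a plain BFS that follows one rule body's relation sequence from the head entity and collects the path end-points. This is a simplified reading covering a single rule chain; the patent traverses all chains in the activation subset and then scores candidates with the MLP.

```python
from collections import deque

def candidate_tails(kg, head, body):
    """All entities reachable from `head` along a path whose relation
    sequence matches the rule body; kg is a set of (h, r, t) triples."""
    frontier = deque([(head, 0)])        # (entity, depth into the body)
    tails = set()
    while frontier:
        e, d = frontier.popleft()
        if d == len(body):               # full body matched: valid path
            tails.add(e)
            continue
        for (h, r, t) in kg:
            if h == e and r == body[d]:  # follow the next body relation
                frontier.append((t, d + 1))
    return tails
```

Each returned entity is one candidate tail whose chain-level interpretation factors and embeddings would then be concatenated and fed to the multi-layer perceptron for a rule score.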
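Claims 7 and 8 together describe the final decision: weight-fuse the embedding and rule scores per candidate, softmax the composite scores into probabilities, and take the argmax. A minimal sketch, with `alpha` as an illustrative scoring weight:

```python
import numpy as np

def predict(candidates, emb_scores, rule_scores, alpha=0.6):
    """Composite score s = alpha * s_e + (1 − alpha) * s_r, softmax into
    probabilities, and return the highest-probability candidate tail."""
    s = alpha * np.asarray(emb_scores) + (1 - alpha) * np.asarray(rule_scores)
    p = np.exp(s - s.max())
    p = p / p.sum()                      # true probabilities per candidate
    return candidates[int(np.argmax(p))], p
```

The returned entity completes the link-prediction task; the probability vector gives the confidence over the whole candidate set.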

Description

Tensor-guided rule learning-based interpretable knowledge graph reasoning method and device

Technical Field

The invention relates to the technical field of knowledge graph reasoning and artificial intelligence, in particular to an interpretable knowledge graph reasoning method and device based on tensor-guided rule learning.

Background

As a core technology of structured knowledge representation, the knowledge graph, by virtue of its strong capability for integrating multi-source heterogeneous data and its potential for intelligent reasoning, is widely applied in fields such as finance, e-commerce, medical health and government affairs, and has become a key support for the development of cyber-physical-social intelligent systems. Typical knowledge graphs such as the Google Knowledge Graph, Wikidata and Baidu Baike organize massive knowledge in the form of entity-relation triples and provide a semantic basis for intelligent decision-making in complex scenarios. However, in actual construction and application, knowledge graphs suffer from data sparsity and missing information, which severely restrict the completeness and reliability of reasoning. To address this challenge, three types of knowledge reasoning methods have been developed: reasoning based on logic rules, embedding-based reasoning built on distributed representations, and deep reasoning models fusing neural networks.
Logic rule methods achieve interpretable multi-hop reasoning by inducing first-order predicate logic rules (such as path rules), but their generalization is limited by rule coverage. Embedding methods (such as TransE, RotatE and TuckER) can effectively model the semantic space of entities and relations, but their black-box vector operations leave the reasoning process opaque. Recently proposed neuro-symbolic hybrid methods (such as RNNLogic and RulE) attempt to combine the interpretability of rules with the expressive power of embeddings, but fall short in the deep fusion of rules and triples, the interactive modeling among rules, and the construction of global interpretation factors. Especially in multi-hop reasoning scenarios, existing methods struggle to describe the structural dependence inside a rule and the semantic associations between rules, so reasoning results are insufficiently accurate and lack a clear causal interpretation. In addition, most models are highly sensitive to the number of rules: performance drops markedly when rules are sparse, and robustness is insufficient. In view of this, the present application has been made.

Summary of the Invention

The invention aims to provide an interpretable knowledge graph reasoning method and device based on tensor-guided rule learning, to solve problems in existing knowledge graph reasoning technology such as poor interpretability, strong dependence on logic rules and an opaque reasoning process.
To solve the above technical problems, the invention is realized by the following technical scheme: an interpretable knowledge graph reasoning method based on tensor-guided rule learning, comprising: S1, acquiring an input knowledge graph and representing it with triples comprising entities and relations; S2, extracting chain rules from the knowledge graph to generate a rule chain set, and screening out activation rules with non-zero support through instance verification to obtain an activation rule subset; S3, introducing a tensor Tucker operator, jointly modeling the triples and the rule chains in the activation rule subset, and calculating a knowledge graph embedding score by fusing the embedding representations of entities, relations and rule chains; S4, training with a staged joint training strategy based on the activation rule subset, namely first mining the multi-hop semantic dependencies inside a rule chain with the tensor Tucker operator to generate a chain-level interpretation factor, and then, based on the chain-level interpretation factor, capturing the interactions among different rules with a deep neural network through inter-chain rule information aggregation to output a rule score; and S5, weighting and fusing the knowledge graph embedding score and the rule score to obtain a comprehensive score, and judging the result of knowledge graph reasoning. Preferably, S2 is specifically: when extracting chain rules from the knowledge graph, all candidate rule chains conforming to logic rules are induced from the triples through an expectation-maximization algorithm to form a rule chain set; the logic rules adopt a first-order logic representation and conform in form to a rule-body/rule-head structure; the expectation-maximization algorithm screens out rule