On the Aggregation of Rules for Knowledge Graph Completion
- URL: http://arxiv.org/abs/2309.00306v1
- Date: Fri, 1 Sep 2023 07:32:11 GMT
- Title: On the Aggregation of Rules for Knowledge Graph Completion
- Authors: Patrick Betz, Stefan Lüdtke, Christian Meilicke, Heiner Stuckenschmidt
- Abstract summary: Rule learning approaches for knowledge graph completion are efficient, interpretable, and competitive with purely neural models.
We show that existing aggregation approaches can be expressed as marginal inference operations over the predicting rules.
We propose an efficient and overlooked baseline which combines the previous strategies and is competitive with computationally more expensive approaches.
- Score: 9.628032156001069
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Rule learning approaches for knowledge graph completion are efficient,
interpretable, and competitive with purely neural models. The rule aggregation
problem is concerned with finding one plausibility score for a candidate fact
which was simultaneously predicted by multiple rules. Although the problem is
ubiquitous, as data-driven rule learning can result in noisy and large
rulesets, it is underrepresented in the literature and its theoretical
foundations have not been studied before in this context. In this work, we
demonstrate that existing aggregation approaches can be expressed as marginal
inference operations over the predicting rules. In particular, we show that the
common Max-aggregation strategy, which scores candidates based on the rule with
the highest confidence, has a probabilistic interpretation. Finally, we propose
an efficient and overlooked baseline which combines the previous strategies and
is competitive with computationally more expensive approaches.
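To make the aggregation problem concrete, here is a minimal sketch of two common strategies. Max-aggregation follows the description in the abstract; the noisy-OR variant is an illustrative alternative from the probabilistic-rule literature, not the paper's proposed baseline, and the function names are hypothetical:

```python
def max_aggregation(confidences):
    """Score a candidate fact by its single most confident predicting rule.

    `confidences` is the list of confidence values of all rules that
    predict the candidate; the result ignores every rule but the best.
    """
    return max(confidences)


def noisy_or_aggregation(confidences):
    """Illustrative noisy-OR alternative (not the paper's baseline).

    Treats each predicting rule as an independent noisy source of
    evidence: the candidate is scored as true unless every rule
    'fails' independently.
    """
    fail_prob = 1.0
    for c in confidences:
        fail_prob *= (1.0 - c)
    return 1.0 - fail_prob
```

With two rules of confidence 0.9 and 0.5 predicting the same fact, Max-aggregation yields 0.9, while noisy-OR yields 1 - 0.1 * 0.5 = 0.95, rewarding the candidate for being predicted by several rules at once.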
Related papers
- Bounds on the Generalization Error in Active Learning [0.0]
We establish empirical risk principles for active learning by deriving a family of upper bounds on the generalization error.
We systematically link diverse active learning scenarios, characterized by their loss functions and hypothesis classes to their corresponding upper bounds.
Our results show that regularization techniques used to constrain the complexity of various hypothesis classes are sufficient conditions to ensure the validity of the bounds.
arXiv Detail & Related papers (2024-09-10T08:08:09Z)
- Probabilistic Truly Unordered Rule Sets [4.169915659794567]
We propose TURS, for Truly Unordered Rule Sets.
We exploit the probabilistic properties of our rule sets, with the intuition of only allowing rules to overlap if they have similar probabilistic outputs.
We benchmark against a wide range of rule-based methods and demonstrate that our method learns rule sets that have lower model complexity and highly competitive predictive performance.
arXiv Detail & Related papers (2024-01-18T12:03:19Z)
- A Voting Approach for Explainable Classification with Rule Learning [0.0]
We introduce a voting approach combining both worlds, aiming to achieve results comparable to (unexplainable) state-of-the-art methods.
We prove that our approach not only clearly outperforms ordinary rule learning methods, but also yields results on a par with state-of-the-art outcomes.
arXiv Detail & Related papers (2023-11-13T13:22:21Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
arXiv Detail & Related papers (2023-07-06T04:13:57Z)
- Machine Learning with Probabilistic Law Discovery: A Concise Introduction [77.34726150561087]
Probabilistic Law Discovery (PLD) is a logic based Machine Learning method, which implements a variant of probabilistic rule learning.
PLD is close to Decision Tree/Random Forest methods, but it differs significantly in how relevant rules are defined.
This paper outlines the main principles of PLD, highlights its benefits and limitations, and provides some application guidelines.
arXiv Detail & Related papers (2022-12-22T17:40:13Z)
- Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach to learn disentangled representations based on a sparsity-promoting bi-level optimization problem.
arXiv Detail & Related papers (2022-11-26T21:02:09Z)
- RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called RulE (short for Rule Embedding) to leverage logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations, and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
arXiv Detail & Related papers (2022-10-24T06:47:13Z)
- Efficient Learning of Interpretable Classification Rules [34.27987659227838]
This paper contributes IMLI, an interpretable learning framework based on maximum satisfiability (MaxSAT) for classification rules expressible in propositional logic.
In our experiments, IMLI achieves the best balance among prediction accuracy, interpretability, and scalability.
arXiv Detail & Related papers (2022-05-14T00:36:38Z)
- Theoretical Rule-based Knowledge Graph Reasoning by Connectivity Dependency Discovery [2.945948598480997]
We present a theory for rule-based knowledge graph reasoning in which the connectivity dependencies in the graph are captured via multiple rule types.
Results show that our RuleDict model not only provides precise rules to interpret new triples, but also achieves state-of-the-art performances on one benchmark knowledge graph completion task.
arXiv Detail & Related papers (2020-11-12T03:00:20Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction as well as being able to generalize to other tasks.
Existing methods either suffer from the problem of searching in a large search space or ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.