Open Rule Induction
- URL: http://arxiv.org/abs/2110.13577v1
- Date: Tue, 26 Oct 2021 11:20:24 GMT
- Title: Open Rule Induction
- Authors: Wanyun Cui, Xingran Chen
- Abstract summary: Language model (LM)-based rule generation has been proposed to enhance the expressive power of rules.
We argue that, while KB-based methods induce rules by discovering data commonalities, the current LM-based methods are "learning rules from rules".
In this paper, we propose the open rule induction problem, which aims to induce open rules utilizing the knowledge in LMs.
- Score: 2.1248439796866228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rules have a number of desirable properties: they are easy to understand,
can be used to infer new knowledge, and can be communicated to other inference
systems. One weakness of
the previous rule induction systems is that they only find rules within a
knowledge base (KB) and therefore cannot generalize to more open and complex
real-world rules. Recently, language model (LM)-based rule generation has been
proposed to enhance the expressive power of rules. In this paper, we
revisit the differences between KB-based rule induction and LM-based rule
generation. We argue that, while KB-based methods induce rules by discovering
data commonalities, the current LM-based methods are "learning rules from
rules". This limits them to producing only "canned" rules whose patterns
are constrained by the annotated rules, while discarding the rich expressive
power of LMs for free text.
Therefore, in this paper, we propose the open rule induction problem, which
aims to induce open rules using the knowledge in LMs. In addition, we propose
the Orion (open rule induction)
system to automatically mine open rules from LMs without supervision of
annotated rules. We conducted extensive experiments to verify the quality and
quantity of the induced open rules. Surprisingly, when applying the open rules
in downstream tasks (i.e., relation extraction), these automatically induced
rules even outperformed the manually annotated rules.
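To make the idea concrete, here is a minimal sketch of inducing open rules from an LM, assuming a HuggingFace masked LM. The premise template, the ", so" prompt, the single-token hypothesis slot, and the frequency-based aggregation are illustrative simplifications invented here, not Orion's actual procedure.

```python
from collections import Counter
from transformers import pipeline

# Masked LM used as the source of open-rule knowledge (illustrative choice).
fill = pipeline("fill-mask", model="bert-base-uncased")

def induce_hypotheses(premise, instances, top_k=5):
    """Instantiate an open premise such as '[X] was born in [Y]' with sample
    entity pairs, ask the LM to complete a one-token hypothesis relation,
    and keep completions that recur across instances."""
    counts = Counter()
    for x, y in instances:
        prompt = premise.replace("[X]", x).replace("[Y]", y) + f", so {x} [MASK] {y}."
        for cand in fill(prompt, top_k=top_k):
            counts[cand["token_str"]] += 1
    return counts.most_common()

pairs = [("Obama", "Hawaii"), ("Einstein", "Germany"), ("Curie", "Poland")]
print(induce_hypotheses("[X] was born in [Y]", pairs))
```

A real system such as Orion scores candidate hypotheses against the premise rather than counting surface completions; the sketch only shows the prompt-and-aggregate shape of the problem.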
Related papers
- Learning Rules from KGs Guided by Language Models [48.858741745144044]
Rule learning methods can be applied to predict potentially missing facts.
Ranking of rules is especially challenging over highly incomplete or biased KGs.
With the recent rise of Language Models (LMs), several works have claimed that LMs can be used as an alternative means for KG completion.
arXiv Detail & Related papers (2024-09-12T09:27:36Z)
- Rule Extrapolation in Language Models: A Study of Compositional Generalization on OOD Prompts [14.76420070558434]
Rule extrapolation describes OOD scenarios, where the prompt violates at least one rule.
We focus on formal languages, which are defined by the intersection of rules.
We lay the first stones of a normative theory of rule extrapolation, inspired by the Solomonoff prior in algorithmic information theory.
arXiv Detail & Related papers (2024-09-09T22:36:35Z)
- Symbolic Working Memory Enhances Language Models for Complex Rule Application [87.34281749422756]
Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning.
We propose augmenting LLMs with external working memory and introduce a neurosymbolic framework for rule application.
Our framework iteratively performs symbolic rule grounding and LLM-based rule implementation (a toy version of this loop is sketched below).
arXiv Detail & Related papers (2024-08-24T19:11:54Z)
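A minimal, self-contained sketch of the working-memory idea under stated assumptions: facts live in a symbolic store, a grounding step joins rule atoms against it, and a plain Python function stands in for the LLM-based rule implementation. The facts and the grandparent rule are invented for illustration.

```python
from itertools import product

facts = {("parent", "ann", "bob"), ("parent", "bob", "cat")}

# rule: parent(x, y) and parent(y, z) -> grandparent(x, z)
def ground_and_apply(facts):
    new = set()
    parents = [f for f in facts if f[0] == "parent"]
    for (_, x, y1), (_, y2, z) in product(parents, parents):
        if y1 == y2:  # symbolic grounding: join on the shared variable
            new.add(("grandparent", x, z))
    return new - facts

while True:  # iterate until the working memory reaches a fixed point
    derived = ground_and_apply(facts)
    if not derived:
        break
    facts |= derived
print(sorted(facts))
```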
- Enabling Large Language Models to Learn from Rules [99.16680531261987]
We are inspired by the fact that humans can learn new tasks or knowledge in another way: by learning from rules.
We propose rule distillation, which first uses the strong in-context abilities of LLMs to extract knowledge from textual rules.
Our experiments show that making LLMs learn from rules with our method is much more efficient than example-based learning in terms of both sample size and generalization ability.
arXiv Detail & Related papers (2023-11-15T11:42:41Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs (a toy support/confidence computation is sketched below).
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
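A hedged sketch of rule ranking with KG facts: a candidate rule body(x, y) -> head(x, y) is scored by standard support and confidence over the triples. The toy KG and the single-atom rule shape are assumptions; ChatRule's actual measures may differ.

```python
kg = {
    ("capital_of", "paris", "france"), ("located_in", "paris", "france"),
    ("capital_of", "rome", "italy"),  ("located_in", "rome", "italy"),
    ("located_in", "lyon", "france"),
}

def support_confidence(body, head, kg):
    """Support: groundings of the body that also satisfy the head.
    Confidence: that count divided by the number of body groundings."""
    groundings = [(x, y) for (r, x, y) in kg if r == body]
    support = sum((head, x, y) in kg for x, y in groundings)
    return support, support / len(groundings) if groundings else 0.0

# capital_of(x, y) -> located_in(x, y): support 2, confidence 1.0
print(support_confidence("capital_of", "located_in", kg))
```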
- RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called RulE (short for Rule Embedding) to leverage logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations, and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
arXiv Detail & Related papers (2022-10-24T06:47:13Z)
- Automating Defeasible Reasoning in Law [0.0]
We study defeasible reasoning in rule-based systems, in particular about legal norms and contracts.
We identify rule modifiers that specify how rules interact and how they can be overridden.
We then define rule transformations that eliminate these modifiers, leading to a translation of rules to formulas (a toy example is sketched below).
arXiv Detail & Related papers (2022-05-15T17:14:15Z)
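A toy illustration of eliminating a modifier by compiling it into an explicit condition. The "subject to" reading and the fact representation are invented for this example, not the paper's formalism.

```python
# r1: employed(x) -> may_work(x), subject to r2
# r2: on_leave(x) -> not may_work(x)
# After the transformation, r2 becomes a negative condition inside r1:
# employed(x) and not on_leave(x) -> may_work(x)
def may_work(person, facts):
    return ("employed", person) in facts and ("on_leave", person) not in facts

facts = {("employed", "ann"), ("employed", "bob"), ("on_leave", "bob")}
print(may_work("ann", facts), may_work("bob", facts))  # True False
```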
- Differentiable Rule Induction with Learned Relational Features [9.193818627108572]
The Relational Rule Network (RRN) is a neural architecture that learns predicates that represent a linear relationship among attributes, along with the rules that use them.
On benchmark tasks we show that these predicates are simple enough to retain interpretability, yet improve prediction accuracy and provide sets of rules that are more concise compared to state-of-the-art rule induction algorithms (a toy differentiable rule is sketched below).
arXiv Detail & Related papers (2022-01-17T16:46:50Z)
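A minimal numpy sketch of the underlying idea: predicates are learned linear functions of the attributes squashed to truth degrees, and a rule is their soft conjunction. The weights and the product t-norm are illustrative choices, not the paper's exact architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two learned predicates over 3 numeric attributes: p_i(x) = sigmoid(w_i . x + b_i)
W = np.array([[1.0, -0.5, 0.0],
              [0.0, 1.0, -1.0]])
b = np.array([-0.2, 0.1])

def rule(x):
    preds = sigmoid(W @ x + b)  # truth degrees of each predicate
    return preds.prod()         # soft conjunction (product t-norm)

print(rule(np.array([1.0, 0.5, 0.2])))
```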
We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
arXiv Detail & Related papers (2021-11-23T17:49:00Z)
- Building Rule Hierarchies for Efficient Logical Rule Learning from Knowledge Graphs [20.251630903853016]
We propose new methods for pruning unpromising rules using rule hierarchies.
We show that the application of hierarchical pruning methods (HPMs) is effective in removing unpromising rules (a toy pruning pass is sketched below).
arXiv Detail & Related papers (2020-06-29T16:33:30Z)
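A toy sketch of the pruning idea under the assumption that rules are organized from general to specific: when a rule already scores below the quality threshold, its specializations are skipped without ever being evaluated. The hierarchy, scores, and threshold are invented; the paper's HPMs are more refined.

```python
def mine(rule, children, quality, threshold, keep):
    q = quality[rule]
    if q < threshold:
        return  # prune: never visit this rule's specializations
    keep.append((rule, q))
    for child in children.get(rule, []):
        mine(child, children, quality, threshold, keep)

children = {"r": ["r1", "r2"], "r1": ["r11", "r12"]}
quality = {"r": 0.9, "r1": 0.2, "r2": 0.7, "r11": 0.8, "r12": 0.6}
keep = []
mine("r", children, quality, 0.5, keep)
print(keep)  # [('r', 0.9), ('r2', 0.7)] -- r1's subtree was never evaluated
```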
- Towards Learning Instantiated Logical Rules from Knowledge Graphs [20.251630903853016]
We present GPFL, a probabilistic rule learner optimized to mine instantiated first-order logic rules from knowledge graphs.
GPFL utilizes a novel two-stage rule generation mechanism that first generalizes extracted paths into templates that are acyclic abstract rules (the generalization stage is sketched below).
We reveal the presence of overfitting rules, their impact on the predictive performance, and the effectiveness of a simple validation method filtering out overfitting rules.
arXiv Detail & Related papers (2020-03-13T00:32:46Z)
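A sketch of the first, generalization stage under simplified assumptions: a concrete path sampled from the KG is lifted into an abstract template by replacing constants with variables. The path representation is invented for illustration.

```python
def generalize(path):
    """path: list of (head_entity, relation, tail_entity) triples forming a chain."""
    vars_, template = {}, []
    def var(entity):
        # Reuse the same variable whenever the same constant reappears.
        return vars_.setdefault(entity, f"X{len(vars_)}")
    for h, r, t in path:
        template.append((var(h), r, var(t)))
    return template

path = [("ann", "born_in", "london"), ("london", "located_in", "uk")]
print(generalize(path))
# [('X0', 'born_in', 'X1'), ('X1', 'located_in', 'X2')]
```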
This list is automatically generated from the titles and abstracts of the papers on this site.