Learning Symbolic Rules for Reasoning in Quasi-Natural Language
- URL: http://arxiv.org/abs/2111.12038v1
- Date: Tue, 23 Nov 2021 17:49:00 GMT
- Title: Learning Symbolic Rules for Reasoning in Quasi-Natural Language
- Authors: Kaiyu Yang and Jia Deng
- Abstract summary: We build a rule-based system that can reason with natural language input but without the manual construction of rules.
We propose MetaQNL, a "Quasi-Natural" language that can express both formal logic and natural language sentences.
Our approach achieves state-of-the-art accuracy on multiple reasoning benchmarks.
- Score: 74.96601852906328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Symbolic reasoning, rule-based symbol manipulation, is a hallmark of human
intelligence. However, rule-based systems have had limited success competing
with learning-based systems outside formalized domains such as automated
theorem proving. We hypothesize that this is due to the manual construction of
rules in past attempts. In this work, we ask how we can build a rule-based
system that can reason with natural language input but without the manual
construction of rules. We propose MetaQNL, a "Quasi-Natural" language that can
express both formal logic and natural language sentences, and MetaInduce, a
learning algorithm that induces MetaQNL rules from training data consisting of
questions and answers, with or without intermediate reasoning steps. Our
approach achieves state-of-the-art accuracy on multiple reasoning benchmarks;
it learns compact models with much less data and produces not only answers but
also checkable proofs. Further, experiments on a real-world morphological
analysis benchmark show that our method can handle noise and
ambiguity. Code will be released at https://github.com/princeton-vl/MetaQNL.
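The abstract describes rules over quasi-natural sentences: patterns with variables that unify with natural language strings. As a rough illustration only (a hypothetical sketch, not the authors' MetaQNL/MetaInduce implementation), a single-premise rule can be applied by matching its premise pattern against known sentences and substituting the bindings into its conclusion:

```python
import re

# Hypothetical sketch of applying a quasi-natural rule (NOT the authors'
# implementation): a rule maps a premise sentence pattern containing
# variables like [X] to a conclusion pattern.

def match(pattern, sentence):
    """Match a pattern such as '[X] is a bird' against a sentence,
    returning a dict of variable bindings, or None on failure."""
    regex = re.escape(pattern)
    # Turn each escaped [X] placeholder into a named capture group.
    regex = re.sub(r'\\\[([A-Z])\\\]', r'(?P<\1>.+)', regex)
    m = re.fullmatch(regex, sentence)
    return m.groupdict() if m else None

def substitute(pattern, bindings):
    """Fill variables in a conclusion pattern with bound phrases."""
    for var, phrase in bindings.items():
        pattern = pattern.replace(f'[{var}]', phrase)
    return pattern

def apply_rule(premise, conclusion, facts):
    """One forward-chaining step for a single-premise rule."""
    derived = set()
    for fact in facts:
        bindings = match(premise, fact)
        if bindings is not None:
            derived.add(substitute(conclusion, bindings))
    return derived

facts = {'a sparrow is a bird', 'a whale is a mammal'}
new_facts = apply_rule('[X] is a bird', '[X] can fly', facts)
print(new_facts)  # {'a sparrow can fly'}
```

In the paper's setting, rules like this are not hand-written but induced by MetaInduce from question-answer training data; the sketch only shows why a learned rule yields a checkable proof step.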
Related papers
- Rule Learning as Machine Translation using the Atomic Knowledge Bank [8.9969167872226]
We explore the capability of transformers to translate sentences expressing rules in natural language into logical rules.
We perform experiments using the DKET dataset from the literature and create a dataset for language to logic translation based on the Atomic knowledge bank.
arXiv Detail & Related papers (2023-11-05T20:48:54Z)
- Learning Reliable Logical Rules with SATNet [7.951021955925275]
We build on SATNet, a differentiable MaxSAT solver that learns the underlying rules from input-output examples.
We introduce several effective verification techniques to validate the learned rules against the ground-truth rules.
Experiments on stream transformations and Sudoku problems show that our decoded rules are highly reliable.
arXiv Detail & Related papers (2023-10-03T15:14:28Z)
- ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
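The ranking step described above scores candidate rules by how well existing KG facts support them. As a minimal sketch under assumed names (not the ChatRule implementation), a one-hop rule `head(X, Y) <- body(X, Y)` can be scored by its confidence over the triples:

```python
# Hypothetical sketch (NOT the ChatRule implementation): estimate the
# quality of a candidate rule head(X, Y) <- body(X, Y) as its confidence
# on a knowledge graph, i.e. the fraction of entity pairs satisfying the
# body that also satisfy the head.

def rule_confidence(body_rel, head_rel, triples):
    """Confidence = |pairs with body and head| / |pairs with body|."""
    body_pairs = {(h, t) for h, r, t in triples if r == body_rel}
    head_pairs = {(h, t) for h, r, t in triples if r == head_rel}
    if not body_pairs:
        return 0.0
    return len(body_pairs & head_pairs) / len(body_pairs)

triples = [
    ('alice', 'mother_of', 'bob'),
    ('alice', 'parent_of', 'bob'),
    ('carol', 'mother_of', 'dan'),
]
# Candidate rule: parent_of(X, Y) <- mother_of(X, Y)
print(rule_confidence('mother_of', 'parent_of', triples))  # 0.5
```

A ranking module of this kind would compute such scores for every LLM-generated candidate and keep only the high-confidence rules.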
arXiv Detail & Related papers (2023-09-04T11:38:02Z)
- Meta-Reasoning: Semantics-Symbol Deconstruction for Large Language Models [34.22393697176282]
We propose Meta-Reasoning to broaden symbolic methods' applicability and adaptability in the real world.
This method empowers LLMs to deconstruct reasoning-independent semantic information into generic symbolic representations.
We conduct extensive experiments on more than ten datasets, covering conventional reasoning tasks (arithmetic, symbolic, and logical reasoning) as well as more complex interactive reasoning tasks such as theory-of-mind reasoning.
arXiv Detail & Related papers (2023-06-30T17:38:10Z)
- Language Models as Inductive Reasoners [125.99461874008703]
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
arXiv Detail & Related papers (2022-12-21T11:12:14Z)
- Provable Limitations of Acquiring Meaning from Ungrounded Form: What will Future Language Models Understand? [87.20342701232869]
We investigate the abilities of ungrounded systems to acquire meaning.
We study whether assertions enable a system to emulate representations preserving semantic relations like equivalence.
We find that assertions enable semantic emulation if all expressions in the language are referentially transparent.
However, if the language uses non-transparent patterns like variable binding, we show that emulation can become an uncomputable problem.
arXiv Detail & Related papers (2021-04-22T01:00:17Z)
- RNNLogic: Learning Logic Rules for Reasoning on Knowledge Graphs [91.71504177786792]
This paper studies learning logic rules for reasoning on knowledge graphs.
Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks.
Existing methods either suffer from searching a large search space or from ineffective optimization due to sparse rewards.
arXiv Detail & Related papers (2020-10-08T14:47:02Z)
- Learning Compositional Rules via Neural Program Synthesis [67.62112086708859]
We present a neuro-symbolic model which learns entire rule systems from a small set of examples.
Instead of directly predicting outputs from inputs, we train our model to induce the explicit system of rules governing a set of previously seen examples.
arXiv Detail & Related papers (2020-03-12T01:06:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.