Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic
- URL: http://arxiv.org/abs/2308.07336v3
- Date: Tue, 14 Nov 2023 03:14:49 GMT
- Title: Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic
- Authors: Terufumi Morishita, Gaku Morio, Atsuki Yamaguchi, Yasuhiro Sogawa
- Abstract summary: We study a synthetic-corpus-based approach for language models (LMs) to acquire logical deductive reasoning ability.
We adopt a well-grounded set of deduction rules based on formal logic theory, which can derive any other deduction rule when combined in a multistep way.
We empirically verify that LMs trained on FLD corpora acquire more generalizable reasoning ability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study a synthetic-corpus-based approach for language models (LMs) to acquire logical deductive reasoning ability. Previous studies generated deduction examples using specific sets of deduction rules, but these rules were limited or otherwise arbitrary, restricting the generalizability of the acquired reasoning ability. We rethink this and adopt a well-grounded set of deduction rules based on formal logic theory, which can derive any other deduction rule when combined in a multistep way. Then, using the proposed corpora, which we name FLD (Formal Logic Deduction), we first evaluate and analyze the logical reasoning ability of the latest LLMs. Even GPT-4 can solve only half of the problems, suggesting that pure logical reasoning isolated from knowledge is still challenging for LLMs and that additional training specialized in logical reasoning is indeed essential. We next empirically verify that LMs trained on FLD corpora acquire more generalizable reasoning ability. Furthermore, we identify the aspects of reasoning ability that deduction corpora can enhance in LMs and those they cannot, and discuss future directions for each aspect. The released corpora serve both as learning resources and as challenging benchmarks.
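To make the multistep-derivation claim concrete, here is a small illustration (not taken from the paper): in a natural-deduction-style system whose primitives include implication introduction and elimination (modus ponens), a compound rule such as hypothetical syllogism need not be primitive, since it falls out of composing the primitives. A minimal sketch in Lean 4:

```lean
-- Hypothetical syllogism (P → Q, Q → R ⊢ P → R) derived from
-- two primitive rules only: implication introduction (`intro`)
-- and implication elimination / modus ponens (function application).
theorem hypothetical_syllogism (P Q R : Prop)
    (h₁ : P → Q) (h₂ : Q → R) : P → R := by
  intro hp          -- →-introduction: assume P
  exact h₂ (h₁ hp)  -- two →-eliminations (modus ponens)
```

This is the sense in which a small, well-grounded rule set is complete: multistep compositions of the primitives recover every derivable rule, so a corpus generated from the primitives can exercise the full space of deductions rather than an arbitrary subset.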
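For readers who want to inspect the released corpora, the following is a hedged loading sketch, not the authors' documented usage. It assumes the corpora are published on the Hugging Face Hub; the identifier "hitachi-nlp/FLD.v2" and the field names are assumptions and may differ from the actual release.

```python
# Minimal sketch for inspecting an FLD deduction example.
# ASSUMPTIONS: the `datasets` library is installed, and the corpus is
# published on the Hugging Face Hub under the identifier
# "hitachi-nlp/FLD.v2"; the identifier and field names may differ.
from datasets import load_dataset

dataset = load_dataset("hitachi-nlp/FLD.v2")  # assumed Hub identifier

example = dataset["train"][0]
print(sorted(example))            # list the actual field names first
print(example.get("hypothesis"))  # statement to prove (assumed field name)
print(example.get("context"))     # facts usable in the proof (assumed field name)
```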
Related papers
- Inductive or Deductive? Rethinking the Fundamental Reasoning Abilities of LLMs (arXiv, 2024-07-31)
  Reasoning encompasses two typical types: deductive and inductive. Despite extensive research into the reasoning capabilities of large language models (LLMs), most studies have failed to rigorously differentiate between the two. This raises an essential question: in LLM reasoning, which poses the greater challenge, deductive or inductive reasoning?
- Disentangling Logic: The Role of Context in Large Language Model Reasoning Capabilities (arXiv, 2024-06-04)
  We investigate the contrast between abstract and contextualized logical problems drawn from a comprehensive set of domains, focusing on standard propositional logic, specifically propositional deductive and abductive reasoning. Our experiments aim to disentangle the role of context in logical reasoning from the true reasoning capabilities of LLMs.
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models (arXiv, 2024-04-23)
  Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks. But can they really "reason" over natural language? This question has received significant research attention, and many reasoning skills, such as commonsense, numerical, and qualitative reasoning, have been studied.
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs (arXiv, 2024-02-18)
  Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks, yet their mastery of the underlying inferential rules still falls short of human capabilities. We propose a logic-scaffolding inferential-rule-generation framework to construct an inferential rule base, ULogic.
- Phenomenal Yet Puzzling: Testing Inductive Reasoning Capabilities of Language Models with Hypothesis Refinement (arXiv, 2023-10-12)
  Language models (LMs) often fall short on inductive reasoning despite achieving impressive success on research benchmarks. We conduct a systematic study of the inductive reasoning capabilities of LMs through iterative hypothesis refinement, revealing several discrepancies between the inductive reasoning processes of LMs and humans and shedding light on both the potential and the limitations of using LMs for inductive reasoning tasks.
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models (arXiv, 2023-10-02)
  Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal knowledge representation (KR) systems. LLMs excel most at abductive reasoning, followed by deductive reasoning, and are least effective at inductive reasoning. We study single-task training, multi-task training, and "chain-of-thought" knowledge-distillation fine-tuning to assess model performance.
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks (arXiv, 2021-12-06)
  We propose learning rules with the recently proposed logical neural networks (LNNs). Compared to other approaches, LNNs offer a strong connection to classical Boolean logic. Our experiments on standard benchmark tasks confirm that LNN rules are highly interpretable.