Improved Logical Reasoning of Language Models via Differentiable
Symbolic Programming
- URL: http://arxiv.org/abs/2305.03742v1
- Date: Fri, 5 May 2023 07:24:46 GMT
- Title: Improved Logical Reasoning of Language Models via Differentiable
Symbolic Programming
- Authors: Hanlin Zhang, Jiani Huang, Ziyang Li, Mayur Naik, Eric Xing
- Abstract summary: Pre-trained large language models (LMs) struggle to perform logical reasoning reliably despite advances in scale and compositionality.
We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module performs deductive reasoning.
- Score: 12.984852480664378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained large language models (LMs) struggle to perform logical reasoning
reliably despite advances in scale and compositionality. In this work, we
tackle this challenge through the lens of symbolic programming. We propose
DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs
govern the perception of factual knowledge, and a symbolic module performs
deductive reasoning. In contrast to works that rely on hand-crafted logic
rules, our differentiable symbolic reasoning framework efficiently learns
weighted rules and applies a semantic loss to further improve LMs. DSR-LM is
scalable, interpretable, and allows easy integration of prior knowledge,
thereby supporting extensive symbolic programming to robustly derive a logical
conclusion. The results of our experiments suggest that DSR-LM improves the
logical reasoning abilities of pre-trained language models, resulting in a
significant increase in accuracy of over 20% on deductive reasoning benchmarks.
Furthermore, DSR-LM outperforms a variety of competitive baselines when faced
with systematic changes in sequence length.
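To make the division of labor concrete, below is a minimal sketch of the pattern the abstract describes, not the authors' implementation: a tiny stand-in "perception" network plays the role of the pre-trained LM and emits fact probabilities, a single weighted rule is applied by soft forward chaining, and supervision on the derived conclusion backpropagates through the rule into both the rule weight and the perception parameters. The kinship task, entity set, and rule are illustrative assumptions.

```python
# Minimal sketch of differentiable symbolic reasoning in the spirit of
# DSR-LM (illustrative; not the paper's implementation). A toy network
# stands in for the pre-trained LM, and soft logic stands in for the
# symbolic module. Entities, the relation, and the rule are assumptions.
import torch
import torch.nn as nn

ENTITIES = ["anna", "bob", "carol"]  # hypothetical kinship domain
N = len(ENTITIES)

class Perception(nn.Module):
    """Stand-in for the LM: produces P(parent(x, y)) for all entity pairs."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.randn(N, N))

    def forward(self):
        return torch.sigmoid(self.logits)

# One learnable weighted rule:
#   parent(x, y) AND parent(y, z) => grandparent(x, z)
rule_weight = nn.Parameter(torch.tensor(0.0))

def deduce(parent):
    """Soft forward chaining: noisy-OR aggregation over the middle entity y."""
    w = torch.sigmoid(rule_weight)
    body = parent.unsqueeze(2) * parent.unsqueeze(0)    # body[x, y, z]
    return 1.0 - torch.prod(1.0 - w * body, dim=1)      # P(grandparent(x, z))

perception = Perception()
opt = torch.optim.Adam(list(perception.parameters()) + [rule_weight], lr=0.1)

# Supervise only the *conclusion*: anna is carol's grandparent.
target = torch.zeros(N, N)
target[0, 2] = 1.0

for _ in range(200):
    loss = nn.functional.binary_cross_entropy(deduce(perception()), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Gradients flowed through the rule back into the "LM", analogous to how
# a semantic loss shapes the underlying fact predictions in DSR-LM.
print(deduce(perception())[0, 2].item())  # should approach 1.0
```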
Related papers
- Thought-Like-Pro: Enhancing Reasoning of Large Language Models through Self-Driven Prolog-based Chain-of-Thought [31.964412924094656]
Large language models (LLMs) have shown exceptional performance as general-purpose assistants.
We introduce a novel learning framework, THOUGHT-LIKE-PRO, to facilitate learning and generalization across diverse reasoning tasks.
Our empirical findings indicate that our proposed approach substantially enhances the reasoning abilities of LLMs.
arXiv Detail & Related papers (2024-07-18T18:52:10Z)
- Large Language Models are Interpretable Learners [53.56735770834617]
In this paper, we show a combination of Large Language Models (LLMs) and symbolic programs can bridge the gap between expressiveness and interpretability.
The pretrained LLM with natural language prompts provides a massive set of interpretable modules that can transform raw input into natural language concepts.
As the knowledge learned by the resulting LLM-based Symbolic Program (LSP) is a combination of natural language descriptions and symbolic rules, it is easily transferable to humans (interpretable) and to other LLMs.
arXiv Detail & Related papers (2024-06-25T02:18:15Z)
- Logic Agent: Enhancing Validity with Logic Rule Invocation [24.815341366820753]
Chain-of-Thought prompting has emerged as a pivotal technique for augmenting the inferential capabilities of language models during reasoning tasks.
This paper introduces the Logic Agent (LA), an agent-based framework aimed at enhancing the validity of reasoning processes in Large Language Models.
arXiv Detail & Related papers (2024-04-28T10:02:28Z)
- Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z)
- LLMs for Relational Reasoning: How Far are We? [8.840750655261251]
Large language models (LLMs) have revolutionized many areas by achieving state-of-the-art performance on downstream tasks.
Recent efforts have demonstrated that LLMs are poor at solving sequential decision-making problems.
arXiv Detail & Related papers (2024-01-17T08:22:52Z)
- Are LLMs Rigorous Logical Reasoner? Empowering Natural Language Proof Generation with Contrastive Stepwise Decoding [11.385103498440932]
We introduce contrastive decoding to stepwise proof generation, making use of negative reasoning paths to strengthen the model's capacity for logical deduction (a hedged sketch of this idea follows the list below).
Experiments on EntailmentBank underscore the success of our method in augmenting the proof planning abilities of language models.
arXiv Detail & Related papers (2023-11-12T05:12:49Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- Exploring Self-supervised Logic-enhanced Training for Large Language Models [59.227222647741094]
In this paper, we make the first attempt to investigate the feasibility of incorporating logical knowledge through self-supervised post-training.
We devise an auto-regressive objective variant of MERIt and integrate it with two LLM series, i.e., FLAN-T5 and LLaMA, with parameter sizes ranging from 3 billion to 13 billion.
The results on two challenging logical reasoning benchmarks demonstrate the effectiveness of LogicLLM.
arXiv Detail & Related papers (2023-05-23T06:13:10Z)
- ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT [72.83383437501577]
Large language models (LLMs) have recently demonstrated significant potential in mathematical abilities.
LLMs currently have difficulty in bridging perception, language understanding and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
arXiv Detail & Related papers (2023-04-21T16:23:47Z)
- LogiGAN: Learning Logical Reasoning via Adversarial Pre-training [58.11043285534766]
We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.
Inspired by the facilitation effect of reflective thinking in human learning, we simulate the learning-thinking process with an adversarial Generator-Verifier architecture.
Both base- and large-size language models pre-trained with LogiGAN demonstrate clear performance improvements on 12 datasets.
arXiv Detail & Related papers (2022-05-18T08:46:49Z)
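Following the "Are LLMs Rigorous Logical Reasoner?" entry above, here is a hedged sketch of contrastive stepwise decoding. The idea: rank each candidate proof step by how much more the valid premises support it than a corrupted (negative) reasoning path does. Everything below is an illustrative assumption, not the paper's implementation; a real system would score steps with LM log-probabilities over EntailmentBank-style proofs, and the word-overlap scorer in the demo is only a stand-in.

```python
# Hedged sketch of contrastive stepwise proof decoding (illustrative; not
# the paper's implementation). A step is preferred when the valid context
# supports it much more than a corrupted negative context does.
from typing import Callable, List

def contrastive_score(step: str, pos_ctx: str, neg_ctx: str,
                      log_prob: Callable[[str, str], float],
                      alpha: float = 1.0) -> float:
    """log p(step | valid context) - alpha * log p(step | negative context)."""
    return log_prob(step, pos_ctx) - alpha * log_prob(step, neg_ctx)

def decode_proof(premises: str, negative_premises: str,
                 candidates_fn: Callable[[str], List[str]],
                 log_prob: Callable[[str, str], float],
                 max_steps: int = 5) -> List[str]:
    """Greedy stepwise decoding with contrastive re-ranking of candidates."""
    proof: List[str] = []
    pos_ctx, neg_ctx = premises, negative_premises
    for _ in range(max_steps):
        candidates = candidates_fn(pos_ctx)
        if not candidates:
            break
        best = max(candidates,
                   key=lambda s: contrastive_score(s, pos_ctx, neg_ctx, log_prob))
        proof.append(best)
        pos_ctx += " " + best  # commit the step to the valid path
        neg_ctx += " " + best  # the negative path trails the same surface steps
    return proof

if __name__ == "__main__":
    # Toy demo: word overlap with the context stands in for an LM log-prob.
    overlap = lambda step, ctx: float(len(set(step.split()) & set(ctx.split())))
    steps = decode_proof(
        premises="all men are mortal socrates is a man",
        negative_premises="all men are immortal",
        candidates_fn=lambda ctx: ["therefore socrates is mortal",
                                   "socrates can fly"],
        log_prob=overlap,
        max_steps=1,
    )
    print(steps)  # ['therefore socrates is mortal']
```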