Logical Reasoning over Natural Language as Knowledge Representation: A
Survey
- URL: http://arxiv.org/abs/2303.12023v2
- Date: Fri, 16 Feb 2024 14:30:33 GMT
- Title: Logical Reasoning over Natural Language as Knowledge Representation: A
Survey
- Authors: Zonglin Yang, Xinya Du, Rui Mao, Jinjie Ni, Erik Cambria
- Abstract summary: This paper provides an overview of a new paradigm of logical reasoning, which uses natural language as knowledge representation and pretrained language models as reasoners.
This new paradigm is promising since it not only alleviates many challenges of formal representation but also has advantages over end-to-end neural methods.
- Score: 43.29703101875716
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Logical reasoning is central to human cognition and intelligence. It includes
deductive, inductive, and abductive reasoning. Past research on logical
reasoning within AI has used formal language as the knowledge representation,
together with symbolic reasoners. However, reasoning with formal language has
proved challenging (e.g., brittleness and the knowledge-acquisition bottleneck).
This paper provides a comprehensive overview of a new paradigm of logical reasoning,
which uses natural language as knowledge representation and pretrained language
models as reasoners, including philosophical definition and categorization of
logical reasoning, advantages of the new paradigm, benchmarks and methods,
challenges of the new paradigm, possible future directions, and relation to
related NLP fields. This new paradigm is promising since it not only alleviates
many challenges of formal representation but also has advantages over
end-to-end neural methods. This survey focuses on transformer-based LLMs that
explicitly perform deductive, inductive, and abductive reasoning over
English-language representations.
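
As a minimal illustration of the paradigm the survey describes, the sketch below represents knowledge as plain English facts and rules and uses an off-the-shelf pretrained language model as the deductive reasoner. The model choice ("gpt2"), prompt template, and decoding settings are assumptions made here for illustration, not the survey's prescribed setup; a small base model will not reason reliably, so this only shows the interface.

```python
# Hedged sketch: natural language as the knowledge representation,
# a pretrained language model as the (deductive) reasoner.
# Model, prompt format, and decoding settings are illustrative assumptions.
from transformers import pipeline

reasoner = pipeline("text-generation", model="gpt2")  # placeholder model

# Knowledge base written in plain English rather than formal logic.
knowledge = (
    "Fact: Erin is a cat.\n"
    "Rule: If something is a cat, then it is an animal.\n"
)
question = "Question: Is Erin an animal? Answer true or false.\nAnswer:"

prompt = knowledge + question
result = reasoner(prompt, max_new_tokens=5, do_sample=False)[0]["generated_text"]
print(result[len(prompt):].strip())  # the model's free-text verdict
```
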
Related papers
- Exploring Reasoning Biases in Large Language Models Through Syllogism: Insights from the NeuBAROCO Dataset [5.695579108997392]
This paper explores the question of how accurately current large language models can perform logical reasoning in natural language.
We present a syllogism dataset called NeuBAROCO, which consists of syllogistic reasoning problems in English and Japanese.
Our experiments with leading large language models indicate that these models exhibit reasoning biases similar to humans, along with other error tendencies.
arXiv Detail & Related papers (2024-08-08T12:10:50Z) - Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z) - Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge-distillation fine-tuning technique to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z) - Large Language Models are In-Context Semantic Reasoners rather than
Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z) - ChatABL: Abductive Learning via Natural Language Interaction with
ChatGPT [72.83383437501577]
Large language models (LLMs) have recently demonstrated significant potential in mathematical abilities.
LLMs currently have difficulty bridging perception, language understanding, and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
arXiv Detail & Related papers (2023-04-21T16:23:47Z) - Natural Language Reasoning, A Survey [16.80326702160048]
Conceptually, we provide a distinct definition for natural language reasoning in NLP.
We conduct a comprehensive literature review on natural language reasoning in NLP.
The paper also identifies and discusses backward reasoning, a powerful paradigm for multi-step reasoning.
arXiv Detail & Related papers (2023-03-26T13:44:18Z) - Language Models as Inductive Reasoners [125.99461874008703]
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first and comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
arXiv Detail & Related papers (2022-12-21T11:12:14Z)
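
For the inductive direction described in the DEER entry above, an analogous hedged sketch prompts the same kind of model to complete a natural-language rule from a handful of natural-language facts. The facts and prompt template below are invented for illustration and are not drawn from the DEER dataset.

```python
# Hedged sketch of DEER-style inductive reasoning: induce a natural-language
# rule from natural-language facts. Facts and template are illustrative only.
from transformers import pipeline

inducer = pipeline("text-generation", model="gpt2")  # placeholder model

facts = (
    "Fact 1: Robins have wings and can fly.\n"
    "Fact 2: Sparrows have wings and can fly.\n"
    "Fact 3: Crows have wings and can fly.\n"
)
prompt = facts + "Rule: If an animal has wings, then"

completion = inducer(prompt, max_new_tokens=12, do_sample=False)[0]["generated_text"]
print("Rule: If an animal has wings, then " + completion[len(prompt):].strip())
```
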