Natural Language Reasoning, A Survey
- URL: http://arxiv.org/abs/2303.14725v2
- Date: Sat, 13 May 2023 15:56:44 GMT
- Title: Natural Language Reasoning, A Survey
- Authors: Fei Yu, Hongbo Zhang, Prayag Tiwari, Benyou Wang
- Abstract summary: Conceptually, we provide a distinct definition for natural language reasoning in NLP.
We conduct a comprehensive literature review on natural language reasoning in NLP.
The paper also identifies and examines backward reasoning, a powerful paradigm for multi-step reasoning.
- Score: 16.80326702160048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This survey paper proposes a clearer view of natural language reasoning in
the field of Natural Language Processing (NLP), both conceptually and
practically. Conceptually, we provide a distinct definition for natural
language reasoning in NLP, based on both philosophy and NLP scenarios, discuss
what types of tasks require reasoning, and introduce a taxonomy of reasoning.
Practically, we conduct a comprehensive literature review on natural language
reasoning in NLP, mainly covering classical logical reasoning, natural language
inference, multi-hop question answering, and commonsense reasoning. The paper
also identifies and examines backward reasoning, a powerful paradigm for
multi-step reasoning, and introduces defeasible reasoning as one of the most
important future directions in natural language reasoning research. We focus on
single-modality unstructured natural language text, excluding neuro-symbolic
techniques and mathematical reasoning.
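The abstract singles out backward reasoning as a paradigm for multi-step reasoning. As a rough illustration of the idea, here is a minimal backward-chaining sketch in Python; the rule base, the exact-string matching of statements, and the `prove` helper are hypothetical simplifications, not the survey's formalism.

```python
from typing import Dict, List

# Hypothetical rule base: conclusion -> premises that jointly support it.
RULES: Dict[str, List[str]] = {
    "the ground is wet": ["it rained last night"],
    "the match will be cancelled": ["the ground is wet", "the pitch is uncovered"],
}

# Hypothetical facts taken as given.
FACTS = {"it rained last night", "the pitch is uncovered"}


def prove(goal: str, depth: int = 0, max_depth: int = 10) -> bool:
    """Try to establish `goal` by reasoning backward from it to known facts."""
    if depth > max_depth:       # guard against cyclic or overly deep rule chains
        return False
    if goal in FACTS:           # base case: the goal is an observed fact
        return True
    premises = RULES.get(goal)  # otherwise, find a rule that concludes the goal
    if premises is None:
        return False
    return all(prove(p, depth + 1, max_depth) for p in premises)


print(prove("the match will be cancelled"))  # True, via a two-step backward chain
```

Starting from the goal and working back to supporting facts keeps the search focused on the question being asked, which is the appeal of the paradigm for multi-step reasoning.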
Related papers
- Reasoning with Natural Language Explanations [15.281385727331473] (arXiv 2024-10-05)
Explanation constitutes an archetypal feature of human rationality, underpinning learning and generalisation.
An increasing amount of research in Natural Language Inference (NLI) has started reconsidering the role that explanations play in learning and inference.
- Exploring Reasoning Biases in Large Language Models Through Syllogism: Insights from the NeuBAROCO Dataset [5.695579108997392] (arXiv 2024-08-08)
This paper explores the question of how accurately current large language models can perform logical reasoning in natural language.
We present a syllogism dataset called NeuBAROCO, which consists of syllogistic reasoning problems in English and Japanese.
Our experiments with leading large language models indicate that these models exhibit reasoning biases similar to humans, along with other error tendencies.
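To make concrete what a syllogistic reasoning problem asks, here is a minimal sketch that decides validity by brute-force model checking over the three terms S (subject), M (middle), and P (predicate). The encoding and the `valid` helper are hypothetical, not the dataset's format; NeuBAROCO poses the problems in natural language, and this check assumes the modern reading in which "all X are Y" carries no existential import.

```python
from itertools import product

TERMS = ("S", "M", "P")
# Every combination of properties a single individual may have.
PROFILES = [frozenset(t for t, keep in zip(TERMS, bits) if keep)
            for bits in product((0, 1), repeat=len(TERMS))]


def holds(statement, model):
    """Evaluate ('all' | 'no' | 'some', X, Y) in a model, i.e. a tuple of individuals."""
    kind, x, y = statement
    xs = [ind for ind in model if x in ind]
    if kind == "all":
        return all(y in ind for ind in xs)
    if kind == "no":
        return not any(y in ind for ind in xs)
    if kind == "some":
        return any(y in ind for ind in xs)
    raise ValueError(f"unknown quantifier: {kind}")


def valid(premises, conclusion):
    """Valid iff no three-individual model satisfies the premises yet falsifies the conclusion."""
    return not any(
        all(holds(p, model) for p in premises) and not holds(conclusion, model)
        for model in product(PROFILES, repeat=3)
    )


# Barbara ("all M are P; all S are M; so all S are P") is valid;
# swapping the first premise to "all P are M" makes the argument invalid.
print(valid([("all", "M", "P"), ("all", "S", "M")], ("all", "S", "P")))  # True
print(valid([("all", "P", "M"), ("all", "S", "M")], ("all", "S", "P")))  # False
```

The second argument is exactly the kind of superficially plausible but invalid form on which both humans and LLMs tend to show systematic biases.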
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452] (arXiv 2024-04-23)
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But, can they really "reason" over the natural language?
This question has been receiving significant research attention, and many reasoning skills, such as commonsense, numerical, and qualitative reasoning, have been studied.
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048] (arXiv 2023-05-24)
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities still remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
- ChatABL: Abductive Learning via Natural Language Interaction with ChatGPT [72.83383437501577] (arXiv 2023-04-21)
Large language models (LLMs) have recently demonstrated significant potential in mathematical abilities.
However, LLMs currently have difficulty bridging perception, language understanding, and reasoning capabilities.
This paper presents a novel method for integrating LLMs into the abductive learning framework.
- Logical Reasoning over Natural Language as Knowledge Representation: A Survey [43.29703101875716] (arXiv 2023-03-21)
This paper provides an overview on a new paradigm of logical reasoning, which uses natural language as knowledge representation and pretrained language models as reasoners.
This new paradigm is promising since it not only alleviates many challenges of formal representation but also has advantages over end-to-end neural methods.
- Language Models as Inductive Reasoners [125.99461874008703] (arXiv 2022-12-21)
We propose a new paradigm (task) for inductive reasoning, which is to induce natural language rules from natural language facts.
We create a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.
We provide the first comprehensive analysis of how well pretrained language models can induce natural language rules from natural language facts.
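As a rough illustration of this rule-induction setup, the sketch below pairs natural language facts with a target rule and formats them into an induction prompt; the example pair, the `RuleFactPair` class, and the prompt template are hypothetical and not taken from DEER.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class RuleFactPair:
    """Hypothetical container mirroring the rule-fact pair idea (not DEER's schema)."""
    facts: List[str]   # observed natural language facts
    rule: str          # target natural language rule (gold annotation)


example = RuleFactPair(
    facts=[
        "Robins build nests in trees in the spring.",
        "Sparrows build nests in trees in the spring.",
    ],
    rule="If an animal is a bird, then it usually builds a nest in the spring.",
)


def induction_prompt(pair: RuleFactPair) -> str:
    """Format the facts as an instruction asking a language model to induce a rule."""
    facts = "\n".join(f"- {fact}" for fact in pair.facts)
    return f"Facts:\n{facts}\n\nState one general rule that explains these facts:"


print(induction_prompt(example))
```

The induced rule can then be compared against the gold rule, which is where the analysis of how well pretrained models perform this task comes in.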
- An Inclusive Notion of Text [69.36678873492373] (arXiv 2022-11-10)
We argue that clarity on the notion of text is crucial for reproducible and generalizable NLP.
We introduce a two-tier taxonomy of linguistic and non-linguistic elements that are available in textual sources and can be used in NLP modeling.
- Logic, Language, and Calculus [8.475081627511166] (arXiv 2020-07-06)
The difference between object-language and metalanguage is crucial for logical analysis, but has not yet been examined in the field of computer science.
It is argued that inferential relations in a metalanguage (like a calculus for propositional logic) cannot represent conceptual relations of natural language.