A & B == B & A: Triggering Logical Reasoning Failures in Large Language
Models
- URL: http://arxiv.org/abs/2401.00757v1
- Date: Mon, 1 Jan 2024 13:53:53 GMT
- Title: A & B == B & A: Triggering Logical Reasoning Failures in Large Language
Models
- Authors: Yuxuan Wan, Wenxuan Wang, Yiliu Yang, Youliang Yuan, Jen-tse Huang,
Pinjia He, Wenxiang Jiao, Michael R. Lyu
- Abstract summary: We introduce LogicAsker, an automatic approach that comprehensively evaluates and improves the logical reasoning abilities of LLMs.
We evaluate LogicAsker on six widely deployed LLMs, including GPT-3, ChatGPT, GPT-4, Bard, Vicuna, and Guanaco.
The results show that test cases from LogicAsker can find logical reasoning failures in different LLMs at rates of 25% to 94%.
- Score: 65.86149763739141
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in large language models (LLMs) have propelled Artificial
Intelligence (AI) to new heights, enabling breakthroughs in various tasks such
as writing assistance, code generation, and machine translation. A significant
distinction of advanced LLMs, such as ChatGPT, is their demonstrated ability to
"reason." However, evaluating the reasoning ability of LLMs remains a challenge
as most existing evaluations focus on their accuracy on the downstream tasks
rather than directly assessing their reasoning processes. Efforts have been
made to develop benchmarks and metrics to assess reasoning in LLMs, but they
suffer from data leakage or limited scope. In this paper, we introduce
LogicAsker, an automatic approach that comprehensively evaluates and improves
the logical reasoning abilities of LLMs under a set of atomic reasoning skills
based on propositional and predicate logic. The results provide insights into
LLMs' reasoning abilities and reveal the logical rules the LLMs did not learn
well. We evaluate LogicAsker on six widely deployed LLMs, including GPT-3,
ChatGPT, GPT-4, Bard, Vicuna, and Guanaco. The results show that test cases
from LogicAsker can find logical reasoning failures in different LLMs at rates
of 25% to 94%. In addition, the test cases of LogicAsker can be further used to
design demonstration examples for in-context learning, which effectively
improves the logical reasoning ability of LLMs, e.g., by 10% for GPT-4. As far
as we know, our work is the first to create prompts based on
testing results to improve LLMs' formal reasoning ability effectively. All the
code, data, and results will be released for reproduction and future research.
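The paper's title refers to the commutativity of conjunction, A ∧ B ≡ B ∧ A, one of the atomic propositional rules from which LogicAsker-style test cases can be generated. The abstract does not include code, so the following is only a minimal Python sketch of the general idea: instantiating an atomic rule as a natural-language yes/no query and grading a model's answer. The rule table, phrasing templates, and grading helper are illustrative assumptions, not the authors' implementation.

```python
import random

# Illustrative sketch only: turn an atomic propositional rule into a
# natural-language test case, in the spirit of LogicAsker. The rule
# table, templates, and grading below are assumptions, not the
# authors' actual implementation.

RULES = {
    # name: (premise template, conclusion template, correct answer)
    "commutativity_and": ("{p} and {q}", "{q} and {p}", "yes"),
    "modus_ponens": ("if {p} then {q}, and {p}", "{q}", "yes"),
    "denying_the_antecedent": ("if {p} then {q}, and not {p}", "not {q}", "no"),
}

FACTS = ["it is raining", "the ground is wet", "the alarm is ringing"]

def make_test_case(rule_name: str) -> dict:
    """Instantiate one rule with concrete propositions."""
    premise_t, conclusion_t, expected = RULES[rule_name]
    p, q = random.sample(FACTS, 2)
    prompt = (
        f"Suppose that {premise_t.format(p=p, q=q)}. "
        f"Does it necessarily follow that {conclusion_t.format(p=p, q=q)}? "
        "Answer yes or no."
    )
    return {"rule": rule_name, "prompt": prompt, "expected": expected}

def is_failure(model_answer: str, expected: str) -> bool:
    """A test case fails when the model's yes/no disagrees with logic."""
    return not model_answer.strip().lower().startswith(expected)

if __name__ == "__main__":
    case = make_test_case("commutativity_and")
    print(case["prompt"])
    # A real harness would send case["prompt"] to each LLM under test and
    # tally is_failure(response, case["expected"]) per rule, producing the
    # kind of per-skill failure-rate report described in the abstract.
```

Aggregating failures per rule is what yields figures like the 25% to 94% rates above, and, per the abstract, the failing cases themselves can be recycled as in-context demonstrations to improve the model under test.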
Related papers
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But can they really "reason" over natural language?
This question has been receiving significant research attention and many reasoning skills such as commonsense, numerical, and qualitative have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z)
- Reason from Fallacy: Enhancing Large Language Models' Logical Reasoning through Logical Fallacy Understanding [40.2816930342597]
Large Language Models (LLMs) have demonstrated good performance in many reasoning tasks.
However, they still struggle with more complex reasoning tasks, including logical reasoning.
In this paper, we propose five concrete tasks spanning three cognitive dimensions: WHAT, WHY, and HOW.
arXiv Detail & Related papers (2024-04-04T08:38:03Z)
- Do Large Language Models Understand Logic or Just Mimick Context? [14.081178100662163]
This paper investigates the reasoning capabilities of large language models (LLMs) on two logical reasoning datasets.
It is found that LLMs do not truly understand logical rules; rather, in-context learning has simply enhanced the likelihood of these models arriving at the correct answers.
arXiv Detail & Related papers (2024-02-19T12:12:35Z)
- CLOMO: Counterfactual Logical Modification with Large Language Models [109.60793869938534]
We introduce a novel task, Counterfactual Logical Modification (CLOMO), and a high-quality human-annotated benchmark.
In this task, LLMs must adeptly alter a given argumentative text to uphold a predetermined logical relationship.
We propose an innovative evaluation metric, the Self-Evaluation Score (SES), to directly evaluate the natural language output of LLMs.
arXiv Detail & Related papers (2023-11-29T08:29:54Z)
- Learning To Teach Large Language Models Logical Reasoning [33.88499005859982]
Large language models (LLMs) have gained enormous attention from both academia and industry.
However, current LLMs still output unreliable content in practical reasoning tasks due to their inherent limitations.
arXiv Detail & Related papers (2023-10-13T14:53:06Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge-distillation fine-tuning technique to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- Exploring Self-supervised Logic-enhanced Training for Large Language Models [59.227222647741094]
In this paper, we make the first attempt to investigate the feasibility of incorporating logical knowledge through self-supervised post-training.
We devise an auto-regressive objective variant of MERIt and integrate it with two LLM series, i.e., FLAN-T5 and LLaMA, with parameter sizes ranging from 3 billion to 13 billion.
The results on two challenging logical reasoning benchmarks demonstrate the effectiveness of LogicLLM.
arXiv Detail & Related papers (2023-05-23T06:13:10Z)
- Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning [101.26814728062065]
Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems.
This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic solvers to improve logical problem-solving.
arXiv Detail & Related papers (2023-05-20T22:25:38Z)
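As a hedged illustration of the solver half of such an LLM-plus-solver pipeline (the formulas, variable names, and choice of the Z3 solver below are assumptions for illustration, not details from the Logic-LM paper), checking whether premises entail a conclusion reduces to testing that the premises conjoined with the negated conclusion are unsatisfiable:

```python
# Minimal sketch of the symbolic-solver side of an LLM + solver pipeline,
# assuming an LLM has already translated a word problem into formulas.
# The formulas and the use of Z3 here are illustrative assumptions.
from z3 import And, Bools, Implies, Not, Solver, unsat

rain, wet = Bools("rain wet")

# "If it rains, the ground is wet; it rains."
premises = And(Implies(rain, wet), rain)
# "The ground is wet."
conclusion = wet

# Entailment check: premises |= conclusion iff
# (premises AND NOT conclusion) is unsatisfiable.
s = Solver()
s.add(premises, Not(conclusion))
print("entailed" if s.check() == unsat else "not entailed")  # -> entailed
```

Because the solver's verdict is provably correct given the formulas, any residual error in such a pipeline comes from the LLM's translation step, which is the faithfulness concern the Logic-LM title alludes to.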