Logical Fallacy Detection
- URL: http://arxiv.org/abs/2202.13758v3
- Date: Mon, 12 Dec 2022 04:47:49 GMT
- Title: Logical Fallacy Detection
- Authors: Zhijing Jin, Abhinav Lalwani, Tejas Vaidhya, Xiaoyu Shen, Yiwen Ding,
Zhiheng Lyu, Mrinmaya Sachan, Rada Mihalcea, Bernhard Schölkopf
- Abstract summary: We propose the task of logical fallacy detection, and provide a new dataset (Logic) of logical fallacies generally found in text.
We show that a simple structure-aware classifier outperforms the best language model by 5.46% on Logic and 4.51% on LogicClimate.
- Score: 40.06349885733248
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reasoning is central to human intelligence. However, fallacious arguments are
common, and some exacerbate problems such as spreading misinformation about
climate change. In this paper, we propose the task of logical fallacy
detection, and provide a new dataset (Logic) of logical fallacies generally
found in text, together with an additional challenge set for detecting logical
fallacies in climate change claims (LogicClimate). Detecting logical fallacies
is a hard problem as the model must understand the underlying logical structure
of the argument. We find that existing pretrained large language models perform
poorly on this task. In contrast, we show that a simple structure-aware
classifier outperforms the best language model by 5.46% on Logic and 4.51% on
LogicClimate. We encourage future work to explore this task as (a) it can serve
as a new reasoning challenge for language models, and (b) it can have potential
applications in tackling the spread of misinformation. Our dataset and code are
available at https://github.com/causalNLP/logical-fallacy
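To make the task concrete, below is a minimal sketch of one structure-aware preprocessing idea: repeated content spans are replaced with placeholder symbols so a classifier sees the argument's logical skeleton rather than its topic words. The masking scheme, function name, and example are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: expose an argument's logical form by replacing
# repeated content spans with placeholder symbols before classification.
import re
from collections import Counter

def mask_repeated_spans(text: str, n: int = 2) -> str:
    """Replace word n-grams that occur more than once with MSK<i> symbols,
    keeping only the argument's logical skeleton."""
    words = re.findall(r"\w+", text.lower())
    grams = Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
    masked, i = text, 0
    for gram, count in grams.items():
        if count > 1:
            i += 1
            masked = re.sub(re.escape(" ".join(gram)), f"MSK{i}",
                            masked, flags=re.IGNORECASE)
    return masked

arg = "Everyone is buying this phone, so this phone must be the best."
print(mask_repeated_spans(arg))
# -> "Everyone is buying MSK1, so MSK1 must be the best."
```

A classifier trained on such masked inputs can pick up patterns like "everyone does X, so X is good" (ad populum) instead of memorizing content words.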
Related papers
- A Logical Fallacy-Informed Framework for Argument Generation [34.35377699079075]
We introduce FIPO, a fallacy-informed framework that steers Large Language Models toward logically sound arguments.
Our results on argumentation datasets show that our method reduces fallacy errors by up to 17.5%.
Our code is available at https://github.com/lucamouchel/Logical-Fallacies
arXiv Detail & Related papers (2024-08-07T08:19:44Z)
- Flee the Flaw: Annotating the Underlying Logic of Fallacious Arguments Through Templates and Slot-filling [15.339084849719223]
We introduce four sets of explainable templates for common informal logical fallacies.
We conduct an annotation study on 400 fallacious arguments taken from the LOGIC dataset.
We discover that state-of-the-art language models struggle with detecting fallacy templates.
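As a rough illustration of the template-and-slot idea, the sketch below pairs each fallacy type with an explainable template whose slots are filled from the argument. The template strings and names are invented for illustration and are not the paper's annotation scheme.

```python
# Hedged illustration: explainable fallacy templates with fillable slots.
# These templates are illustrative assumptions, not the paper's own.
TEMPLATES = {
    "false_causality": "{A} happened before {B}, therefore {A} caused {B}.",
    "ad_populum": "Many people believe {A}, therefore {A} is true.",
    "circular_reasoning": "{A} is true because {A} is true.",
}

def instantiate(fallacy: str, **slots: str) -> str:
    """Fill a fallacy template's slots to explain why an argument is flawed."""
    return TEMPLATES[fallacy].format(**slots)

print(instantiate("false_causality",
                  A="the rooster crowed", B="the sun rose"))
# -> "the rooster crowed happened before the sun rose, therefore ..."
```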
arXiv Detail & Related papers (2024-06-18T08:44:45Z)
- NL2FOL: Translating Natural Language to First-Order Logic for Logical Fallacy Detection [45.28949266878263]
We design a process to reliably detect logical fallacies by translating natural language to first-order logic (FOL).
We then use Satisfiability Modulo Theories (SMT) solvers to reason about the validity of the resulting formula.
Our approach is robust, interpretable and does not require training data or fine-tuning.
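As a rough sketch of the solver step, the example below encodes a classic valid syllogism in first-order logic and asks Z3 (one SMT solver with Python bindings) whether the premises plus the negated conclusion are unsatisfiable. The translation from natural language is assumed to have happened already, and this encoding is illustrative rather than the paper's code.

```python
# Minimal validity check with the Z3 SMT solver (pip install z3-solver).
from z3 import (BoolSort, Const, DeclareSort, ForAll, Function, Implies,
                Not, Solver, unsat)

Entity = DeclareSort("Entity")                  # domain of discourse
Human = Function("Human", Entity, BoolSort())   # unary predicates
Mortal = Function("Mortal", Entity, BoolSort())
socrates = Const("socrates", Entity)
x = Const("x", Entity)

premises = [
    ForAll([x], Implies(Human(x), Mortal(x))),  # all humans are mortal
    Human(socrates),                            # Socrates is human
]
conclusion = Mortal(socrates)                   # Socrates is mortal

# The argument is valid iff premises AND NOT(conclusion) is unsatisfiable.
solver = Solver()
solver.add(*premises)
solver.add(Not(conclusion))
print("valid" if solver.check() == unsat else "invalid (potential fallacy)")
```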
arXiv Detail & Related papers (2024-04-18T00:20:48Z)
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z)
- Language Models can be Logical Solvers [99.40649402395725]
We introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers.
LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset built by revealing and refining the hidden reasoning processes of deductive solvers.
arXiv Detail & Related papers (2023-11-10T16:23:50Z)
- Empower Nested Boolean Logic via Self-Supervised Curriculum Learning [67.46052028752327]
We find that pre-trained language models, including large language models, behave like random selectors when faced with multi-nested Boolean logic.
To equip language models with this fundamental capability, the paper proposes a new self-supervised learning method, Curriculum Logical Reasoning (CLR); a sketch of the kind of data it targets follows below.
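The sketch below generates the sort of multi-nested Boolean statement the paper probes, with nesting depth as a curriculum knob (shallow first, deeper as the model improves). The generation scheme is an assumption for illustration, not the paper's training pipeline.

```python
# Illustrative generator for nested Boolean expressions of a given depth.
import random

def nested_boolean(depth: int) -> tuple[str, bool]:
    """Recursively build a nested and/or/not expression and its truth value."""
    if depth == 0:
        val = random.choice([True, False])
        return str(val), val
    op = random.choice(["and", "or", "not"])
    if op == "not":
        expr, val = nested_boolean(depth - 1)
        return f"(not {expr})", (not val)
    (le, lv), (re_, rv) = nested_boolean(depth - 1), nested_boolean(depth - 1)
    val = (lv and rv) if op == "and" else (lv or rv)
    return f"({le} {op} {re_})", val

# Curriculum: start shallow, increase nesting as training progresses.
for depth in range(1, 4):
    expr, truth = nested_boolean(depth)
    print(depth, expr, "=>", truth)
```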
arXiv Detail & Related papers (2023-10-09T06:54:02Z)
- Theme Aspect Argumentation Model for Handling Fallacies [2.3230307339947274]
We present a novel approach to characterising fallacies through formal constraints.
By identifying fallacies with formal constraints, it becomes possible to tell with formal rigour whether a fallacy lurks in the modelled rhetoric.
arXiv Detail & Related papers (2022-05-30T14:34:09Z)
- On the Paradox of Learning to Reason from Data [86.13662838603761]
We show that BERT can attain near-perfect accuracy on in-distribution test examples while failing to generalize to other data distributions over the exact same problem space.
Our study provides an explanation for this paradox: instead of learning to emulate the correct reasoning function, BERT has in fact learned statistical features that inherently exist in logical reasoning problems.
arXiv Detail & Related papers (2022-05-23T17:56:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.