Theme Aspect Argumentation Model for Handling Fallacies
- URL: http://arxiv.org/abs/2205.15141v2
- Date: Wed, 25 Oct 2023 09:49:55 GMT
- Title: Theme Aspect Argumentation Model for Handling Fallacies
- Authors: Ryuta Arisaka, Ryoma Nakai, Yusuke Kawamoto, Takayuki Ito
- Abstract summary: We present a novel approach to characterising fallacies through formal constraints.
By identifying fallacies with formal constraints, it becomes possible to tell, with formal rigour, whether a fallacy lurks in the modelled rhetoric.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: From daily discussions to marketing ads to political statements, information manipulation is rife. It is increasingly important that we have the right set of tools to defend ourselves against manipulative rhetoric, or fallacies. Suitable techniques for automatically identifying fallacies are being investigated in natural language processing research. However, a fallacy in one context may not be a fallacy in another, so there is also a need to explain how and why something has come to be judged a fallacy. For explainable fallacy identification, we present a novel approach that characterises fallacies through formal constraints, as a viable alternative to more traditional fallacy classifications by informal criteria. To achieve this objective, we introduce a novel context-aware argumentation model, the theme aspect argumentation model, which supports both the modelling of a given argumentation as it is expressed (rhetorical modelling) and a deeper semantic analysis of the rhetorical argumentation model. By identifying fallacies with formal constraints, it becomes possible to tell, with formal rigour, whether a fallacy lurks in the modelled rhetoric. We present core formal constraints for the theme aspect argumentation model and then further formal constraints that improve its fallacy identification capability. We show and prove the consequences of these formal constraints, and we analyse the computational complexity of deciding their satisfiability.
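To make the general idea concrete, here is a minimal toy sketch (in Python) of constraint-based fallacy identification. It is an illustration only, not the authors' theme aspect argumentation model: the Statement class, the single theme label per statement, and the "on-theme attack" constraint below are all simplifying assumptions.

```python
# Toy illustration of constraint-based fallacy identification.
# This is a deliberately simplified stand-in, NOT the paper's actual
# theme aspect argumentation model.

from dataclasses import dataclass

@dataclass(frozen=True)
class Statement:
    text: str
    theme: str  # what the statement is about

s1 = Statement("Policy X lowers emissions.", theme="policy")
s2 = Statement("Policy X is too costly.", theme="policy")
s3 = Statement("The proposer of X once lied.", theme="person")

# Attack relation of the rhetoric "as expressed": attacker -> attacked.
attacks = {(s2, s1), (s3, s1)}

def on_theme(attacker: Statement, attacked: Statement) -> bool:
    """Illustrative formal constraint: an attack is acceptable only if
    attacker and attacked address the same theme."""
    return attacker.theme == attacked.theme

# Attacks violating the constraint mark potential fallacies
# (here, an ad-hominem-like shift of theme).
for attacker, attacked in attacks:
    if not on_theme(attacker, attacked):
        print(f"Potential fallacy: {attacker.text!r} attacks "
              f"{attacked.text!r} off-theme.")
```

In this toy form, checking the constraint is a linear scan over the attack relation; the paper's actual constraints are considerably richer, which is why it analyses the computational complexity of deciding their satisfiability.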
Related papers
- A Logical Fallacy-Informed Framework for Argument Generation
We introduce FIPO, a fallacy-informed framework that steers Large Language Models toward logically sound arguments.
Our results on argumentation datasets show that our method reduces fallacy errors by up to 17.5%.
Our code is available at lucamouchel.com/lucamouchel/Logical-Fallacies.
arXiv Detail & Related papers (2024-08-07T08:19:44Z)
- Flee the Flaw: Annotating the Underlying Logic of Fallacious Arguments Through Templates and Slot-filling
We introduce four sets of explainable templates for common informal logical fallacies.
We conduct an annotation study on 400 fallacious arguments taken from the LOGIC dataset.
We discover that state-of-the-art language models struggle with detecting fallacy templates.
arXiv Detail & Related papers (2024-06-18T08:44:45Z)
- Discussion Graph Semantics of First-Order Logic with Equality for Reasoning about Discussion and Argumentation
We formulate discussion graph semantics of first-order logic with equality for reasoning about discussion and argumentation.
We achieve the generality through a top-down formulation of the semantics of first-order logic (with equality) formulas.
arXiv Detail & Related papers (2024-06-18T00:32:00Z)
- Missci: Reconstructing Fallacies in Misrepresented Science
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation-theoretical model for fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z)
- CASA: Causality-driven Argument Sufficiency Assessment
We propose CASA, a zero-shot causality-driven argument sufficiency assessment framework.
PS measures how likely it is that introducing the premise event would lead to the conclusion when both the premise and conclusion events are absent.
Experiments on two logical fallacy detection datasets demonstrate that CASA accurately identifies insufficient arguments.
arXiv Detail & Related papers (2024-01-10T16:21:18Z)
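For reference, the "PS" in the CASA summary above appears to be the standard probability of sufficiency from causal inference; the notation below is an assumption on our part, not taken from the paper. With premise event $x$ and conclusion event $y$ (and $x'$, $y'$ denoting their absence):

$$\mathrm{PS} = P(y_x \mid x', y')$$

that is, the probability that the conclusion would have held had the premise been introduced, given that in fact neither the premise nor the conclusion held.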
- Large Language Models are Few-Shot Training Example Generators: A Case Study in Fallacy Recognition
Computational fallacy recognition faces challenges due to the diverse genres, domains, and types of fallacies found in datasets.
We aim to enhance existing models for fallacy recognition by incorporating additional context and by leveraging large language models to generate synthetic data.
Our evaluation results demonstrate consistent improvements across fallacy types, datasets, and generators.
arXiv Detail & Related papers (2023-11-16T04:17:47Z)
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates whether the model's prediction on the counterfactual is consistent with the expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
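As a rough illustration of the counterfactual-consistency check described in the entry above, here is a runnable toy sketch; the stand-in NLI model, the predicate negation, and the example are all hypothetical simplifications, not the paper's implementation.

```python
# Hypothetical sketch of a faithfulness-through-counterfactuals check.

def toy_nli_model(premise: str, hypothesis: str) -> str:
    # Stand-in NLI "model": predicts entailment iff the hypothesis
    # string literally occurs in the premise.
    return "entailment" if hypothesis in premise else "neutral"

def negate(hypothesis: str) -> str:
    # Stand-in for constructing a counterfactual hypothesis by negating
    # a logical predicate the explanation cites as decisive.
    return "it is not the case that " + hypothesis

premise = "a dog is running in the park"
hypothesis = "dog is running"

original = toy_nli_model(premise, hypothesis)                # "entailment"
counterfactual = toy_nli_model(premise, negate(hypothesis))  # "neutral"

# If the explanation's logic is faithful, negating the decisive predicate
# should change the prediction in the way that logic dictates.
print("consistent with expressed logic:", original != counterfactual)
```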
- Logical Fallacy Detection
We propose the task of logical fallacy detection and provide a new dataset (Logic) of logical fallacies generally found in text.
We show that a simple structure-aware classifier outperforms the best language model by 5.46% on Logic and 4.51% on LogicClimate.
arXiv Detail & Related papers (2022-02-28T13:18:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.