A Logical Fallacy-Informed Framework for Argument Generation
- URL: http://arxiv.org/abs/2408.03618v2
- Date: Sat, 12 Oct 2024 13:49:49 GMT
- Title: A Logical Fallacy-Informed Framework for Argument Generation
- Authors: Luca Mouchel, Debjit Paul, Shaobo Cui, Robert West, Antoine Bosselut, Boi Faltings
- Abstract summary: We introduce FIPO, a fallacy-informed framework that steers Large Language Models toward logically sound arguments.
Our results on argumentation datasets show that our method reduces fallacy errors by up to 17.5%.
Our code is available at github.com/lucamouchel/Logical-Fallacies.
- Score: 34.35377699079075
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the remarkable performance of Large Language Models (LLMs) in natural language processing tasks, they still struggle to generate logically sound arguments, creating risks such as the spread of misinformation. To address this issue, we introduce FIPO, a fallacy-informed framework that leverages preference optimization methods to steer LLMs toward logically sound arguments. FIPO includes a classification loss to capture fine-grained information about fallacy types. Our results on argumentation datasets show that our method reduces fallacy errors by up to 17.5%. Furthermore, our human evaluation indicates that arguments generated by our method are of significantly higher quality than those from fine-tuned baselines and from other preference optimization methods such as DPO. These findings highlight the importance of making models aware of logical fallacies for effective argument generation. Our code is available at github.com/lucamouchel/Logical-Fallacies.
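To make the objective described in the abstract concrete, here is a minimal sketch, assuming a PyTorch setup, of how a DPO-style preference loss can be combined with an auxiliary cross-entropy over fallacy types. The function names, the weighting coefficient alpha, and the choice to classify the dispreferred (fallacious) argument are illustrative assumptions, not FIPO's actual formulation.

```python
# Illustrative sketch only: a DPO-style preference loss plus an auxiliary
# fallacy-type classification term. Names, weights, and the exact combination
# are assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO loss computed from per-sequence log-probabilities."""
    chosen_reward = beta * (logp_chosen - ref_logp_chosen)
    rejected_reward = beta * (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(chosen_reward - rejected_reward).mean()

def fallacy_informed_loss(logp_chosen, logp_rejected,
                          ref_logp_chosen, ref_logp_rejected,
                          fallacy_logits, fallacy_labels,
                          beta=0.1, alpha=1.0):
    """Preference loss plus a cross-entropy over fallacy types (hypothetical)."""
    pref = dpo_loss(logp_chosen, logp_rejected,
                    ref_logp_chosen, ref_logp_rejected, beta)
    # Cross-entropy over the fallacy type of each dispreferred argument,
    # injecting fine-grained fallacy information into the objective.
    cls = F.cross_entropy(fallacy_logits, fallacy_labels)
    return pref + alpha * cls
```

In this sketch, the classification term is what carries the fine-grained fallacy-type signal; the rest is a standard preference-optimization loop.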
Related papers
- Are LLMs Good Zero-Shot Fallacy Classifiers? [24.3005882003251]
We focus on leveraging Large Language Models (LLMs) for zero-shot fallacy classification.
Comprehensive experiments on benchmark datasets suggest that LLMs are promising zero-shot fallacy classifiers.
Our novel multi-round prompting schemes yield further improvements, especially for small LLMs.
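As a minimal illustration of zero-shot fallacy classification, the sketch below builds a single-round classification prompt; the label set and the `generate` callable standing in for an LLM API are assumptions, not the paper's prompting scheme.

```python
# Minimal zero-shot fallacy-classification sketch. The label set and the
# `generate` callable (any prompt-to-text LLM interface) are illustrative
# assumptions, not the paper's multi-round scheme.
FALLACY_TYPES = [
    "ad hominem", "appeal to emotion", "false causality",
    "circular reasoning", "faulty generalization",
]

def build_prompt(argument: str) -> str:
    labels = ", ".join(FALLACY_TYPES)
    return (
        "You are an expert in informal logic.\n"
        "Classify the logical fallacy in the following argument.\n"
        f"Answer with exactly one label from: {labels}.\n\n"
        f"Argument: {argument}\nLabel:"
    )

def classify(argument: str, generate) -> str:
    """`generate` is any function mapping a prompt string to model output text."""
    answer = generate(build_prompt(argument)).strip().lower()
    # Fall back to the first known label mentioned in the answer, if any.
    return next((label for label in FALLACY_TYPES if label in answer), answer)
```

A multi-round scheme like the one the paper proposes would add further prompting rounds on top of this single-round baseline.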
arXiv Detail & Related papers (2024-10-19T09:38:55Z) - Flee the Flaw: Annotating the Underlying Logic of Fallacious Arguments Through Templates and Slot-filling [15.339084849719223]
We introduce four sets of explainable templates for common informal logical fallacies.
We conduct an annotation study on 400 fallacious arguments taken from the LOGIC dataset.
We discover that state-of-the-art language models struggle with detecting fallacy templates.
arXiv Detail & Related papers (2024-06-18T08:44:45Z) - Missci: Reconstructing Fallacies in Misrepresented Science [84.32990746227385]
Health-related misinformation on social networks can lead to poor decision-making and real-world dangers.
Missci is a novel argumentation-theoretical model of fallacious reasoning.
We present Missci as a dataset to test the critical reasoning abilities of large language models.
arXiv Detail & Related papers (2024-06-05T12:11:10Z) - NL2FOL: Translating Natural Language to First-Order Logic for Logical Fallacy Detection [45.28949266878263]
We design a process to reliably detect logical fallacies by translating natural language to First-order Logic.
We then utilize Satisfiability Modulo Theory (SMT) solvers to reason about the validity of the formula.
Our approach is robust, interpretable and does not require training data or fine-tuning.
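To illustrate the solver step of such a pipeline, here is a minimal sketch using the Z3 SMT solver: a conclusion follows from the translated premises exactly when premises AND NOT(conclusion) is unsatisfiable. The propositional encoding of "affirming the consequent" below is an assumed toy example, not taken from the paper.

```python
# Minimal validity check with Z3 (illustrative, not the paper's pipeline).
# An argument is valid iff (premises AND NOT conclusion) is unsatisfiable.
from z3 import And, Bools, Implies, Not, Solver, unsat

p, q = Bools("p q")                 # atoms assumed to come from the NL-to-logic step
premises = And(Implies(p, q), q)    # "if p then q" and "q"
conclusion = p                      # affirming the consequent: does p follow?

solver = Solver()
solver.add(premises, Not(conclusion))

# unsat -> no countermodel exists, so the argument is valid;
# sat   -> Z3 returns a countermodel showing why the conclusion does not follow.
print("valid" if solver.check() == unsat else "invalid (fallacious)")
```

When the check is sat, the countermodel itself serves as an interpretable explanation of why the argument fails, consistent with the interpretability claim above.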
arXiv Detail & Related papers (2024-04-18T00:20:48Z) - LogicAsker: Evaluating and Improving the Logical Reasoning Ability of Large Language Models [63.14196038655506]
We introduce LogicAsker, a novel approach for evaluating and enhancing the logical reasoning capabilities of large language models (LLMs).
Our methodology reveals significant gaps in LLMs' learning of logical rules, with identified reasoning failures ranging from 29% to 90% across different models.
We leverage these findings to construct targeted demonstration examples and fine-tuning data, notably enhancing logical reasoning in models like GPT-4o by up to 5%.
arXiv Detail & Related papers (2024-01-01T13:53:53Z) - Large Language Models are Few-Shot Training Example Generators: A Case Study in Fallacy Recognition [49.38757847011105]
Computational fallacy recognition faces challenges due to diverse genres, domains, and types of fallacies found in datasets.
We aim to enhance existing models for fallacy recognition by incorporating additional context and by leveraging large language models to generate synthetic data.
Our evaluation results demonstrate consistent improvements across fallacy types, datasets, and generators.
arXiv Detail & Related papers (2023-11-16T04:17:47Z) - A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z) - Case-Based Reasoning with Language Models for Classification of Logical Fallacies [3.511369967593153]
We propose a Case-Based Reasoning method that classifies new cases of logical fallacy.
Our experiments indicate that Case-Based Reasoning improves the accuracy and generalizability of language models.
arXiv Detail & Related papers (2023-01-27T17:49:16Z)