Argument Undermining: Counter-Argument Generation by Attacking Weak
Premises
- URL: http://arxiv.org/abs/2105.11752v1
- Date: Tue, 25 May 2021 08:39:14 GMT
- Title: Argument Undermining: Counter-Argument Generation by Attacking Weak
Premises
- Authors: Milad Alshomary, Shahbaz Syed, Martin Potthast and Henning Wachsmuth
- Abstract summary: We explore argument undermining, that is, countering an argument by attacking one of its premises.
We propose a pipeline approach that first assesses the premises' strength and then generates a counter-argument targeting the weak ones.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text generation has recently received a lot of attention in
computational argumentation research. A particularly challenging task is the
generation of counter-arguments. So far, approaches primarily focus on
rebutting a given conclusion, yet other ways to counter an argument exist. In
this work, we go beyond previous research by exploring argument undermining,
that is, countering an argument by attacking one of its premises. We
hypothesize that identifying the argument's weak premises is key to effective
countering. Accordingly, we propose a pipeline approach that first assesses the
premises' strength and then generates a counter-argument targeting the weak
ones. On the one hand, both manual and automatic evaluation prove the
importance of identifying weak premises in counter-argument generation. On the
other hand, when considering correctness and content richness, human annotators
favored our approach over state-of-the-art counter-argument generation.
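The two-step pipeline described in the abstract can be illustrated with a toy sketch. Note that this is not the authors' code: the hedging-word heuristic and the template-based counter below are hypothetical stand-ins for the learned premise-strength ranking and neural generation models the paper actually uses.

```python
# Illustrative sketch of the paper's pipeline: (1) score each premise's
# strength, (2) generate a counter-argument targeting the weakest one.
# The scorer and generator here are toy placeholders, not the paper's models.

HEDGES = {"might", "maybe", "probably", "some", "often", "could"}

def premise_strength(premise: str) -> float:
    """Toy strength score: premises with more hedging words score lower."""
    tokens = premise.lower().split()
    hedge_count = sum(t.strip(".,") in HEDGES for t in tokens)
    return 1.0 - hedge_count / max(len(tokens), 1)

def undermine(premises: list[str]) -> str:
    """Select the weakest premise and phrase a counter-argument against it."""
    weakest = min(premises, key=premise_strength)
    return f'The claim that "{weakest}" is questionable because ...'

premises = [
    "Vaccines are rigorously tested before approval.",
    "Side effects might often be worse than the disease.",
]
print(undermine(premises))
```

In the paper itself, both steps are learned from data; the key idea preserved here is that generation is conditioned on the lowest-scoring premise rather than on the argument's conclusion.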
Related papers
- Auditing Counterfire: Evaluating Advanced Counterargument Generation with Evidence and Style [11.243184875465788]
GPT-3.5 Turbo ranked highest in argument quality, with strong paraphrasing and style adherence, particularly in 'reciprocity'-style arguments.
The stylistic counter-arguments still fall short of human persuasive standards, with people also preferring reciprocity-based to evidence-based rebuttals.
arXiv Detail & Related papers (2024-02-13T14:53:12Z) - CASA: Causality-driven Argument Sufficiency Assessment [79.13496878681309]
We propose CASA, a zero-shot causality-driven argument sufficiency assessment framework.
PS (probability of sufficiency) measures how likely it is that introducing the premise event would lead to the conclusion when both the premise and conclusion events are absent.
Experiments on two logical fallacy detection datasets demonstrate that CASA accurately identifies insufficient arguments.
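The PS quantity in the summary above appears to match Pearl's probability of sufficiency from causal inference; under that reading, for a premise event $X$ and conclusion event $Y$ it can be written as:

```latex
% Probability of sufficiency: the chance that forcing the premise X to hold
% would bring about the conclusion Y, given that neither currently holds.
PS = P(Y_{X=1} = 1 \mid X = 0,\; Y = 0)
```

A low PS then indicates an insufficient argument: even granting the premise would be unlikely to yield the conclusion.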
arXiv Detail & Related papers (2024-01-10T16:21:18Z) - Argue with Me Tersely: Towards Sentence-Level Counter-Argument
Generation [62.069374456021016]
We present the ArgTersely benchmark for sentence-level counter-argument generation.
We also propose Arg-LlaMA for generating high-quality counter-arguments.
arXiv Detail & Related papers (2023-12-21T06:51:34Z) - Exploring Jiu-Jitsu Argumentation for Writing Peer Review Rebuttals [70.22179850619519]
In many domains of argumentation, people's arguments are driven by so-called attitude roots.
Recent work in psychology suggests that instead of directly countering surface-level reasoning, one should follow an argumentation style inspired by the Jiu-Jitsu 'soft' combat system.
We are the first to explore Jiu-Jitsu argumentation for peer review by proposing the novel task of attitude and theme-guided rebuttal generation.
arXiv Detail & Related papers (2023-11-07T13:54:01Z) - A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, thus opening up new future research directions in the area of formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z) - ArgU: A Controllable Factual Argument Generator [0.0]
ArgU is a neural argument generator capable of producing factual arguments from input facts and real-world concepts.
We have compiled and released an annotated corpus of 69,428 arguments spanning six topics and six argument schemes.
arXiv Detail & Related papers (2023-05-09T10:49:45Z) - Conclusion-based Counter-Argument Generation [26.540485804067536]
In real-world debates, the most common way to counter an argument is to reason against its main point, that is, its conclusion.
We propose a multitask approach that jointly learns to generate both the conclusion and the counter of an input argument.
arXiv Detail & Related papers (2023-01-24T10:49:01Z) - Towards a Holistic View on Argument Quality Prediction [3.182597245365433]
A decisive property of arguments is their strength or quality.
While there are works on the automated estimation of argument strength, their scope is narrow.
We assess the generalization capabilities of argument quality estimation across diverse domains, the interplay with related argument mining tasks, and the impact of emotions on perceived argument strength.
arXiv Detail & Related papers (2022-05-19T18:44:23Z) - LPAttack: A Feasible Annotation Scheme for Capturing Logic Pattern of
Attacks in Arguments [33.445994192714956]
In argumentative discourse, persuasion is often achieved by refuting or attacking others' arguments.
No existing studies capture complex rhetorical moves in attacks or the presuppositions or value judgements in them.
We introduce LPAttack, a novel annotation scheme that captures the common modes and complex rhetorical moves in attacks along with the implicit presuppositions and value judgements in them.
arXiv Detail & Related papers (2022-04-04T14:15:25Z) - Aspect-Controlled Neural Argument Generation [65.91772010586605]
We train a language model for argument generation that can be controlled on a fine-grained level to generate sentence-level arguments for a given topic, stance, and aspect.
Our evaluation shows that our generation model is able to generate high-quality, aspect-specific arguments.
These arguments can be used to improve the performance of stance detection models via data augmentation and to generate counter-arguments.
arXiv Detail & Related papers (2020-04-30T20:17:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.