Towards a Holistic View on Argument Quality Prediction
- URL: http://arxiv.org/abs/2205.09803v1
- Date: Thu, 19 May 2022 18:44:23 GMT
- Title: Towards a Holistic View on Argument Quality Prediction
- Authors: Michael Fromm, Max Berrendorf, Johanna Reiml, Isabelle Mayerhofer,
Siddharth Bhargava, Evgeniy Faerman, Thomas Seidl
- Abstract summary: A decisive property of arguments is their strength or quality.
While there are works on the automated estimation of argument strength, their scope is narrow.
We assess the generalization capabilities of argument quality estimation across diverse domains, the interplay with related argument mining tasks, and the impact of emotions on perceived argument strength.
- Score: 3.182597245365433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Argumentation is one of society's foundational pillars, and, sparked by
advances in NLP and the vast availability of text data, automated mining of
arguments receives increasing attention. A decisive property of arguments is
their strength or quality. While there are works on the automated estimation of
argument strength, their scope is narrow: they focus on isolated datasets and
neglect the interactions with related argument mining tasks, such as argument
identification, evidence detection, or emotional appeal. In this work, we close
this gap by approaching argument quality estimation from multiple
angles: grounded in rich results from thorough empirical evaluations, we assess
the generalization capabilities of argument quality estimation across diverse
domains, the interplay with related argument mining tasks, and the impact of
emotions on perceived argument strength. We find that generalization depends on
a sufficient representation of different domains in the training data. In
zero-shot transfer and multi-task experiments, we reveal that argument quality
is among the more challenging tasks but can improve others. Finally, we show
that emotions play a lesser role in argument quality than is often assumed.
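
To make the experimental setup more concrete, below is a minimal sketch of how such a multi-task model could look: a shared transformer encoder with a regression head for argument quality and a classification head for a related task such as argument identification. This is an illustration under assumed choices, not the authors' released code; the bert-base-uncased encoder, the head design, the 1:1 loss weighting, and the toy targets are all hypothetical.

```python
# Minimal multi-task sketch (assumptions, not the paper's exact setup):
# a shared encoder feeds two task-specific heads, so gradients from the
# auxiliary task can also shape the representation used for quality scoring.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskArgumentModel(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.quality_head = nn.Linear(hidden, 1)   # argument quality (regression)
        self.argument_head = nn.Linear(hidden, 2)  # argument vs. non-argument

    def forward(self, input_ids, attention_mask):
        # Use the [CLS] token representation as a pooled sentence embedding.
        cls = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        return self.quality_head(cls).squeeze(-1), self.argument_head(cls)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTaskArgumentModel()

batch = tokenizer(
    ["We should ban plastic bags because they pollute the oceans."],
    return_tensors="pt", padding=True, truncation=True,
)
quality, arg_logits = model(batch["input_ids"], batch["attention_mask"])

# Joint loss: MSE for the quality score, cross-entropy for identification.
# Targets and the 1:1 weighting are illustrative placeholders.
target_quality = torch.tensor([0.8])  # e.g., a crowd-sourced strength rating
target_is_arg = torch.tensor([1])     # 1 = the sentence is an argument
loss = nn.functional.mse_loss(quality, target_quality) \
       + nn.functional.cross_entropy(arg_logits, target_is_arg)
loss.backward()
```

Sharing the encoder is what would let one task "improve others" in the sense measured by multi-task experiments; cross-domain generalization and zero-shot transfer can then be probed by training such heads on some domains or tasks and evaluating on held-out ones.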
Related papers
- Persuasiveness of Generated Free-Text Rationales in Subjective Decisions: A Case Study on Pairwise Argument Ranking [4.1017420444369215] (2024-06-20)
We analyze generated free-text rationales in tasks with subjective answers.
We focus on pairwise argument ranking, a highly subjective task with significant potential for real-world applications.
Our findings suggest that open-source LLMs, particularly Llama2-70B-chat, are capable of providing highly persuasive rationalizations.
- Argument Quality Assessment in the Age of Instruction-Following Large Language Models [45.832808321166844] (2024-03-24)
A critical task in argumentation-related applications is the assessment of an argument's quality.
We identify the diversity of quality notions and the subjectivity of their perception as the main hurdles to substantial progress on argument quality assessment.
We argue that the ability of instruction-following large language models (LLMs) to leverage knowledge across contexts enables a much more reliable assessment.
- Argue with Me Tersely: Towards Sentence-Level Counter-Argument Generation [62.069374456021016] (2023-12-21)
We present the ArgTersely benchmark for sentence-level counter-argument generation.
We also propose Arg-LlaMA for generating high-quality counter-arguments.
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954] (2023-06-15)
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce a novel challenge, DiPlomat, aimed at benchmarking machines' capabilities in pragmatic reasoning and situated conversational understanding.
- Contextualizing Argument Quality Assessment with Relevant Knowledge [11.367297319588411] (2023-05-20)
SPARK is a novel method for scoring argument quality based on contextualization via relevant knowledge.
We devise four augmentations that leverage large language models to provide feedback, infer hidden assumptions, supply a similar-quality argument, or give a counter-argument.
- Diversity Over Size: On the Effect of Sample and Topic Sizes for Topic-Dependent Argument Mining Datasets [49.65208986436848] (2022-05-23)
We investigate the effect of Argument Mining dataset composition in few- and zero-shot settings.
Our findings show that, while fine-tuning is mandatory for acceptable model performance, carefully composed training samples allow reducing the training set size by up to almost 90% while retaining 95% of the maximum performance.
- Argument Undermining: Counter-Argument Generation by Attacking Weak Premises [31.463885580010192] (2021-05-25)
We explore argument undermining, that is, countering an argument by attacking one of its premises.
We propose a pipeline approach that first assesses the premises' strength and then generates a counter-argument targeting the weak ones.
- Argument Mining Driven Analysis of Peer-Reviews [4.552676857046446] (2020-12-10)
We propose an Argument Mining based approach to assist editors, meta-reviewers, and reviewers.
One of our findings is that arguments used in the peer-review process differ from arguments in other domains, making the transfer of pre-trained models difficult.
We provide the community with a new peer-review dataset from different computer science conferences with annotated arguments.
- Aspect-Controlled Neural Argument Generation [65.91772010586605] (2020-04-30)
We train a language model for argument generation that can be controlled at a fine-grained level to generate sentence-level arguments for a given topic, stance, and aspect.
Our evaluation shows that our generation model is able to generate high-quality, aspect-specific arguments.
These arguments can be used to improve the performance of stance detection models via data augmentation and to generate counter-arguments.
- What Changed Your Mind: The Roles of Dynamic Topics and Discourse in Argumentation Process [78.4766663287415] (2020-02-10)
This paper presents a study that automatically analyzes the key factors in argument persuasiveness.
We propose a novel neural model that tracks changes in latent topics and discourse in argumentative conversations.