Contextualizing Argument Quality Assessment with Relevant Knowledge
- URL: http://arxiv.org/abs/2305.12280v3
- Date: Tue, 18 Jun 2024 03:11:21 GMT
- Title: Contextualizing Argument Quality Assessment with Relevant Knowledge
- Authors: Darshan Deshpande, Zhivar Sourati, Filip Ilievski, Fred Morstatter
- Abstract summary: SPARK is a novel method for scoring argument quality based on contextualization via relevant knowledge.
We devise four augmentations that leverage large language models to provide feedback, infer hidden assumptions, supply a similar-quality argument, or give a counter-argument.
- Score: 11.367297319588411
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic assessment of the quality of arguments has been recognized as a challenging task with significant implications for misinformation and targeted speech. While real-world arguments are tightly anchored in context, existing computational methods analyze their quality in isolation, which affects their accuracy and generalizability. We propose SPARK: a novel method for scoring argument quality based on contextualization via relevant knowledge. We devise four augmentations that leverage large language models to provide feedback, infer hidden assumptions, supply a similar-quality argument, or give a counter-argument. SPARK uses a dual-encoder Transformer architecture to enable the original argument and its augmentation to be considered jointly. Our experiments in both in-domain and zero-shot setups show that SPARK consistently outperforms existing techniques across multiple metrics.
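The abstract describes the dual-encoder design only at a high level. Below is a minimal sketch of the pattern, assuming BERT-style encoders, [CLS] pooling, a linear regression head, and a hard-coded counter-argument standing in for an LLM-generated augmentation; these are illustrative assumptions, not the authors' exact configuration.
```python
# Minimal sketch of a SPARK-style dual encoder (assumed configuration, not
# the authors' exact setup): the argument and one of its four augmentations
# are encoded separately and scored jointly.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class DualEncoderScorer(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.arg_encoder = AutoModel.from_pretrained(model_name)  # original argument
        self.aug_encoder = AutoModel.from_pretrained(model_name)  # LLM augmentation
        hidden = self.arg_encoder.config.hidden_size
        self.head = nn.Linear(2 * hidden, 1)  # joint quality score

    def forward(self, argument: str, augmentation: str) -> torch.Tensor:
        arg = self.tokenizer(argument, return_tensors="pt", truncation=True)
        aug = self.tokenizer(augmentation, return_tensors="pt", truncation=True)
        # Pool each sequence with its [CLS] vector, then score the pair jointly.
        arg_vec = self.arg_encoder(**arg).last_hidden_state[:, 0]
        aug_vec = self.aug_encoder(**aug).last_hidden_state[:, 0]
        return self.head(torch.cat([arg_vec, aug_vec], dim=-1))

scorer = DualEncoderScorer()
argument = "School uniforms reduce bullying because they remove visible class markers."
# One of the four augmentation types: a counter-argument. Hard-coded here; in
# SPARK it would be produced by prompting a large language model.
counter = "Uniforms do not address the underlying social causes of bullying."
print(scorer(argument, counter).item())
```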
Related papers
- Localizing Factual Inconsistencies in Attributable Text Generation [91.981439746404]
We introduce QASemConsistency, a new formalism for localizing factual inconsistencies in attributable text generation.
We first demonstrate the effectiveness of the QASemConsistency methodology for human annotation.
We then implement several methods for automatically detecting localized factual inconsistencies.
arXiv Detail & Related papers (2024-10-09T22:53:48Z)
- Are Large Language Models Reliable Argument Quality Annotators? [7.966402845339264]
We study the potential of using state-of-the-art large language models (LLMs) as proxies for argument quality annotators.
Our findings highlight that LLMs can produce consistent annotations, with a moderately high agreement with human experts.
arXiv Detail & Related papers (2024-04-15T11:54:27Z)
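A minimal sketch of the annotator setup summarized above, assuming a chat-style API, a hypothetical 1-to-5 rubric, and simple averaging over repeated queries; the paper's actual prompts, models, and aggregation may differ.
```python
# Minimal sketch of using an LLM as an argument quality annotator. The rubric,
# model name, and averaging scheme are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

RUBRIC = (
    "Rate the overall quality of the following argument on a scale from 1 "
    "(very weak) to 5 (very strong), considering cogency, clarity, and "
    "effectiveness. Reply with a single integer."
)

def annotate(argument: str, n_samples: int = 3) -> float:
    """Query the model several times and average to smooth sampling noise."""
    scores = []
    for _ in range(n_samples):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed; any capable chat model works
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": argument},
            ],
        )
        scores.append(int(reply.choices[0].message.content.strip()))
    return sum(scores) / len(scores)

print(annotate("Renewable energy lowers long-run costs because fuel is free."))
```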
- Argue with Me Tersely: Towards Sentence-Level Counter-Argument Generation [62.069374456021016]
We present the ArgTersely benchmark for sentence-level counter-argument generation.
We also propose Arg-LlaMA for generating high-quality counter-arguments.
arXiv Detail & Related papers (2023-12-21T06:51:34Z)
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, opening up new research directions in formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- Large Language Models are Diverse Role-Players for Summarization Evaluation [82.31575622685902]
A document summary's quality can be assessed by human annotators on various criteria, both objective ones like grammar and correctness, and subjective ones like informativeness, succinctness, and appeal.
Most automatic evaluation methods, such as BLEU/ROUGE, may not adequately capture these dimensions.
We propose a new LLM-based evaluation framework that compares generated text and reference text from both objective and subjective aspects.
arXiv Detail & Related papers (2023-03-27T10:40:59Z)
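A minimal sketch of the role-player idea summarized above: the same summary is judged from several evaluator personas, one per criterion, against the reference text. The personas, criteria, scale, and model are illustrative assumptions, not the paper's exact setup.
```python
# Minimal sketch of role-player LLM evaluation: each criterion is scored by a
# different evaluator persona. Roles, scale, and model are assumptions.
from openai import OpenAI

client = OpenAI()

ROLES = {
    "grammar": "You are a strict copy editor judging grammar and correctness.",
    "informativeness": "You are a reader who cares only about informativeness.",
    "succinctness": "You are an editor who values brevity above all else.",
}

def evaluate(summary: str, reference: str) -> dict:
    scores = {}
    for criterion, persona in ROLES.items():
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": persona + " Reply with one integer from 1 to 5."},
                {"role": "user",
                 "content": f"Reference:\n{reference}\n\nSummary:\n{summary}"},
            ],
        )
        scores[criterion] = int(reply.choices[0].message.content.strip())
    return scores

print(evaluate("The cat sat.", "A cat sat quietly on the warm windowsill."))
```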
- Towards a Holistic View on Argument Quality Prediction [3.182597245365433]
A decisive property of arguments is their strength or quality.
While there are works on the automated estimation of argument strength, their scope is narrow.
We assess the generalization capabilities of argument quality estimation across diverse domains, the interplay with related argument mining tasks, and the impact of emotions on perceived argument strength.
arXiv Detail & Related papers (2022-05-19T18:44:23Z)
- Automatic Debate Evaluation with Argumentation Semantics and Natural Language Argument Graph Networks [2.4861619769660637]
We propose an original hybrid method to automatically evaluate argumentative debates.
For that purpose, we combine concepts from argumentation theory with Transformer-based architectures and neural graph networks.
We obtain promising results that lay the groundwork for a previously unexplored instance of the automatic analysis of natural-language arguments.
arXiv Detail & Related papers (2022-03-28T11:09:07Z)
- KAT: A Knowledge Augmented Transformer for Vision-and-Language [56.716531169609915]
We propose a novel model - Knowledge Augmented Transformer (KAT) - which achieves a strong state-of-the-art result on the open-domain multimodal task of OK-VQA.
Our approach integrates implicit and explicit knowledge in an end-to-end encoder-decoder architecture, while jointly reasoning over both knowledge sources during answer generation.
Our analysis also shows that integrating explicit knowledge improves the interpretability of model predictions.
arXiv Detail & Related papers (2021-12-16T04:37:10Z)
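KAT itself is multimodal and fuses implicit knowledge from a large language model with explicit knowledge retrieved for detected image entities; the sketch below shows only the simpler explicit-knowledge pattern of concatenating retrieved facts with the question in a seq2seq encoder. The model, prompt format, and stubbed passages are all assumptions.
```python
# Minimal sketch of conditioning an encoder-decoder on explicit knowledge, in
# the spirit of KAT. Image features, retrieval, and the implicit-knowledge
# branch are omitted; model and passages are illustrative assumptions.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

question = "What do people usually feed the bird shown in the picture?"
# Explicit knowledge that a retriever would return for detected entities.
knowledge = ["A parrot is a bird often kept as a pet.",
             "Pet parrots are usually fed seeds and fruit."]

# Concatenate question and facts so the decoder reasons over both at once.
inputs = tokenizer("question: " + question + " knowledge: " + " ".join(knowledge),
                   return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```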
- Learning From Revisions: Quality Assessment of Claims in Argumentation at Scale [12.883536911500062]
We study claim quality assessment irrespective of discussed aspects by comparing different revisions of the same claim.
We propose two tasks: assessing which claim of a revision pair is better, and ranking all versions of a claim by quality.
arXiv Detail & Related papers (2021-01-25T17:32:04Z)
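A minimal sketch of the first task summarized above: score each revision of a claim with a shared encoder and train with a margin ranking loss so the better revision scores higher. The encoder, pooling, and margin value are illustrative assumptions.
```python
# Minimal sketch of pairwise claim quality ranking with a margin loss.
# Encoder choice, [CLS] pooling, and the margin value are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
scorer = nn.Linear(encoder.config.hidden_size, 1)
loss_fn = nn.MarginRankingLoss(margin=0.5)

def score(claim: str) -> torch.Tensor:
    enc = tokenizer(claim, return_tensors="pt", truncation=True)
    return scorer(encoder(**enc).last_hidden_state[:, 0]).squeeze(-1)

better = "Mandatory vaccination protects people who cannot be vaccinated."
worse = "Vaccines are good, so everyone should get them."
# target = 1 tells the loss the first score should exceed the second by the margin.
loss = loss_fn(score(better), score(worse), torch.ones(1))
loss.backward()  # gradients flow into both the encoder and the scorer
```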
- A Generalised Approach for Encoding and Reasoning with Qualitative Theories in Answer Set Programming [3.963609604649393]
A family of ASP encodings is proposed which can handle any qualitative calculus with binary relations.
This paper is under consideration for acceptance in TPLP.
arXiv Detail & Related papers (2020-08-04T13:31:25Z)
- Aspect-Controlled Neural Argument Generation [65.91772010586605]
We train a language model for argument generation that can be controlled on a fine-grained level to generate sentence-level arguments for a given topic, stance, and aspect.
Our evaluation shows that our generation model is able to generate high-quality, aspect-specific arguments.
These arguments can be used to improve the performance of stance detection models via data augmentation and to generate counter-arguments.
arXiv Detail & Related papers (2020-04-30T20:17:22Z)
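A minimal sketch of the control-code pattern summarized above: topic, stance, and aspect are prepended as tags to the prompt. The tag format and base model are assumptions, and an off-the-shelf GPT-2 will not honor the tags; they only become effective once the model is fine-tuned on control-coded training data, as the paper does with its own corpus.
```python
# Minimal sketch of aspect-controlled prompting. Tag names and the base model
# are assumptions; meaningful control requires fine-tuning on tagged data.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "[TOPIC] nuclear energy [STANCE] pro [ASPECT] safety [ARGUMENT]"
out = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(out[0]["generated_text"])
```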