Modeling Appropriate Language in Argumentation
- URL: http://arxiv.org/abs/2305.14935v1
- Date: Wed, 24 May 2023 09:17:05 GMT
- Title: Modeling Appropriate Language in Argumentation
- Authors: Timon Ziegenbein, Shahbaz Syed, Felix Lange, Martin Potthast and Henning Wachsmuth
- Abstract summary: We operationalize appropriate language in argumentation for the first time.
We derive a new taxonomy of 14 dimensions that determine inappropriate language in online discussions.
- Score: 34.90028129715041
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online discussion moderators must make ad-hoc decisions about whether the
contributions of discussion participants are appropriate or should be removed
to maintain civility. Existing research on offensive language and the resulting
tools cover only one aspect among many involved in such decisions. The question
of what is considered appropriate in a controversial discussion has not yet
been systematically addressed. In this paper, we operationalize appropriate
language in argumentation for the first time. In particular, we model
appropriateness through the absence of flaws, grounded in research on argument
quality assessment, especially in aspects from rhetoric. From these, we derive
a new taxonomy of 14 dimensions that determine inappropriate language in online
discussions. Building on three argument quality corpora, we then create a
corpus of 2191 arguments annotated for the 14 dimensions. Empirical analyses
support that the taxonomy covers the concept of appropriateness
comprehensively, showing several plausible correlations with argument quality
dimensions. Moreover, results of baseline approaches to assessing
appropriateness suggest that all dimensions can be modeled computationally on
the corpus.
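The abstract does not specify how its baseline classifiers work, but the idea of flagging an argument along several inappropriateness dimensions at once can be illustrated with a minimal sketch. The snippet below is a hypothetical, deliberately naive lexicon-based multi-label baseline; the cue-word lists are invented for illustration and are not from the paper's corpus or models (only the dimension names echo the taxonomy described above).

```python
# Hypothetical sketch of a multi-label "inappropriateness" baseline.
# Cue-word lexicons below are illustrative placeholders, not from the paper.

LEXICONS = {
    "toxic emotions": {"idiot", "stupid", "hate"},
    "excessive intensity": {"always", "never", "absolutely"},
    "unclear meaning": {"thing", "stuff", "whatever"},
}

def flag_dimensions(argument: str) -> dict[str, bool]:
    """Flag each dimension if any of its cue words occurs in the argument."""
    tokens = set(argument.lower().split())
    return {dim: bool(tokens & cues) for dim, cues in LEXICONS.items()}

labels = flag_dimensions("You always ignore the stuff that matters")
# e.g. labels["excessive intensity"] is True, labels["toxic emotions"] is False
```

A real baseline would replace the lexicons with learned per-dimension classifiers, but the output shape (one binary decision per dimension for each argument) matches the annotation scheme the paper describes.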
Related papers
- From Argumentation to Deliberation: Perspectivized Stance Vectors for Fine-grained (Dis)agreement Analysis [17.184962277653902]
We develop a framework for a deliberative analysis of arguments in a computational argumentation setup.
We conduct a fine-grained analysis of perspectivized stances expressed in the arguments of different arguers or stakeholders on a given issue.
We formalize this analysis in Perspectivized Stance Vectors that characterize the individual perspectivized stances of all arguers on a given issue.
arXiv Detail & Related papers (2025-02-10T13:08:46Z)
- Overview of PerpectiveArg2024: The First Shared Task on Perspective Argument Retrieval [56.66761232081188]
We present a novel dataset covering demographic and socio-cultural (socio) variables, such as age, gender, and political attitude, representing minority and majority groups in society.
We find substantial challenges in incorporating perspectivism, especially when aiming for personalization based solely on the text of arguments without explicitly providing socio profiles.
While we bootstrap perspective argument retrieval, further research is essential to optimize retrieval systems to facilitate personalization and reduce polarization.
arXiv Detail & Related papers (2024-07-29T03:14:57Z)
- Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z)
- A Unifying Framework for Learning Argumentation Semantics [50.69905074548764]
We present a novel framework, which uses an Inductive Logic Programming approach to learn the acceptability semantics for several abstract and structured argumentation frameworks in an interpretable way.
Our framework outperforms existing argumentation solvers, opening up new research directions in the area of formal argumentation and human-machine dialogues.
arXiv Detail & Related papers (2023-10-18T20:18:05Z)
- Natural Language Decompositions of Implicit Content Enable Better Text Representations [56.85319224208865]
We introduce a method for the analysis of text that takes implicitly communicated content explicitly into account.
We use a large language model to produce sets of propositions that are inferentially related to the text that has been observed.
Our results suggest that modeling the meanings behind observed language, rather than the literal text alone, is a valuable direction for NLP.
arXiv Detail & Related papers (2023-05-23T23:45:20Z)
- Exploring Discourse Structures for Argument Impact Classification [48.909640432326654]
This paper empirically shows that the discourse relations between two arguments along the context path are essential factors for identifying the persuasive power of an argument.
We propose DisCOC to inject and fuse the sentence-level structural information with contextualized features derived from large-scale language models.
arXiv Detail & Related papers (2021-06-02T06:49:19Z)
- Creating a Domain-diverse Corpus for Theory-based Argument Quality Assessment [6.654552816487819]
We describe GAQCorpus, the first large, domain-diverse annotated corpus of theory-based AQ.
We discuss how we designed the annotation task to reliably collect a large number of judgments with crowdsourcing.
Our work will inform research on theory-based argumentation annotation and enable the creation of more diverse corpora to support computational AQ assessment.
arXiv Detail & Related papers (2020-11-03T09:40:25Z)
- Intrinsic Quality Assessment of Arguments [21.261009977405898]
We study the intrinsic computational assessment of 15 dimensions, i.e., only learning from an argument's text.
We observe moderate but significant learning success for most dimensions.
arXiv Detail & Related papers (2020-10-23T15:16:10Z)
- AMPERSAND: Argument Mining for PERSuAsive oNline Discussions [41.06165177604387]
We propose a computational model for argument mining in online persuasive discussion forums.
Our approach relies on identifying relations between components of arguments in a discussion thread.
Our models obtain significant improvements compared to recent state-of-the-art approaches.
arXiv Detail & Related papers (2020-04-30T10:33:40Z)
- The Role of Pragmatic and Discourse Context in Determining Argument Impact [39.70446357000737]
This paper presents a new dataset to initiate the study of this aspect of argumentation.
It consists of a diverse collection of arguments covering 741 controversial topics and comprising over 47,000 claims.
We propose predictive models that incorporate the pragmatic and discourse context of argumentative claims and show that they outperform models that rely on claim-specific linguistic features for predicting the perceived impact of individual claims within a particular line of argument.
arXiv Detail & Related papers (2020-04-06T23:00:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.