Will Annotators Disagree? Identifying Subjectivity in Value-Laden Arguments
- URL: http://arxiv.org/abs/2509.06704v1
- Date: Mon, 08 Sep 2025 13:59:34 GMT
- Title: Will Annotators Disagree? Identifying Subjectivity in Value-Laden Arguments
- Authors: Amir Homayounirad, Enrico Liscio, Tong Wang, Catholijn M. Jonker, Luciano C. Siebert
- Abstract summary: We explore methods for identifying subjectivity in recognizing the human values that motivate arguments. Our experiments show that direct subjectivity identification significantly improves model performance in flagging subjective arguments. Our proposed methods can help identify arguments that individuals may interpret differently, fostering a more nuanced annotation process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Aggregating multiple annotations into a single ground truth label may hide valuable insights into annotator disagreement, particularly in tasks where subjectivity plays a crucial role. In this work, we explore methods for identifying subjectivity in recognizing the human values that motivate arguments. We evaluate two main approaches: inferring subjectivity through value prediction vs. directly identifying subjectivity. Our experiments show that direct subjectivity identification significantly improves model performance in flagging subjective arguments. Furthermore, combining contrastive loss with binary cross-entropy loss does not improve performance but reduces the dependency on per-label subjectivity. Our proposed methods can help identify arguments that individuals may interpret differently, fostering a more nuanced annotation process.
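The abstract mentions combining a contrastive loss with binary cross-entropy. A minimal numpy sketch of such a combination is shown below; this is an illustration of the general technique, not the authors' implementation, and the loss weight `alpha`, the `temperature`, and all function names are assumptions.

```python
import numpy as np

def bce_loss(probs, labels, eps=1e-9):
    """Binary cross-entropy over per-argument subjectivity flags."""
    probs = np.clip(probs, eps, 1 - eps)
    return float(-np.mean(labels * np.log(probs) + (1 - labels) * np.log(1 - probs)))

def supervised_contrastive_loss(embeddings, labels, temperature=0.5):
    """Pull together embeddings of arguments with the same subjectivity
    flag and push apart those with different flags."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(labels)
    total, count = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        denom = np.sum(np.exp(sim[i, others]))
        for j in others:
            if labels[j] == labels[i]:
                total += -np.log(np.exp(sim[i, j]) / denom)
                count += 1
    return total / max(count, 1)

def combined_loss(probs, embeddings, labels, alpha=0.5):
    """Weighted sum of the two objectives (alpha is an assumed hyperparameter)."""
    return alpha * bce_loss(probs, labels) + (1 - alpha) * supervised_contrastive_loss(embeddings, labels)
```

In this sketch the contrastive term acts on the encoder's embedding space while the cross-entropy term acts on the classifier's output probabilities, which is one common way such objectives are paired.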
Related papers
- Imagination Helps Visual Reasoning, But Not Yet in Latent Space [65.80396132375571]
We investigate the validity of latent reasoning using Causal Mediation Analysis. We show that latent tokens encode limited visual information and exhibit high similarity. We propose a straightforward alternative named CapImagine, which teaches the model to explicitly imagine using text.
arXiv Detail & Related papers (2026-02-26T08:56:23Z) - Do Vision-Language Models Understand Visual Persuasiveness? [0.0]
We construct a high-consensus dataset for binary persuasiveness judgment. We introduce the taxonomy of Visual Persuasive Factors (VPFs). We also explore cognitive steering and knowledge injection strategies for persuasion-relevant reasoning.
arXiv Detail & Related papers (2025-11-21T08:28:02Z) - Categorical Emotions or Appraisals - Which Emotion Model Explains Argument Convincingness Better? [7.221399245137941]
We argue that the emotion an argument evokes in a recipient is subjective. It depends on the recipient's goals, standards, prior knowledge, and stance. This work presents the first systematic comparison between emotion models for convincingness prediction.
arXiv Detail & Related papers (2025-11-10T14:53:04Z) - ROVER: Benchmarking Reciprocal Cross-Modal Reasoning for Omnimodal Generation [79.17352367219736]
ROVER is a human-annotated benchmark that explicitly targets reciprocal cross-modal reasoning. It tests the use of one modality to guide, verify, or refine outputs in the other.
arXiv Detail & Related papers (2025-11-03T02:27:46Z) - Investigating Subjective Factors of Argument Strength: Storytelling, Emotions, and Hedging [14.950106052554291]
We study the impact of subjective factors (emotions, storytelling, and hedging) on objective argument quality and subjective persuasion. Our results show that storytelling and hedging have contrasting effects on objective and subjective argument quality, while the influence of emotions depends on their rhetorical use rather than the domain.
arXiv Detail & Related papers (2025-07-23T11:09:52Z) - Multi-Perspective Stance Detection [2.8073184910275293]
A multi-perspective approach yields better classification performance than the baseline, which uses a single label.
This shows that designing more inclusive, perspective-aware AI models is not only an essential first step toward responsible and ethical AI, but can also achieve superior results compared to traditional approaches.
arXiv Detail & Related papers (2024-11-13T16:30:41Z) - Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction [0.45060992929802207]
Self-supervised learning (SSL) has emerged as a promising paradigm for learning flexible speech representations from unlabeled data.
This study provides an empirical analysis of Barlow Twins (BT), an SSL technique inspired by theories of redundancy reduction in human perception.
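The Barlow Twins objective referenced above can be sketched in a few lines of numpy. This is a generic reconstruction of the published BT loss, not this paper's code; the redundancy weight `lam` and the batch shapes are assumptions.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=0.005):
    """Barlow Twins: drive the cross-correlation matrix of two augmented
    views toward the identity -- ones on the diagonal (invariance),
    zeros off the diagonal (redundancy reduction)."""
    n, d = z_a.shape
    # standardize each embedding dimension across the batch
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-9)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-9)
    c = (z_a.T @ z_b) / n                        # d x d cross-correlation
    invariance = np.sum((1.0 - np.diag(c)) ** 2)
    redundancy = np.sum((c - np.diag(np.diag(c))) ** 2)
    return invariance + lam * redundancy
```

With identical views the diagonal of the cross-correlation matrix is exactly one, so only the off-diagonal redundancy term contributes.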
arXiv Detail & Related papers (2023-09-07T10:23:59Z) - Using Natural Language Explanations to Rescale Human Judgments [81.66697572357477]
We propose a method to rescale ordinal annotations and explanations using large language models (LLMs). We feed annotators' Likert ratings and corresponding explanations into an LLM and prompt it to produce a numeric score anchored in a scoring rubric. Our method rescales the raw judgments without impacting agreement and brings the scores closer to human judgments grounded in the same scoring rubric.
arXiv Detail & Related papers (2023-05-24T06:19:14Z) - Seeking Subjectivity in Visual Emotion Distribution Learning [93.96205258496697]
Visual Emotion Analysis (VEA) aims to predict people's emotions towards different visual stimuli.
Existing methods often predict visual emotion distribution in a unified network, neglecting the inherent subjectivity in its crowd voting process.
We propose a novel Subjectivity Appraise-and-Match Network (SAMNet) to investigate the subjectivity in visual emotion distribution.
arXiv Detail & Related papers (2022-07-25T02:20:03Z) - Towards a Holistic View on Argument Quality Prediction [3.182597245365433]
A decisive property of arguments is their strength or quality.
While there are works on the automated estimation of argument strength, their scope is narrow.
We assess the generalization capabilities of argument quality estimation across diverse domains, the interplay with related argument mining tasks, and the impact of emotions on perceived argument strength.
arXiv Detail & Related papers (2022-05-19T18:44:23Z) - Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations [6.546195629698355]
We investigate the efficacy of multi-annotator models for subjective tasks.
We show that this approach yields the same or better performance than aggregating labels in the data prior to training.
Our approach also provides a way to estimate uncertainty in predictions, which we demonstrate correlates better with annotation disagreements than traditional methods.
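One simple way to turn multi-annotator predictions into an uncertainty score is sketched below. This is a hypothetical illustration, not the authors' method: it assumes per-annotator probabilities are given and uses the binary entropy of their mean as the disagreement signal.

```python
import numpy as np

def disagreement_uncertainty(annotator_probs):
    """Entropy (in bits) of the averaged per-annotator prediction for a
    binary label; it peaks at 1 bit when annotator heads split evenly."""
    p = float(np.mean(annotator_probs))
    eps = 1e-12  # guard against log2(0)
    return -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
```

For example, predictions of 0.9 and 0.1 from two annotator heads average to 0.5 and score near the 1-bit maximum, while near-unanimous predictions score much lower.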
arXiv Detail & Related papers (2021-10-12T03:12:34Z) - Dive into Ambiguity: Latent Distribution Mining and Pairwise Uncertainty Estimation for Facial Expression Recognition [59.52434325897716]
We propose a solution, named DMUE, to address the problem of annotation ambiguity from two perspectives.
For the former, an auxiliary multi-branch learning framework is introduced to better mine and describe the latent distribution in the label space.
For the latter, the pairwise relationships of semantic features between instances are fully exploited to estimate the ambiguity extent in the instance space.
arXiv Detail & Related papers (2021-04-01T03:21:57Z) - Fairness by Learning Orthogonal Disentangled Representations [50.82638766862974]
We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information by entropy maximization.
The proposed approach is evaluated on five publicly available datasets.
arXiv Detail & Related papers (2020-03-12T11:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.