Multi-Perspective Stance Detection
- URL: http://arxiv.org/abs/2411.08752v1
- Date: Wed, 13 Nov 2024 16:30:41 GMT
- Title: Multi-Perspective Stance Detection
- Authors: Benedetta Muscato, Praveen Bushipaka, Gizem Gezici, Lucia Passaro, Fosca Giannotti
- Abstract summary: The multi-perspective approach yields better classification performance than the baseline, which uses a single label.
This entails that designing more inclusive perspective-aware AI models is not only an essential first step in implementing responsible and ethical AI, but can also achieve superior results compared to traditional approaches.
- Score: 2.8073184910275293
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Subjective NLP tasks usually rely on human annotations provided by multiple annotators, whose judgments may vary due to their diverse backgrounds and life experiences. Traditional methods often aggregate multiple annotations into a single ground truth, disregarding the diversity in perspectives that arises from annotator disagreement. In this preliminary study, we examine the effect of including multiple annotations on model accuracy in classification. Our methodology investigates the performance of perspective-aware classification models on the stance detection task and further inspects whether annotator disagreement affects model confidence. The results show that the multi-perspective approach yields better classification performance, outperforming the baseline, which uses a single label. This entails that designing more inclusive perspective-aware AI models is not only an essential first step in implementing responsible and ethical AI, but can also achieve superior results compared to traditional approaches.
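The contrast the abstract draws can be illustrated with a minimal Python sketch (the label names and votes below are invented for the example, not taken from the paper): traditional aggregation collapses annotator votes into one hard label, while a perspective-aware setup keeps the full vote distribution as a soft label.

```python
from collections import Counter

def majority_label(annotations):
    """Traditional aggregation: collapse annotator votes into a single hard label."""
    return Counter(annotations).most_common(1)[0][0]

def soft_label(annotations, labels=("favor", "against", "neutral")):
    """Perspective-aware aggregation: keep the full distribution of votes."""
    counts = Counter(annotations)
    total = len(annotations)
    return {lab: counts.get(lab, 0) / total for lab in labels}

votes = ["favor", "favor", "against", "neutral", "favor"]
print(majority_label(votes))  # prints: favor
print(soft_label(votes))      # prints: {'favor': 0.6, 'against': 0.2, 'neutral': 0.2}
```

A model trained on the soft label sees that two of five annotators disagreed with the majority, information the hard label discards entirely.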
Related papers
- Perspectives in Play: A Multi-Perspective Approach for More Inclusive NLP Systems [3.011820285006942]
This study proposes a new multi-perspective approach using soft labels to encourage the development of perspective-aware models. We conduct an analysis across diverse subjective text classification tasks, including hate speech, irony, abusive language, and stance detection. Results show that the multi-perspective approach better approximates human label distributions, as measured by Jensen-Shannon Divergence (JSD). Our approach exhibits lower confidence in tasks like irony and stance detection, likely due to the inherent subjectivity present in the texts.
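The JSD measure mentioned above compares the model's predicted label distribution against the distribution of human votes; a rough self-contained sketch (the distributions here are made-up numbers, not results from the paper):

```python
import math

def jensen_shannon_divergence(p, q):
    """JSD between two discrete distributions given as aligned probability lists.
    Uses base-2 logarithms, so the result lies in [0, 1]."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence, skipping zero-probability terms.
        return sum(ai * math.log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

human = [0.6, 0.2, 0.2]  # hypothetical distribution of annotator votes
model = [0.5, 0.3, 0.2]  # hypothetical model output distribution
print(round(jensen_shannon_divergence(human, model), 4))  # prints: 0.0105
```

Lower values indicate the model's output is closer to how humans actually voted, which is the sense in which a soft-label model "better approximates human label distributions."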
arXiv Detail & Related papers (2025-06-25T07:53:36Z)
- Embracing Diversity: A Multi-Perspective Approach with Soft Labels [3.529000007777341]
We propose a new framework for designing perspective-aware models on stance detection task, in which multiple annotators assign stances based on a controversial topic.
Results show that the multi-perspective approach yields better classification performance (higher F1-scores).
arXiv Detail & Related papers (2025-03-01T13:33:38Z)
- Salvaging the Overlooked: Leveraging Class-Aware Contrastive Learning for Multi-Class Anomaly Detection [18.797864512898787]
In anomaly detection, early approaches often train separate models for individual classes, yielding high performance but posing challenges in scalability and resource management. We investigate the performance drop observed in reconstruction-based methods, identifying the key issue: inter-class confusion. This confusion emerges when a model trained in multi-class scenarios incorrectly reconstructs samples from one class as those of another, thereby exacerbating reconstruction errors. By explicitly leveraging raw object category information (e.g., carpet or wood), we introduce local CL to refine multiscale dense features, and global CL to obtain more compact feature representations of normal patterns, thereby effectively adapting the models to multi-class anomaly detection.
arXiv Detail & Related papers (2024-12-06T04:31:09Z)
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD).
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- Corpus Considerations for Annotator Modeling and Scaling [9.263562546969695]
We show that the commonly used user token model consistently outperforms more complex models.
Our findings shed light on the relationship between corpus statistics and annotator modeling performance.
arXiv Detail & Related papers (2024-04-02T22:27:24Z)
- An Empirical Investigation into Benchmarking Model Multiplicity for Trustworthy Machine Learning: A Case Study on Image Classification [0.8702432681310401]
This paper offers a one-stop empirical benchmark of multiplicity across various dimensions of model design.
We also develop a framework, which we call multiplicity sheets, to benchmark multiplicity in various scenarios.
We show that multiplicity persists in deep learning models even after enforcing additional specifications during model selection.
arXiv Detail & Related papers (2023-11-24T22:30:38Z)
- Leveraging Diffusion Disentangled Representations to Mitigate Shortcuts in Underspecified Visual Tasks [92.32670915472099]
We propose an ensemble diversification framework exploiting the generation of synthetic counterfactuals using Diffusion Probabilistic Models (DPMs).
We show that diffusion-guided diversification can lead models to avert attention from shortcut cues, achieving ensemble diversity performance comparable to previous methods requiring additional data collection.
arXiv Detail & Related papers (2023-10-03T17:37:52Z)
- Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
arXiv Detail & Related papers (2023-03-03T02:07:40Z)
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework by a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
- Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations [6.546195629698355]
We investigate the efficacy of multi-annotator models for subjective tasks.
We show that this approach yields the same or better performance than aggregating labels in the data prior to training.
Our approach also provides a way to estimate uncertainty in predictions, which we demonstrate correlates better with annotation disagreement than traditional methods.
arXiv Detail & Related papers (2021-10-12T03:12:34Z)
- ALL-IN-ONE: Multi-Task Learning BERT models for Evaluating Peer Assessments [2.544539499281093]
This paper presents two MTL models for evaluating peer-review comments by leveraging the state-of-the-art pre-trained language representation models BERT and DistilBERT.
Our results demonstrate that BERT-based models significantly outperform previous GloVe-based methods by around 6% in F1-score on single-feature detection tasks.
arXiv Detail & Related papers (2021-10-08T05:13:41Z)
- Masked Contrastive Learning for Anomaly Detection [10.499890749386676]
We propose a task-specific variant of contrastive learning named masked contrastive learning.
We also propose a new inference method dubbed self-ensemble inference.
arXiv Detail & Related papers (2021-05-18T19:27:02Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
We propose a Prototype-centered Attentive Learning (PAL) model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
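The "minimally-different pairs" idea can be made concrete with a toy sketch (the sentences below are invented for illustration and are not from the paper's data): two inputs that differ in a single token but flip the label localize exactly which feature carries the causal signal.

```python
def minimal_edits(a: str, b: str):
    """Return the token positions at which two same-length sentences differ
    (simple whitespace tokenization)."""
    return [(x, y) for x, y in zip(a.split(), b.split()) if x != y]

original       = "The food was great, I will return."   # label: positive
counterfactual = "The food was awful, I will return."   # label: negative
print(minimal_edits(original, counterfactual))  # prints: [('great,', 'awful,')]
```

The single differing token is precisely where a model's decision should be sensitive, which is the intuition behind supervising gradients with such pairs.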
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.