Sensitivity, Performance, Robustness: Deconstructing the Effect of
Sociodemographic Prompting
- URL: http://arxiv.org/abs/2309.07034v2
- Date: Thu, 8 Feb 2024 16:35:36 GMT
- Title: Sensitivity, Performance, Robustness: Deconstructing the Effect of
Sociodemographic Prompting
- Authors: Tilman Beck, Hendrik Schuff, Anne Lauscher, Iryna Gurevych
- Abstract summary: Sociodemographic prompting is a technique that steers the output of prompt-based models towards answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
- Score: 64.80538055623842
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Annotators' sociodemographic backgrounds (i.e., the individual compositions
of their gender, age, educational background, etc.) have a strong impact on
their decisions when working on subjective NLP tasks, such as toxic language
detection. Often, heterogeneous backgrounds result in high disagreements. To
model this variation, recent work has explored sociodemographic prompting, a
technique that steers the output of prompt-based models towards answers that
humans with specific sociodemographic profiles would give. However, the
available NLP literature disagrees on the efficacy of this technique - it
remains unclear for which tasks and scenarios it can help, and the role of the
individual factors in sociodemographic prompting is still unexplored. We
address this research gap by presenting the largest and most comprehensive
study of sociodemographic prompting today. We analyze its influence on model
sensitivity, performance and robustness across seven datasets and six
instruction-tuned model families. We show that sociodemographic information
affects model predictions and can be beneficial for improving zero-shot
learning in subjective NLP tasks. However, its outcomes largely vary for
different model types, sizes, and datasets, and are subject to large variance
with regard to prompt formulations. Most importantly, our results show that
sociodemographic prompting should be used with care for sensitive applications,
such as toxicity annotation or when studying LLM alignment. Code and data:
https://github.com/UKPLab/arxiv2023-sociodemographic-prompting
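The technique described in the abstract can be sketched minimally as prompt construction: prefix the task instruction with a persona built from a sociodemographic profile. The template wording, the `build_prompt` helper, and the profile fields below are illustrative assumptions for a toxicity-annotation setting, not the paper's exact prompts.

```python
# Minimal sketch of sociodemographic prompting for a subjective NLP task
# (toxicity annotation). Template wording and profile fields are
# illustrative assumptions, not the paper's exact prompts.

def build_prompt(text, profile=None):
    """Build a zero-shot prompt, optionally prefixed with a
    sociodemographic persona derived from a profile dict."""
    persona = ""
    if profile:
        attrs = ", ".join(f"{k}: {v}" for k, v in profile.items())
        persona = f"Imagine you are a person with this profile ({attrs}). "
    return (
        f"{persona}Is the following text toxic? "
        f"Answer 'yes' or 'no'.\nText: {text}\nAnswer:"
    )

# Plain zero-shot prompt vs. a sociodemographically steered variant;
# the study compares model predictions under both conditions.
baseline = build_prompt("You people never listen.")
steered = build_prompt(
    "You people never listen.",
    profile={"gender": "female", "age": "45", "education": "college degree"},
)
```

Feeding both variants to the same instruction-tuned model and comparing the predicted labels is, in essence, how sensitivity to sociodemographic information can be measured.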
Related papers
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- Explainable Depression Symptom Detection in Social Media [2.677715367737641]
We propose using transformer-based architectures to detect and explain the appearance of depressive symptom markers in the users' writings.
Our natural language explanations enable clinicians to interpret the models' decisions based on validated symptoms.
arXiv Detail & Related papers (2023-10-20T17:05:27Z)
- A Simple and Flexible Modeling for Mental Disorder Detection by Learning from Clinical Questionnaires [0.2580765958706853]
We propose a novel approach that captures the semantic meanings directly from the text and compares them to symptom-related descriptions.
Our detailed analysis shows that the proposed model is effective at leveraging domain knowledge, transferable to other mental disorders, and providing interpretable detection results.
arXiv Detail & Related papers (2023-06-05T15:23:55Z)
- CausalDialogue: Modeling Utterance-level Causality in Conversations [83.03604651485327]
We have compiled and expanded upon a new dataset called CausalDialogue through crowd-sourcing.
This dataset includes multiple cause-effect pairs within a directed acyclic graph (DAG) structure.
We propose a causality-enhanced method called Exponential Average Treatment Effect (ExMATE) to enhance the impact of causality at the utterance level in training neural conversation models.
arXiv Detail & Related papers (2022-12-20T18:31:50Z)
- On the Limitations of Sociodemographic Adaptation with Transformers [34.768337465321395]
Sociodemographic factors (e.g., gender or age) shape our language.
Previous work showed that incorporating specific sociodemographic factors can consistently improve performance for various NLP tasks.
We use three common specialization methods proven effective for incorporating external knowledge into pretrained Transformers.
arXiv Detail & Related papers (2022-08-01T17:58:02Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Efficient Multi-Modal Embeddings from Structured Data [0.0]
Multi-modal word semantics aims to enhance embeddings with perceptual input.
Visual grounding can contribute to linguistic applications as well.
The new embeddings convey complementary information to text-based embeddings.
arXiv Detail & Related papers (2021-10-06T08:42:09Z)
- Understanding the Performance of Knowledge Graph Embeddings in Drug Discovery [14.839673015887275]
Knowledge Graphs (KGs) and associated Knowledge Graph Embedding (KGE) models have recently begun to be explored in the context of drug discovery.
In this study we investigate, over the course of many thousands of experiments, the predictive performance of five KGE models on two public drug discovery-oriented KGs.
Our results highlight that these factors have significant impact on performance and can even affect the ranking of models.
arXiv Detail & Related papers (2021-05-17T11:39:54Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task and holistic framework is presented, which is able to jointly learn, effectively generalize, and perform affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
- Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions [55.660255727031725]
Influence functions explain the decisions of a model by identifying influential training examples.
We conduct a comparison between influence functions and common word-saliency methods on representative tasks.
We develop a new measure based on influence functions that can reveal artifacts in training data.
arXiv Detail & Related papers (2020-05-14T00:45:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.