Gendered Mental Health Stigma in Masked Language Models
- URL: http://arxiv.org/abs/2210.15144v2
- Date: Tue, 11 Apr 2023 23:54:53 GMT
- Title: Gendered Mental Health Stigma in Masked Language Models
- Authors: Inna Wanyin Lin, Lucille Njoo, Anjalie Field, Ashish Sharma, Katharina
Reinecke, Tim Althoff, Yulia Tsvetkov
- Abstract summary: We investigate gendered mental health stigma in masked language models.
We find that models are consistently more likely to predict female subjects than male in sentences about having a mental health condition.
- Score: 38.766854150355634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mental health stigma prevents many individuals from receiving the appropriate
care, and social psychology studies have shown that mental health tends to be
overlooked in men. In this work, we investigate gendered mental health stigma
in masked language models. In doing so, we operationalize mental health stigma
by developing a framework grounded in psychology research: we use clinical
psychology literature to curate prompts, then evaluate the models' propensity
to generate gendered words. We find that masked language models capture
societal stigma about gender in mental health: models are consistently more
likely to predict female subjects than male in sentences about having a mental
health condition (32% vs. 19%), and this disparity is exacerbated for sentences
that indicate treatment-seeking behavior. Furthermore, we find that different
models capture dimensions of stigma differently for men and women, associating
stereotypes like anger, blame, and pity more with women with mental health
conditions than with men. In showing the complex nuances of models' gendered
mental health stigma, we demonstrate that context and overlapping dimensions of
identity are important considerations when assessing computational models'
social biases.
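The probing setup the abstract describes — curated prompts scored for the model's propensity to fill in gendered subject words — can be sketched as follows. The word lists, prompts, and probability numbers below are illustrative placeholders, not the paper's actual lexicons, prompts, or results; in practice the per-prompt distributions would come from a masked language model's fill-mask output (e.g., scoring candidates for the masked subject in a sentence like "[MASK] has been diagnosed with depression.").

```python
# Hypothetical sketch of the aggregation step: given, for each prompt, the
# masked-LM fill probabilities over candidate subject words, sum the
# probability mass assigned to female vs. male words and average across
# prompts. All names and numbers here are made up for illustration.

FEMALE_WORDS = {"she", "woman", "girl"}
MALE_WORDS = {"he", "man", "boy"}

def gendered_mass(fill_probs):
    """Sum fill probabilities over female and male candidate words."""
    female = sum(p for w, p in fill_probs.items() if w in FEMALE_WORDS)
    male = sum(p for w, p in fill_probs.items() if w in MALE_WORDS)
    return female, male

# Illustrative per-prompt fill distributions (NOT the paper's data).
prompt_fills = [
    {"she": 0.30, "he": 0.20, "someone": 0.25, "woman": 0.05},
    {"she": 0.35, "he": 0.15, "they": 0.30, "man": 0.04},
]

female_avg = sum(gendered_mass(f)[0] for f in prompt_fills) / len(prompt_fills)
male_avg = sum(gendered_mass(f)[1] for f in prompt_fills) / len(prompt_fills)
print(f"female: {female_avg:.2f}, male: {male_avg:.2f}")
```

Comparing these averages across prompt sets (e.g., diagnosis prompts vs. treatment-seeking prompts) is what surfaces disparities like the paper's reported 32% vs. 19%.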
Related papers
- MentalArena: Self-play Training of Language Models for Diagnosis and Treatment of Mental Health Disorders [59.515827458631975]
Mental health disorders are among the most serious diseases in the world.
Privacy concerns limit the accessibility of personalized treatment data.
MentalArena is a self-play framework to train language models.
arXiv Detail & Related papers (2024-10-09T13:06:40Z)
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words)
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution [20.21748776472278]
We investigate whether emotion attribution is gendered, and whether these variations reflect societal stereotypes.
We find that all models consistently exhibit gendered emotions, influenced by gender stereotypes.
Our study sheds light on the complex societal interplay between language, gender, and emotion.
arXiv Detail & Related papers (2024-03-05T17:04:05Z)
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- Emotion-based Modeling of Mental Disorders on Social Media [11.945854832533234]
One in four people will be affected by mental disorders at some point in their lives.
We propose a model for passively detecting mental disorders using conversations on Reddit.
arXiv Detail & Related papers (2022-01-24T04:41:02Z)
- Gender and Racial Fairness in Depression Research using Social Media [13.512136878021854]
Social media data has spurred interest in mental health research from a computational lens.
Previous research has raised concerns about possible biases in models produced from this data.
Our study concludes with recommendations on how to avoid these biases in future research.
arXiv Detail & Related papers (2021-03-18T22:34:41Z)
- Language, communication and society: a gender based linguistics analysis [0.0]
The purpose of this study is to find evidence supporting the hypothesis that language is a mirror of our thinking.
The answers have been analysed to see if gender stereotypes were present such as the attribution of psychological and behavioural characteristics.
arXiv Detail & Related papers (2020-07-14T08:38:37Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.