Moral Judgments in Online Discourse are not Biased by Gender
- URL: http://arxiv.org/abs/2408.12872v1
- Date: Fri, 23 Aug 2024 07:10:48 GMT
- Title: Moral Judgments in Online Discourse are not Biased by Gender
- Authors: Lorenzo Betti, Paolo Bajardi, Gianmarco De Francisci Morales
- Abstract summary: We use data from r/AITA, a Reddit community with 17 million members who share first-hand experiences seeking community judgment on their behavior.
We find no direct causal effect of the protagonist's gender on the received moral judgments.
Our findings complement existing correlational studies and suggest that gender roles may exert greater influence in specific social contexts.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The interaction between social norms and gender roles prescribes gender-specific behaviors that influence moral judgments. Here, we study how moral judgments are biased by the gender of the protagonist of a story. Using data from r/AITA, a Reddit community with 17 million members who share first-hand experiences seeking community judgment on their behavior, we employ machine learning techniques to match stories describing similar situations that differ only by the protagonist's gender. We find no direct causal effect of the protagonist's gender on the received moral judgments, except for stories about "friendship and relationships", where male protagonists receive more negative judgments. Our findings complement existing correlational studies and suggest that gender roles may exert greater influence in specific social contexts. These results have implications for understanding sociological constructs and highlight potential biases in data used to train large language models.
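The matching step lends itself to a brief sketch. The snippet below pairs male- and female-protagonist stories by embedding similarity so that verdicts can be compared within near-identical situations; the embedding model, similarity threshold, and record fields are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch of matching opposite-gender AITA-style stories by textual
# similarity. Assumptions: the sentence-transformers model, the 0.85
# threshold, and the {"text", "gender", "verdict"} record layout are
# illustrative, not taken from the paper.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def match_stories(stories, threshold=0.85):
    """Pair each male-protagonist story with its most similar
    female-protagonist story, keeping only close matches."""
    model = SentenceTransformer("all-MiniLM-L6-v2")
    male = [s for s in stories if s["gender"] == "M"]
    female = [s for s in stories if s["gender"] == "F"]
    emb_m = model.encode([s["text"] for s in male])
    emb_f = model.encode([s["text"] for s in female])
    sims = cosine_similarity(emb_m, emb_f)  # shape (n_male, n_female)
    pairs = []
    for i, row in enumerate(sims):
        j = int(np.argmax(row))
        if row[j] >= threshold:  # keep only near-identical situations
            pairs.append((male[i], female[j], float(row[j])))
    return pairs
```

With matched pairs in hand, the share of negative verdicts can be compared within pairs, so any remaining difference is attributable to the protagonist's gender rather than to the situation described.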
Related papers
- Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs).
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
To eliminate the gender bias in these models, we find that finetuning-based debiasing methods achieve the best tradeoff between debiasing and retaining performance on downstream tasks.
arXiv Detail & Related papers (2024-10-25T05:59:44Z)
- MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks [49.60689355674541]
A rich literature in cognitive science has studied people's causal and moral intuitions.
This work has revealed a number of factors that systematically influence people's judgments.
We test whether large language models (LLMs) make causal and moral judgments about text-based scenarios that align with human participants.
arXiv Detail & Related papers (2023-10-30T15:57:32Z)
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
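A minimal version of such a perturbation is a word-level gender swap, sketched below; the swap list is a small assumed subset, and a real pipeline would also need to handle names, titles, morphology, and ambiguous forms (e.g. possessive vs. objective "her").

```python
import re

# Toy gender-perturbation sketch: swap gendered words and compare a
# model's output on the original vs. perturbed text. The swap list is
# a small illustrative subset, not a complete lexicon.
SWAPS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her", "hers": "his",
    "prince": "princess", "princess": "prince",
    "king": "queen", "queen": "king",
}

def perturb_gender(text: str) -> str:
    def swap(match: re.Match) -> str:
        word = match.group(0)
        repl = SWAPS[word.lower()]
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

print(perturb_gender("The prince asked if he could kiss her."))
# -> The princess asked if she could kiss him.
```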
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Fairness in AI Systems: Mitigating gender bias from language-vision models [0.913755431537592]
We study the extent of gender bias in existing datasets.
We propose a methodology to mitigate its impact in caption-based language-vision models.
arXiv Detail & Related papers (2023-05-03T04:33:44Z)
- A Moral- and Event-Centric Inspection of Gender Bias in Fairy Tales at A Large Scale [50.92540580640479]
We computationally analyze gender bias in a fairy tale dataset containing 624 fairy tales from 7 different cultures.
We find that the number of male characters is twice that of female characters, showing a disproportionate gender representation.
Female characters turn out to be more associated with care-, loyalty-, and sanctity-related moral words, while male characters are more associated with fairness- and authority-related moral words.
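This type of association can be approximated with a simple lexicon count over the sentences that mention a character, as sketched below; the few seed words per foundation are an illustrative stand-in for a full moral-foundations lexicon, not the paper's actual word lists.

```python
from collections import Counter

# Illustrative sketch: count moral-foundation words in sentences that
# mention a given character. The seed words are placeholders for a full
# lexicon such as the Moral Foundations Dictionary.
FOUNDATIONS = {
    "care": {"kind", "protect", "hurt", "care"},
    "fairness": {"fair", "justice", "cheat", "equal"},
    "loyalty": {"loyal", "betray", "faithful"},
    "authority": {"obey", "command", "respect", "rule"},
    "sanctity": {"pure", "sacred", "filthy"},
}

def foundation_counts(sentences, character):
    counts = Counter()
    for sent in sentences:
        tokens = sent.lower().split()
        if character.lower() in tokens:
            for foundation, words in FOUNDATIONS.items():
                counts[foundation] += sum(t in words for t in tokens)
    return counts

tale = [
    "the queen was kind and tried to protect the village",
    "the king demanded everyone obey his rule",
]
print(foundation_counts(tale, "queen"))  # care words dominate
print(foundation_counts(tale, "king"))   # authority words dominate
```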
arXiv Detail & Related papers (2022-11-25T19:38:09Z)
- Uncovering Implicit Gender Bias in Narratives through Commonsense Inference [21.18458377708873]
We study gender biases associated with the protagonist in model-generated stories.
We focus on implicit biases, and use a commonsense reasoning engine to uncover them.
arXiv Detail & Related papers (2021-09-14T04:57:45Z)
- Can gender inequality be created without inter-group discrimination? [0.0]
We test whether a simple agent-based dynamic process could create gender inequality.
We simulate a population in which randomly selected pairs of agents interact and influence each other's esteem judgments of themselves and others.
Without prejudice, stereotypes, segregation, or categorization, our model produces inter-group inequality of self-esteem and status that is stable, consensual, and exhibits characteristics of glass ceiling effects.
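The interaction loop can be sketched in a few lines, as below; the update rule and parameters are illustrative assumptions, and this toy version shows only the scaffold of the process (a plain averaging rule drifts toward consensus, whereas the paper's model yields stable inter-group inequality).

```python
import random

# Toy pairwise-influence loop in the spirit of the model described above.
# Group labels carry no behavioral difference: agents are never treated
# differently by group. Update rule and parameters are illustrative.
N, STEPS, MU = 100, 50_000, 0.1  # agents, interactions, influence rate

# esteem[i][j]: how much agent i esteems agent j (diagonal = self-esteem)
esteem = [[random.uniform(0, 1) for _ in range(N)] for _ in range(N)]
group = ["A" if i < N // 2 else "B" for i in range(N)]

for _ in range(STEPS):
    i, j = random.sample(range(N), 2)  # a random interacting pair
    k = random.randrange(N)            # a target they discuss
    # i moves its judgment of k toward j's judgment of k
    esteem[i][k] += MU * (esteem[j][k] - esteem[i][k])

for g in ("A", "B"):
    members = [i for i in range(N) if group[i] == g]
    mean_self = sum(esteem[i][i] for i in members) / len(members)
    print(g, round(mean_self, 3))
```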
arXiv Detail & Related papers (2020-05-05T07:33:27Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
- Unsupervised Discovery of Implicit Gender Bias [38.59057512390926]
We take an unsupervised approach to identifying gender bias against women at the comment level.
Our main challenge is forcing the model to focus on signs of implicit bias, rather than other artifacts in the data.
arXiv Detail & Related papers (2020-04-17T17:36:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.