Reviewer Preferences and Gender Disparities in Aesthetic Judgments
- URL: http://arxiv.org/abs/2206.08697v2
- Date: Tue, 21 Jun 2022 06:56:51 GMT
- Title: Reviewer Preferences and Gender Disparities in Aesthetic Judgments
- Authors: Ida Marie Schytt Lassen, Yuri Bizzoni, Telma Peura, Mads Rosendahl Thomsen, Kristoffer Laigaard Nielbo
- Abstract summary: This paper uses literary reviews as a proxy for aesthetic judgement in order to identify systematic components that can be attributed to bias.
We find that judgements of literary quality in newspapers display a gender bias in favor of male writers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Aesthetic preferences are considered highly subjective, resulting in
inherently noisy judgements of aesthetic objects, yet certain aspects of
aesthetic judgement display convergent trends over time. This paper presents a
study that uses literary reviews as a proxy for aesthetic judgement in order to
identify systematic components that can be attributed to bias. Specifically, we
find that judgements of literary quality in newspapers display a gender bias in
favor of male writers. Male reviewers show a same-gender preference, while
female reviewers show an opposite-gender preference. While alternative accounts
of this apparent gender disparity exist, we argue that it reflects a cultural
gender antagonism.
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words), a new benchmark.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Reflecting the Male Gaze: Quantifying Female Objectification in 19th and 20th Century Novels [3.0623865942628594]
We propose a framework for analyzing gender bias in terms of female objectification.
Our framework measures female objectification along two axes.
Applying our framework to 19th and 20th century novels reveals evidence of female objectification.
arXiv Detail & Related papers (2024-03-25T20:16:14Z)
- Don't Overlook the Grammatical Gender: Bias Evaluation for Hindi-English Machine Translation [0.0]
Existing evaluation benchmarks primarily focus on English as the source language of translation.
For source languages other than English, studies often employ gender-neutral sentences for bias evaluation.
We emphasise the significance of tailoring bias evaluation test sets to account for grammatical gender markers in the source language.
arXiv Detail & Related papers (2023-11-11T09:28:43Z)
- "Fifty Shades of Bias": Normative Ratings of Gender Bias in GPT-Generated English Text [11.085070600065801]
Language serves as a powerful tool for the manifestation of societal belief systems.
Gender bias is one of the most pervasive biases in our society.
We create the first dataset of GPT-generated English text with normative ratings of gender bias.
arXiv Detail & Related papers (2023-10-26T14:34:06Z)
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z)
- A Moral- and Event-Centric Inspection of Gender Bias in Fairy Tales at A Large Scale [50.92540580640479]
We computationally analyze gender bias in a fairy tale dataset containing 624 fairy tales from 7 different cultures.
We find that the number of male characters is two times that of female characters, showing a disproportionate gender representation.
Female characters turn out to be more associated with care-, loyalty-, and sanctity-related moral words, while male characters are more associated with fairness- and authority-related moral words.
arXiv Detail & Related papers (2022-11-25T19:38:09Z)
- Uncovering Implicit Gender Bias in Narratives through Commonsense Inference [21.18458377708873]
We study gender biases associated with the protagonist in model-generated stories.
We focus on implicit biases, and use a commonsense reasoning engine to uncover them.
arXiv Detail & Related papers (2021-09-14T04:57:45Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
- Unsupervised Discovery of Implicit Gender Bias [38.59057512390926]
We take an unsupervised approach to identifying gender bias against women at a comment level.
Our main challenge is forcing the model to focus on signs of implicit bias, rather than other artifacts in the data.
arXiv Detail & Related papers (2020-04-17T17:36:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.