Can gender inequality be created without inter-group discrimination?
- URL: http://arxiv.org/abs/2005.01980v1
- Date: Tue, 5 May 2020 07:33:27 GMT
- Title: Can gender inequality be created without inter-group discrimination?
- Authors: Sylvie Huet, Floriana Gargiulo, and Felicia Pratto
- Abstract summary: We test whether a simple agent-based dynamic process could create gender inequality.
We simulate a population in which randomly selected pairs of agents interact to influence each other's esteem judgments of self and others.
Without prejudice, stereotypes, segregation, or categorization, our model produces inter-group inequality of self-esteem and status that is stable, consensual, and exhibits characteristics of glass ceiling effects.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding human societies requires knowing how they develop gender
hierarchies, which are ubiquitous. We test whether a simple agent-based dynamic
process could create gender inequality. Relying on evidence of gendered status
concerns, self-construals, and cognitive habits, our model includes a gender
difference in how responsive male-like and female-like agents are to others'
opinions about the level of esteem for someone. We simulate a population in
which randomly selected pairs of agents interact to influence each other's
esteem judgments of self and others. Half the agents are more influenced
by their relative status rank during the interaction than the others. Without
prejudice, stereotypes, segregation, or categorization, our model produces
inter-group inequality of self-esteem and status that is stable, consensual,
and exhibits characteristics of glass ceiling effects. Outcomes are not
affected by relative group size. We discuss implications for group orientation
to dominance and individuals' motivations to exchange.
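
The dynamic described above lends itself to a compact simulation. The sketch below is a minimal, hypothetical Python rendering of that kind of process, not the paper's actual model: agents hold esteem judgments of themselves and of everyone else, randomly chosen pairs exchange opinions, and for half the agents the weight given to a partner's opinion depends on the partner's relative status. The population size, update rule, and logistic rank-weighting are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a pairwise esteem-influence model
# (assumed parameters, not the paper's exact specification).
rng = np.random.default_rng(0)

N = 100               # population size (assumption)
STEPS = 20_000        # number of pairwise interactions (assumption)
BASE_INFLUENCE = 0.1  # baseline openness to a partner's opinion (assumption)

# esteem[i, j] = agent i's esteem judgment of agent j; the diagonal holds
# each agent's self-esteem. Initialised near zero.
esteem = rng.normal(0.0, 0.1, size=(N, N))

# Half the agents are "rank-sensitive": their openness to influence depends
# on the partner's status relative to their own.
rank_sensitive = np.zeros(N, dtype=bool)
rank_sensitive[: N // 2] = True

def status(a: int) -> float:
    """Status of agent a: the mean esteem that others hold for a."""
    return (esteem[:, a].sum() - esteem[a, a]) / (N - 1)

for _ in range(STEPS):
    i, j = rng.choice(N, size=2, replace=False)
    for listener, speaker in ((i, j), (j, i)):
        w = BASE_INFLUENCE
        if rank_sensitive[listener]:
            # Hypothetical logistic weighting: influence grows when the
            # speaker outranks the listener, shrinks when it does not.
            w *= 1.0 / (1.0 + np.exp(status(listener) - status(speaker)))
        # The speaker shares its opinions of both interaction partners;
        # the listener moves toward each shared opinion.
        for target in (listener, speaker):
            esteem[listener, target] += w * (
                esteem[speaker, target] - esteem[listener, target]
            )

self_esteem = np.diag(esteem)
print("mean self-esteem, rank-sensitive group:", self_esteem[rank_sensitive].mean())
print("mean self-esteem, other group:         ", self_esteem[~rank_sensitive].mean())
```

Under such a scheme, no group label ever enters the esteem judgments themselves; only the responsiveness asymmetry differs between the two halves, which is the ingredient the paper isolates.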
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents the AmbGIMT benchmark (Gender-Inclusive Machine Translation with Ambiguous attitude words) to address this gap.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes [12.704072523930444]
This study investigates eleven strategies to automatically counteract and challenge gender stereotypes in online communications.
We present AI-generated gender-based counter-stereotypes to study participants and ask them to assess their offensiveness, plausibility, and potential effectiveness.
arXiv Detail & Related papers (2024-04-18T01:48:28Z)
- Protected group bias and stereotypes in Large Language Models [2.1122940074160357]
This paper investigates the behavior of Large Language Models (LLMs) in the domains of ethics and fairness.
We find bias across minoritized groups, particularly in the domains of gender and sexuality, as well as Western bias.
arXiv Detail & Related papers (2024-03-21T00:21:38Z)
- The Male CEO and the Female Assistant: Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework to systematically evaluate T2I models in the dual-subject generation setting.
PST is a dual-subject generation task, i.e., generating two people in the same image.
We show that despite generating seemingly fair or even anti-stereotype single-person images, DALLE-3 still shows notable biases under PST.
arXiv Detail & Related papers (2024-02-16T21:32:27Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Comparing Humans and Models on a Similar Scale: Towards Cognitive Gender Bias Evaluation in Coreference Resolution [11.711298780873468]
Can we quantify the extent to which model biases reflect human behaviour?
We make several observations from two crowdsourcing experiments on gender bias in coreference resolution.
On real-world data, humans make ~3% more gender-biased decisions than models, while on synthetic data, models are ~12% more biased.
arXiv Detail & Related papers (2023-05-24T17:51:44Z)
- Fairness in AI Systems: Mitigating gender bias from language-vision models [0.913755431537592]
We study the extent of the impact of gender bias in existing datasets.
We propose a methodology to mitigate its impact in caption-based language-vision models.
arXiv Detail & Related papers (2023-05-03T04:33:44Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that the commercial models are always at least as biased as, and often more biased than, the academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness notions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- What's Sex Got To Do With Fair Machine Learning? [0.0]
We argue that many approaches to "fairness" require one to specify a causal model of the data generating process.
We show this by exploring the formal assumption of modularity in causal models.
We argue that this ontological picture is false. Many of the "effects" that sex purportedly "causes" are in fact features of sex as a social status.
arXiv Detail & Related papers (2020-06-02T16:51:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.