Can gender inequality be created without inter-group discrimination?
- URL: http://arxiv.org/abs/2005.01980v1
- Date: Tue, 5 May 2020 07:33:27 GMT
- Title: Can gender inequality be created without inter-group discrimination?
- Authors: Sylvie Huet, Floriana Gargiulo, and Felicia Pratto
- Abstract summary: We test whether a simple agent-based dynamic process could create gender inequality.
We simulate a population in which randomly selected pairs of agents interact and influence each other's esteem judgments of themselves and others.
Without prejudice, stereotypes, segregation, or categorization, our model produces inter-group inequality of self-esteem and status that is stable, consensual, and exhibits characteristics of glass ceiling effects.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding human societies requires knowing how they develop gender
hierarchies, which are ubiquitous. We test whether a simple agent-based dynamic
process could create gender inequality. Relying on evidence of gendered status
concerns, self-construals, and cognitive habits, our model includes a gender
difference in how responsive male-like and female-like agents are to others'
opinions about the esteem someone deserves. We simulate a population in which
randomly selected pairs of agents interact and influence each other's esteem
judgments of themselves and others. Half of the agents are more influenced by
their relative status rank during an interaction than the other half. Without
prejudice, stereotypes, segregation, or categorization, our model produces
inter-group inequality of self-esteem and status that is stable, consensual,
and exhibits characteristics of glass-ceiling effects. Outcomes are not
affected by relative group size. We discuss implications for groups' orientation
to dominance and individuals' motivations to exchange.
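The abstract describes an agent-based influence dynamic; a minimal Python sketch of such a process is given below. The parameter values, the tanh form of the rank sensitivity, and the definition of status as mean received esteem are illustrative assumptions, not the authors' actual model (which is specified in the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                  # population size (illustrative choice)
STEPS = 50_000           # number of pairwise interactions
BASE_INFLUENCE = 0.1     # assumed baseline influence weight
RANK_SENSITIVITY = 1.0   # assumed extra sensitivity to status rank for one group

# esteem[i, j] = agent i's esteem judgment of agent j; the diagonal is self-esteem.
esteem = rng.normal(0.0, 0.1, size=(N, N))

# Two equal-sized groups; group 1 is assumed to weight the status gap more
# heavily during an interaction (the asymmetry the abstract describes).
group = np.array([0] * (N // 2) + [1] * (N // 2))

def status(j):
    """Status of agent j: the population's mean esteem judgment of j."""
    return esteem[:, j].mean()

for _ in range(STEPS):
    a, b = rng.choice(N, size=2, replace=False)   # randomly selected pair
    for receiver, sender in ((a, b), (b, a)):
        # Influence weight; rank-sensitive agents modulate it by the status gap
        # between the two interaction partners (assumed functional form).
        w = BASE_INFLUENCE
        if group[receiver] == 1:
            gap = status(sender) - status(receiver)
            w = max(0.0, w * (1.0 + RANK_SENSITIVITY * np.tanh(gap)))
        # The receiver moves its judgments of itself and of the sender toward
        # the sender's corresponding judgments.
        for target in (receiver, sender):
            esteem[receiver, target] += w * (esteem[sender, target] - esteem[receiver, target])

# Group-level outcome: compare mean self-esteem (diagonal entries) across groups.
self_esteem = np.diag(esteem)
print("group 0 mean self-esteem:", self_esteem[group == 0].mean())
print("group 1 mean self-esteem:", self_esteem[group == 1].mean())
```

Under this kind of asymmetric update rule, a persistent gap in mean self-esteem between the two groups can emerge even though no agent treats the groups differently, which is the qualitative effect the abstract reports.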
Related papers
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z)
- Revealing and Reducing Gender Biases in Vision and Language Assistants (VLAs) [82.57490175399693]
We study gender bias in 22 popular image-to-text vision-language assistants (VLAs).
Our results show that VLAs replicate human biases likely present in the data, such as real-world occupational imbalances.
Among approaches for mitigating gender bias in these models, we find that finetuning-based debiasing achieves the best tradeoff between debiasing and retaining performance on downstream tasks.
arXiv Detail & Related papers (2024-10-25T05:59:44Z)
- Moral Judgments in Online Discourse are not Biased by Gender [3.2771631221674333]
We use data from r/AITA, a Reddit community with 17 million members where users share first-hand experiences and seek the community's judgment on their behavior.
We find no direct causal effect of the protagonist's gender on the received moral judgments.
Our findings complement existing correlational studies and suggest that gender roles may exert greater influence in specific social contexts.
arXiv Detail & Related papers (2024-08-23T07:10:48Z)
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents a benchmark, AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Challenging Negative Gender Stereotypes: A Study on the Effectiveness of Automated Counter-Stereotypes [12.704072523930444]
This study investigates eleven strategies to automatically counteract and challenge gender stereotypes in online communications.
We present AI-generated gender-based counter-stereotypes to study participants and ask them to assess their offensiveness, plausibility, and potential effectiveness.
arXiv Detail & Related papers (2024-04-18T01:48:28Z)
- Protected group bias and stereotypes in Large Language Models [2.1122940074160357]
This paper investigates the behavior of Large Language Models (LLMs) in the domains of ethics and fairness.
We find bias across minoritized groups, particularly in the domains of gender and sexuality, as well as a Western bias.
arXiv Detail & Related papers (2024-03-21T00:21:38Z)
- The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned with male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender biases -- the well-known bias in gendered occupation and a novel aspect: bias in organizational power.
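A minimal, purely hypothetical sketch of how such paired queries might be constructed is shown below; the identity lists, prompt templates, and pairing logic are illustrative placeholders, not the authors' actual PST prompts or evaluation pipeline.

```python
from itertools import product

# Hypothetical identity lists and templates; illustrative only.
male_stereotyped = ["a CEO", "a firefighter"]
female_stereotyped = ["an assistant", "a nurse"]
templates = [
    "{a} giving instructions to {b} in an office",   # probes organizational power
    "{a} and {b} collaborating on a project",
]

def paired_prompts():
    """Yield prompt pairs in which the two stereotyped identities swap roles."""
    for a, b in product(male_stereotyped, female_stereotyped):
        for t in templates:
            yield t.format(a=a, b=b), t.format(a=b, b=a)

for original, swapped in paired_prompts():
    # Each prompt of a pair would be sent to the text-to-image model under test,
    # and the depicted genders and roles compared across the swapped pair.
    print(original, "|", swapped)
```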
arXiv Detail & Related papers (2024-02-16T21:32:27Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Fairness in AI Systems: Mitigating gender bias from language-vision models [0.913755431537592]
We study the extent of the impact of gender bias in existing datasets.
We propose a methodology to mitigate its impact in caption-based language-vision models.
arXiv Detail & Related papers (2023-05-03T04:33:44Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness criteria are typically defined with respect to specified protected groups, we emphasize that there are no ground-truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- What's Sex Got To Do With Fair Machine Learning? [0.0]
We argue that many approaches to "fairness" require one to specify a causal model of the data generating process.
We show this by exploring the formal assumption of modularity in causal models.
We argue that this ontological picture is false. Many of the "effects" that sex purportedly "causes" are in fact features of sex as a social status.
arXiv Detail & Related papers (2020-06-02T16:51:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.