How True is GPT-2? An Empirical Analysis of Intersectional Occupational
Biases
- URL: http://arxiv.org/abs/2102.04130v1
- Date: Mon, 8 Feb 2021 11:10:27 GMT
- Title: How True is GPT-2? An Empirical Analysis of Intersectional Occupational
Biases
- Authors: Hannah Kirk, Yennie Jun, Haider Iqbal, Elias Benussi, Filippo Volpin,
Frederic A. Dreyer, Aleksandar Shtedritski, Yuki M. Asano
- Abstract summary: Downstream applications are at risk of inheriting biases contained in natural language models.
We analyze the occupational biases of a popular generative language model, GPT-2.
For a given job, GPT-2 reflects the societal skew of gender and ethnicity in the US, and in some cases, pulls the distribution towards gender parity.
- Score: 50.591267188664666
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The capabilities of natural language models trained on large-scale data have
increased immensely over the past few years. Downstream applications are at
risk of inheriting biases contained in these models, with potential negative
consequences especially for marginalized groups. In this paper, we analyze the
occupational biases of a popular generative language model, GPT-2, intersecting
gender with five protected categories: religion, sexuality, ethnicity,
political affiliation, and name origin. Using a novel data collection pipeline
we collect 396k sentence completions of GPT-2 and find: (i) The
machine-predicted jobs are less diverse and more stereotypical for women than
for men, especially for intersections; (ii) Fitting 262 logistic models shows
intersectional interactions to be highly relevant for occupational
associations; (iii) For a given job, GPT-2 reflects the societal skew of gender
and ethnicity in the US, and in some cases, pulls the distribution towards
gender parity, raising the normative question of what language models _should_
learn.
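The pipeline described in the abstract prompts GPT-2 with templated sentence openings that name a gender and a protected attribute, collects 396k completions, and then extracts the predicted occupation from each completion for downstream analysis, including 262 logistic models of job association. The snippet below is a minimal sketch of that kind of collection step, not the authors' released code: the exact prompt template, the descriptor list, and the sampling settings are illustrative assumptions, and it uses the Hugging Face transformers GPT-2 checkpoint.

```python
# Minimal sketch of a GPT-2 completion-collection step in the spirit of the
# paper's pipeline. Prompt wording, descriptors, and sampling settings are
# illustrative assumptions, not the authors' exact setup.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Hypothetical intersectional descriptors (gender crossed with one protected category).
descriptors = [
    "woman", "man",
    "Black woman", "Black man",
    "Muslim woman", "Muslim man",
]

def collect_completions(descriptor, n=5, max_new_tokens=10):
    """Sample n short continuations for one templated prompt."""
    prompt = f"The {descriptor} worked as a"  # assumed template wording
    outputs = generator(
        prompt,
        max_new_tokens=max_new_tokens,
        num_return_sequences=n,
        do_sample=True,
        pad_token_id=generator.tokenizer.eos_token_id,
    )
    # Keep only the generated continuation after the prompt.
    return [o["generated_text"][len(prompt):].strip() for o in outputs]

completions = {d: collect_completions(d) for d in descriptors}
for descriptor, texts in completions.items():
    print(descriptor, "->", texts[0])
```

Scaling this sampling to hundreds of thousands of completions, mapping each continuation to a job category, and regressing job indicators on gender and intersectional attributes would mirror the general shape of the analysis, though the paper's exact templates, occupation coding, and model specification may differ.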
Related papers
- Evaluating Gender Bias in Large Language Models [0.8636148452563583]
The study examines the extent to which Large Language Models (LLMs) exhibit gender bias in pronoun selection in occupational contexts.
The jobs considered include a range of occupations, from those with a significant male presence to those with a notable female concentration.
The results show a positive correlation between the models' pronoun choices and the gender distribution present in U.S. labor force data.
arXiv Detail & Related papers (2024-11-14T22:23:13Z)
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z)
- GenderBias-VL: Benchmarking Gender Bias in Vision Language Models via Counterfactual Probing [72.0343083866144]
This paper introduces the GenderBias-VL benchmark to evaluate occupation-related gender bias in Large Vision-Language Models (LVLMs).
Using our benchmark, we extensively evaluate 15 commonly used open-source LVLMs and state-of-the-art commercial APIs.
Our findings reveal widespread gender biases in existing LVLMs.
arXiv Detail & Related papers (2024-06-30T05:55:15Z)
- Hire Me or Not? Examining Language Model's Behavior with Occupation Attributes [7.718858707298602]
Large language models (LLMs) have been widely integrated into production pipelines, like recruitment and recommendation systems.
This paper investigates LLMs' behavior with respect to gender stereotypes, in the context of occupation decision making.
arXiv Detail & Related papers (2024-05-06T18:09:32Z)
- Protected group bias and stereotypes in Large Language Models [2.1122940074160357]
This paper investigates the behavior of Large Language Models (LLMs) in the domains of ethics and fairness.
We find bias across minoritized groups, but in particular in the domains of gender and sexuality, as well as Western bias.
arXiv Detail & Related papers (2024-03-21T00:21:38Z)
- The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned with male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender biases -- the well-known bias in gendered occupation and a novel aspect: bias in organizational power.
arXiv Detail & Related papers (2024-02-16T21:32:27Z)
- Multilingual Text-to-Image Generation Magnifies Gender Stereotypes and Prompt Engineering May Not Help You [64.74707085021858]
We show that multilingual models suffer from significant gender biases just as monolingual models do.
We propose a novel benchmark, MAGBIG, intended to foster research on gender bias in multilingual models.
Our results show that not only do models exhibit strong gender biases but they also behave differently across languages.
arXiv Detail & Related papers (2024-01-29T12:02:28Z)
- Evaluating Large Language Models through Gender and Racial Stereotypes [0.0]
We conduct a comparative study and establish a framework to evaluate language models with respect to two kinds of bias: gender and race.
We find that while gender bias is greatly reduced in newer models compared to older ones, racial bias still persists.
arXiv Detail & Related papers (2023-11-24T18:41:16Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.