Public Perceptions of Gender Bias in Large Language Models: Cases of
ChatGPT and Ernie
- URL: http://arxiv.org/abs/2309.09120v1
- Date: Sun, 17 Sep 2023 00:53:34 GMT
- Title: Public Perceptions of Gender Bias in Large Language Models: Cases of
ChatGPT and Ernie
- Authors: Kyrie Zhixuan Zhou, Madelyn Rose Sanfilippo
- Abstract summary: We conducted a content analysis of social media discussions to gauge public perceptions of gender bias in large language models.
People shared both observations of gender bias in their personal use and scientific findings about gender bias in LLMs.
We propose governance recommendations to regulate gender bias in LLMs.
- Score: 2.1756081703276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models are quickly gaining momentum, yet are found to
demonstrate gender bias in their responses. In this paper, we conducted a
content analysis of social media discussions to gauge public perceptions of
gender bias in LLMs trained in different cultural contexts: ChatGPT, a
US-based LLM, and Ernie, a China-based LLM. People shared both
observations of gender bias in their personal use and scientific findings about
gender bias in LLMs. A difference between the two LLMs emerged: ChatGPT was
more often found to carry implicit gender bias, e.g., associating men and women
with different profession titles, while explicit gender bias was found in
Ernie's responses, e.g., overly promoting women's pursuit of marriage over
career. Based on the findings, we reflect on the impact of culture on gender
bias and propose governance recommendations to regulate gender bias in LLMs.
Related papers
- Gender Bias in LLM-generated Interview Responses [1.6124402884077915]
This study evaluates three LLMs to conduct a multifaceted audit of LLM-generated interview responses across models, question types, and jobs.
Our findings reveal that gender bias is consistent across models and closely aligned with gender stereotypes and the gender dominance of jobs.
arXiv Detail & Related papers (2024-10-28T05:08:08Z)
- GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models [20.98831667981121]
Large Language Models (LLMs) are prone to generating content that exhibits gender biases.
GenderAlign dataset comprises 8k single-turn dialogues, each paired with a "chosen" and a "rejected" response.
Compared to the "rejected" responses, the "chosen" responses demonstrate lower levels of gender bias and higher quality.
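The "chosen"/"rejected" pairing described above is standard pairwise preference data. A minimal sketch of what one such record might look like and how it could feed a pairwise alignment objective follows; the field names and example text are illustrative assumptions, not the actual GenderAlign schema:

```python
# Hypothetical sketch of a GenderAlign-style preference record.
# Field names and text are assumptions for illustration only.
record = {
    "prompt": "What careers suit women best?",
    "chosen": "People of any gender can pursue any career that matches "
              "their skills and interests.",
    "rejected": "Women are best suited to caregiving roles.",
}

def to_preference_pair(rec):
    """Turn one record into the (preferred, dispreferred) completions
    used by pairwise alignment methods such as reward modeling or DPO."""
    return (rec["prompt"] + "\n" + rec["chosen"],
            rec["prompt"] + "\n" + rec["rejected"])

preferred, dispreferred = to_preference_pair(record)
print(preferred != dispreferred)  # prints True: the pair must differ to carry a training signal
```

In pairwise alignment, a model is trained to score the "chosen" completion above the "rejected" one, which is how such a dataset can reduce biased outputs.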
arXiv Detail & Related papers (2024-06-20T01:45:44Z)
- White Men Lead, Black Women Help? Benchmarking Language Agency Social Biases in LLMs [58.27353205269664]
Social biases can manifest in language agency.
We introduce the novel Language Agency Bias Evaluation benchmark.
We unveil language agency social biases in content generated by 3 recent Large Language Models (LLMs).
arXiv Detail & Related papers (2024-04-16T12:27:54Z)
- Gender Bias in Large Language Models across Multiple Languages [10.068466432117113]
We examine gender bias in large language models (LLMs) generated for different languages.
We use three measurements: 1) gender bias in selecting descriptive words given the gender-related context; 2) gender bias in selecting gender-related pronouns (she/he) given the descriptive words.
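The second measurement above, pronoun selection, can be summarized with a simple normalized skew score. The sketch below is our own illustration of that idea, not the paper's method; the function, probabilities, and prompt are assumptions:

```python
# Illustrative sketch (not the paper's code): scoring pronoun-selection bias.
# Assume some model assigns probabilities to "she" vs. "he" completing a
# prompt built from descriptive words; the numbers below are made up.
def pronoun_bias(p_she: float, p_he: float) -> float:
    """Signed bias in [-1, 1]: positive favors 'she', negative favors 'he',
    and 0 means the two pronouns are equally likely."""
    return (p_she - p_he) / (p_she + p_he)

# Hypothetical model outputs for "The ___ person is gentle and caring."
print(round(pronoun_bias(0.70, 0.30), 2))  # prints 0.4 -> skewed toward 'she'
print(round(pronoun_bias(0.50, 0.50), 2))  # prints 0.0 -> unbiased
```

Aggregating such scores over many descriptive-word prompts, and repeating per language, is one plausible way a cross-lingual comparison like this paper's could be carried out.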
arXiv Detail & Related papers (2024-03-01T04:47:16Z)
- Disclosure and Mitigation of Gender Bias in LLMs [64.79319733514266]
Large Language Models (LLMs) can generate biased responses.
We propose an indirect probing framework based on conditional generation.
We explore three distinct strategies to disclose explicit and implicit gender bias in LLMs.
arXiv Detail & Related papers (2024-02-17T04:48:55Z)
- Probing Explicit and Implicit Gender Bias through LLM Conditional Text Generation [64.79319733514266]
Large Language Models (LLMs) can generate biased and toxic responses.
We propose a conditional text generation mechanism without the need for predefined gender phrases and stereotypes.
arXiv Detail & Related papers (2023-11-01T05:31:46Z)
- "Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters [97.11173801187816]
Large Language Models (LLMs) have recently emerged as an effective tool to assist individuals in writing various types of content.
This paper critically examines gender biases in LLM-generated reference letters.
arXiv Detail & Related papers (2023-10-13T16:12:57Z)
- Gender bias and stereotypes in Large Language Models [0.6882042556551611]
This paper investigates Large Language Models' behavior with respect to gender stereotypes.
We use a simple paradigm to test the presence of gender bias, building on but differing from WinoBias.
Our contributions in this paper are as follows: (a) LLMs are 3-6 times more likely to choose an occupation that stereotypically aligns with a person's gender; (b) these choices align with people's perceptions better than with the ground truth as reflected in official job statistics; (d) LLMs ignore crucial ambiguities in sentence structure 95% of the time in our study items, but when explicitly prompted, they recognize the ambiguity.
arXiv Detail & Related papers (2023-08-28T22:32:05Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.