SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models
- URL: http://arxiv.org/abs/2305.11840v1
- Date: Fri, 19 May 2023 17:30:19 GMT
- Title: SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models
- Authors: Akshita Jha, Aida Davani, Chandan K. Reddy, Shachi Dave, Vinodkumar Prabhakaran, Sunipa Dev
- Abstract summary: SeeGULL is a broad-coverage stereotype dataset in English.
It contains stereotypes about identity groups spanning 178 countries across 8 geo-political regions on 6 continents.
We also include fine-grained offensiveness scores for different stereotypes and demonstrate their global disparities.
- Score: 15.145145928670827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stereotype benchmark datasets are crucial to detect and mitigate social
stereotypes about groups of people in NLP models. However, existing datasets
are limited in size and coverage, and are largely restricted to stereotypes
prevalent in Western society. This is especially problematic as language
technologies gain hold across the globe. To address this gap, we present
SeeGULL, a broad-coverage stereotype dataset, built by utilizing generative
capabilities of large language models such as PaLM and GPT-3, and leveraging a
globally diverse rater pool to validate the prevalence of those stereotypes in
society. SeeGULL is in English and contains stereotypes about identity groups
spanning 178 countries across 8 geo-political regions on 6 continents, as well
as state-level identities within the US and India. We also
include fine-grained offensiveness scores for different stereotypes and
demonstrate their global disparities. Furthermore, we include comparative
annotations about the same groups by annotators living in the region vs. those
based in North America, and demonstrate that within-region stereotypes
about groups differ from those prevalent in North America. CONTENT WARNING:
This paper contains stereotype examples that may be offensive.
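
To make the dataset's structure concrete, below is a minimal sketch of how such a resource might be loaded and filtered downstream. It assumes a tab-separated file and column names (identity, attribute, region, offensiveness) chosen purely for illustration; consult the released SeeGULL data for the actual file layout and schema.

# Minimal sketch: filter a stereotype table by region and offensiveness.
# ASSUMPTIONS: "seegull.tsv" and the column names below are hypothetical
# placeholders for illustration, not the dataset's documented schema.
import csv

def load_stereotypes(path):
    """Read tab-separated rows into a list of dicts keyed by column name."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f, delimiter="\t"))

def filter_stereotypes(rows, region, min_offensiveness=0.0):
    """Keep rows for one geo-political region at or above a score threshold."""
    return [
        row for row in rows
        if row["region"] == region
        and float(row["offensiveness"]) >= min_offensiveness
    ]

rows = load_stereotypes("seegull.tsv")  # hypothetical file name
flagged = filter_stereotypes(rows, "Sub-Saharan Africa", min_offensiveness=0.5)
print(f"{len(flagged)} stereotype candidates at or above the threshold")
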
Related papers
- Who is better at math, Jenny or Jingzhen? Uncovering Stereotypes in Large Language Models [9.734705470760511]
We use GlobalBias to study a broad set of stereotypes from around the world.
We generate character profiles based on given names and evaluate the prevalence of stereotypes in model outputs.
arXiv Detail & Related papers (2024-07-09T14:52:52Z)
- The Factuality Tax of Diversity-Intervened Text-to-Image Generation: Benchmark and Fact-Augmented Intervention [61.80236015147771]
We quantify the trade-off between using diversity interventions and preserving demographic factuality in T2I models.
Experiments on DoFaiR reveal that diversity-oriented instructions increase the number of different gender and racial groups in generations, at the cost of demographic factuality.
We propose Fact-Augmented Intervention (FAI), which reflects on verbalized or retrieved factual information about the historical gender and racial composition of generation subjects.
arXiv Detail & Related papers (2024-06-29T09:09:42Z)
- SeeGULL Multilingual: a Dataset of Geo-Culturally Situated Stereotypes [18.991295993710224]
SeeGULL Multilingual is a global-scale dataset of social stereotypes spanning 20 languages, with human annotations across 23 regions; the authors demonstrate its utility in identifying gaps in model evaluations.
arXiv Detail & Related papers (2024-03-08T22:09:58Z)
- ViSAGe: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation [24.862839173648467]
We introduce the ViSAGe dataset to enable the evaluation of nationality-based stereotypes in T2I models.
We show that stereotypical attributes in ViSAGe are thrice as likely to be present in generated images of corresponding identities as compared to other attributes.
arXiv Detail & Related papers (2024-01-12T00:43:57Z)
- Building Socio-culturally Inclusive Stereotype Resources with Community Engagement [9.131536842607069]
We demonstrate a socio-culturally aware expansion of evaluation resources in the Indian societal context, specifically for the harm of stereotyping.
The resulting resource expands the set of stereotypes known for and within the Indian context by over 1,000 stereotypes across many unique identities.
arXiv Detail & Related papers (2023-07-20T01:26:34Z)
- Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find that a broad range of ordinary prompts produce stereotypes, including prompts that simply mention traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z)
- Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning [49.04866469947569]
We construct a Geo-Diverse Visual Commonsense Reasoning dataset (GD-VCR) to test vision-and-language models' ability to understand cultural and geo-location-specific commonsense.
We find that the performance of both models for non-Western regions, including East Asia, South Asia, and Africa, is significantly lower than for Western regions.
arXiv Detail & Related papers (2021-09-14T17:52:55Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- How True is GPT-2? An Empirical Analysis of Intersectional Occupational Biases [50.591267188664666]
Downstream applications are at risk of inheriting biases contained in natural language models.
We analyze the occupational biases of a popular generative language model, GPT-2.
For a given job, GPT-2 reflects the societal skew of gender and ethnicity in the US, and in some cases, pulls the distribution towards gender parity.
arXiv Detail & Related papers (2021-02-08T11:10:27Z)
- One Label, One Billion Faces: Usage and Consistency of Racial Categories in Computer Vision [75.82110684355979]
We study the racial system encoded by computer vision datasets supplying categorical race labels for face images.
We find that each dataset encodes a substantially unique racial system, despite nominally equivalent racial categories.
We find evidence that racial categories encode stereotypes, and exclude ethnic groups from categories on the basis of nonconformity to stereotypes.
arXiv Detail & Related papers (2021-02-03T22:50:04Z)
- CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models [30.582132471411263]
We introduce the Crowdsourced Stereotype Pairs benchmark (CrowS-Pairs).
CrowS-Pairs has 1508 examples that cover stereotypes dealing with nine types of bias, like race, religion, and age.
We find that all three of the widely-used masked language models we evaluate substantially favor sentences that express stereotypes in every category in CrowS-Pairs.
arXiv Detail & Related papers (2020-09-30T22:38:40Z)