Easily Accessible Text-to-Image Generation Amplifies Demographic
Stereotypes at Large Scale
- URL: http://arxiv.org/abs/2211.03759v2
- Date: Wed, 7 Jun 2023 16:36:13 GMT
- Title: Easily Accessible Text-to-Image Generation Amplifies Demographic
Stereotypes at Large Scale
- Authors: Federico Bianchi, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra
Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin
Caliskan
- Abstract summary: We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find that a broad range of ordinary prompts produces stereotypes, including prompts that simply mention traits, descriptors, occupations, or objects.
- Score: 61.555788332182395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models that convert user-written text descriptions into
images are now widely available online and used by millions of users to
generate millions of images a day. We investigate the potential for these
models to amplify dangerous and complex stereotypes. We find a broad range of
ordinary prompts produce stereotypes, including prompts simply mentioning
traits, descriptors, occupations, or objects. For example, we find cases of
prompting for basic traits or social roles resulting in images reinforcing
whiteness as ideal, prompting for occupations resulting in amplification of
racial and gender disparities, and prompting for objects resulting in
reification of American norms. Stereotypes are present regardless of whether
prompts explicitly mention identity and demographic language or avoid such
language. Moreover, stereotypes persist despite mitigation strategies; neither
user attempts to counter stereotypes by requesting images with specific
counter-stereotypes nor institutional attempts to add system "guardrails"
have prevented the perpetuation of stereotypes. Our analysis justifies concerns
regarding the impacts of today's models, presenting striking exemplars, and
connecting these findings with deep insights into harms drawn from social
scientific and humanist disciplines. This work contributes to the effort to
shed light on the uniquely complex biases in language-vision models and
demonstrates the ways that the mass deployment of text-to-image generation
models results in mass dissemination of stereotypes and resulting harms.
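
To make the kind of audit described here concrete, below is a minimal sketch: generate images from neutral occupation prompts with an open text-to-image model, then tally CLIP's zero-shot guesses about perceived gender. The checkpoints, prompts, and label strings are illustrative assumptions, not the authors' protocol, and automated demographic inference is itself a coarse, error-prone proxy.

```python
# Minimal audit sketch (not the paper's exact protocol): generate images for
# neutral occupation prompts, then tally CLIP zero-shot guesses about
# perceived gender. Checkpoints, prompts, and labels are illustrative.
import torch
from collections import Counter
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a software developer", "a photo of a nurse"]  # no demographic language
labels = ["a photo of a man", "a photo of a woman"]  # coarse perceived-gender probe

for prompt in prompts:
    counts = Counter()
    for seed in range(8):  # tiny sample; a real audit needs hundreds of images
        gen = torch.Generator(device).manual_seed(seed)
        image = pipe(prompt, generator=gen).images[0]
        inputs = proc(text=labels, images=image, return_tensors="pt", padding=True).to(device)
        probs = clip(**inputs).logits_per_image.softmax(dim=-1)
        counts[labels[probs.argmax().item()]] += 1
    print(prompt, dict(counts))
```

A real audit would use far more samples per prompt and validate the automatic classifier against human annotation before drawing conclusions about disparities.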
Related papers
- Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models [50.40276881893513]
This study introduces Spoken Stereoset, a dataset specifically designed to evaluate social biases in Speech Large Language Models (SLLMs).
By examining how different models respond to speech from diverse demographic groups, we aim to identify these biases.
The findings indicate that while most models show minimal bias, some still exhibit slightly stereotypical or anti-stereotypical tendencies.
arXiv Detail & Related papers (2024-08-14T16:55:06Z)
- Who is better at math, Jenny or Jingzhen? Uncovering Stereotypes in Large Language Models [9.734705470760511]
We use GlobalBias, a dataset covering stereotypes from around the world, to study a broad set of these biases.
We generate character profiles based on given names and evaluate the prevalence of stereotypes in model outputs.
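
A minimal sketch of such a name-based probe follows; the small open model, prompt template, and stereotype lexicon are hypothetical stand-ins for illustration, not the paper's actual pipeline.

```python
# Sketch of a name-based stereotype probe (an illustration of the general
# setup, not this paper's exact method). For each given name, ask a language
# model for a character profile and count descriptors from a small,
# hypothetical stereotype lexicon in the output.
from collections import Counter
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in model
LEXICON = {"submissive", "aggressive", "exotic", "nerdy"}  # illustrative only

def profile_stereotypes(name: str) -> Counter:
    prompt = f"Describe {name}, a new coworker. {name} is"
    text = generator(prompt, max_new_tokens=60, do_sample=True)[0]["generated_text"]
    tokens = {w.strip(".,!?").lower() for w in text.split()}
    return Counter(w for w in tokens if w in LEXICON)

for name in ["Jenny", "Jingzhen"]:
    print(name, profile_stereotypes(name))
```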
arXiv Detail & Related papers (2024-07-09T14:52:52Z)
- Language Agents for Detecting Implicit Stereotypes in Text-to-image Models at Scale [45.64096601242646]
We introduce a novel agent architecture tailored for stereotype detection in text-to-image models.
We build the stereotype-relevant benchmark based on multiple open-text datasets.
We find that these models often display serious stereotypes for certain prompts concerning personal characteristics.
arXiv Detail & Related papers (2023-10-18T08:16:29Z)
- Word-Level Explanations for Analyzing Bias in Text-to-Image Models [72.71184730702086]
Text-to-image (T2I) models can generate images that underrepresent minorities based on race and sex.
This paper investigates which word in the input prompt is responsible for bias in generated images.
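One way to make the attribution idea concrete is leave-one-out ablation: remove each prompt word in turn, re-measure a scalar bias score, and credit the words whose removal shifts it most. This is a sketch of the general technique, not necessarily this paper's method; bias_score is a hypothetical stand-in for a generate-and-classify pipeline like the one sketched earlier.

```python
# Leave-one-out word attribution sketch (the general idea, not necessarily
# this paper's method). bias_score is a hypothetical stand-in that would
# wrap image generation plus demographic classification.
def bias_score(prompt: str) -> float:
    """Hypothetical: e.g., fraction of generated images classified as men."""
    raise NotImplementedError("plug in a generation + classification pipeline")

def word_attributions(prompt: str) -> list[tuple[str, float]]:
    words = prompt.split()
    base = bias_score(prompt)
    scores = []
    for i, word in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])  # prompt with one word removed
        scores.append((word, base - bias_score(ablated)))  # >0: word pushes the score up
    return sorted(scores, key=lambda t: abs(t[1]), reverse=True)
```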
arXiv Detail & Related papers (2023-06-03T21:39:07Z)
- Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models [6.92043136971035]
We investigate how multimodal models handle diverse gender identities.
We find certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised.
We suggest improvements that could pave the way for a future where change is led by the affected community.
arXiv Detail & Related papers (2023-05-26T16:28:49Z)
- Social Biases through the Text-to-Image Generation Lens [9.137275391251517]
Text-to-Image (T2I) generation is enabling new applications that support creators, designers, and general end users of productivity software.
We take a multi-dimensional approach to studying and quantifying common social biases as reflected in the generated images.
We present findings for two popular T2I models: DALL-E v2 and Stable Diffusion.
arXiv Detail & Related papers (2023-03-30T05:29:13Z)
- A Prompt Array Keeps the Bias Away: Debiasing Vision-Language Models with Adversarial Learning [55.96577490779591]
Vision-language models can encode societal biases and stereotypes.
There are challenges to measuring and mitigating these multimodal harms.
We investigate bias measures and apply ranking metrics for image-text representations.
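A minimal sketch of one such ranking metric, in the spirit of the paper but not its exact implementation: rank images of two demographic groups against a neutral text query and compare their mean ranks. The random unit vectors below are toy stand-ins for real image-text features (e.g., from CLIP).

```python
# Sketch of a ranking-based bias probe for image-text embeddings. Given a
# neutral text query, rank images of two demographic groups by similarity;
# a large gap in mean rank suggests the representation favours one group.
import numpy as np

def mean_rank_gap(text_emb, group_a, group_b):
    """text_emb: (d,); group_a/group_b: (n, d) L2-normalised image embeddings."""
    images = np.vstack([group_a, group_b])
    sims = images @ text_emb                # cosine similarity for unit vectors
    ranks = np.argsort(np.argsort(-sims))   # 0 = most similar to the query
    ranks_a, ranks_b = ranks[: len(group_a)], ranks[len(group_a):]
    return ranks_a.mean() - ranks_b.mean()  # ~0 if the query treats groups alike

def unit(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
query = unit(rng.normal(size=64))           # stand-in text embedding
group_a = unit(rng.normal(size=(50, 64)))   # stand-in image embeddings, group A
group_b = unit(rng.normal(size=(50, 64)))   # stand-in image embeddings, group B
print(mean_rank_gap(query, group_a, group_b))  # near 0 for random embeddings
```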
arXiv Detail & Related papers (2022-03-22T17:59:04Z)
- Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z)
- Understanding and Countering Stereotypes: A Computational Approach to the Stereotype Content Model [4.916009028580767]
We present a computational approach to interpreting stereotypes in text through the Stereotype Content Model (SCM).
The SCM proposes that stereotypes can be understood along two primary dimensions: warmth and competence.
It is known that countering stereotypes with anti-stereotypical examples is one of the most effective ways to reduce biased thinking.
arXiv Detail & Related papers (2021-06-04T16:53:37Z)
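
As a rough illustration of the SCM's two axes, one can score a phrase on warmth and competence by comparing its embedding with seed adjectives for each dimension. The seed words and embedding model below are assumptions for the sketch, not the paper's lexicon or classifier.

```python
# Rough SCM-style scoring sketch: place a phrase on warmth and competence
# axes via embedding similarity to seed adjectives. Seed words and the
# embedding model are illustrative assumptions only.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
WARMTH = ["friendly", "warm", "trustworthy", "sincere"]
COMPETENCE = ["competent", "skilled", "intelligent", "capable"]

def scm_scores(phrase: str) -> tuple[float, float]:
    vecs = model.encode([phrase] + WARMTH + COMPETENCE)
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit-normalise
    target, rest = vecs[0], vecs[1:]
    sims = rest @ target                     # cosine similarity to each seed word
    k = len(WARMTH)
    return float(sims[:k].mean()), float(sims[k:].mean())  # (warmth, competence)

print(scm_scores("elderly people"))
```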
This list was automatically generated from the titles and abstracts of the papers on this site.