A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the
Input is Under-Specified?
- URL: http://arxiv.org/abs/2302.07159v1
- Date: Tue, 14 Feb 2023 16:11:06 GMT
- Title: A Friendly Face: Do Text-to-Image Systems Rely on Stereotypes when the
Input is Under-Specified?
- Authors: Kathleen C. Fraser, Svetlana Kiritchenko, and Isar Nejadgholi
- Abstract summary: We investigate properties of images generated in response to prompts which are visually under-specified.
We find that in many cases, images contain similar demographic biases to those reported in the stereotype literature.
- Score: 7.586041161211335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As text-to-image systems continue to grow in popularity with the general
public, questions have arisen about bias and diversity in the generated images.
Here, we investigate properties of images generated in response to prompts
which are visually under-specified, but contain salient social attributes
(e.g., 'a portrait of a threatening person' versus 'a portrait of a friendly
person'). Grounding our work in social cognition theory, we find that in many
cases, images contain similar demographic biases to those reported in the
stereotype literature. However, trends are inconsistent across different models
and further investigation is warranted.
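A minimal sketch of this kind of probe, assuming Stable Diffusion (via the Hugging Face diffusers library) as one system under test; the annotation step is a hypothetical hook, since the paper's exact models and labelling protocol are not reproduced here:

```python
# Sketch: generate images for visually under-specified prompts that differ only
# in a salient social attribute, then tally perceived demographic labels.
# Model choice and annotate_demographics() are illustrative assumptions.
from collections import Counter

import torch
from diffusers import StableDiffusionPipeline

PROMPTS = [
    "a portrait of a friendly person",
    "a portrait of a threatening person",
]
N_IMAGES = 16  # images per prompt

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def annotate_demographics(image):
    """Hypothetical hook: return a perceived (gender, age, skin-tone) tuple.

    In practice this would be human annotation or a separately validated
    classifier applied to each generated image.
    """
    raise NotImplementedError

for prompt in PROMPTS:
    images = pipe(prompt, num_images_per_prompt=N_IMAGES).images
    counts = Counter(annotate_demographics(img) for img in images)
    print(prompt, dict(counts))
```

Comparing the label distributions for the contrasting prompts is what lets the demographic skew of "friendly" versus "threatening" portraits be quantified.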
Related papers
- New Job, New Gender? Measuring the Social Bias in Image Generation Models [85.26441602999014]
Image generation models are susceptible to generating content that perpetuates social stereotypes and biases.
We propose BiasPainter, a framework that can accurately, automatically and comprehensively trigger social bias in image generation models.
BiasPainter can achieve 90.8% accuracy on automatic bias detection, which is significantly higher than the results reported in previous work.
arXiv Detail & Related papers (2024-01-01T14:06:55Z)
- Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models [72.06006736916821]
We use synthetic images to probe two applications of text-to-image models, image editing and classification, for social bias.
Using our methodology, we uncover meaningful and significant intersectional social biases in Stable Diffusion, a state-of-the-art open-source text-to-image model.
Our findings caution against the uninformed adoption of text-to-image foundation models for downstream tasks and services.
arXiv Detail & Related papers (2023-12-05T14:36:49Z)
- Probing Intersectional Biases in Vision-Language Models with Counterfactual Examples [5.870913541790421]
We employ text-to-image diffusion models to produce counterfactual examples for probing intersectional social biases at scale.
Our approach utilizes Stable Diffusion with cross attention control to produce sets of counterfactual image-text pairs.
We conduct extensive experiments using our generated dataset which reveal the intersectional social biases present in state-of-the-art VLMs.
arXiv Detail & Related papers (2023-10-04T17:25:10Z)
- T2IAT: Measuring Valence and Stereotypical Biases in Text-to-Image Generation [11.109588924016254]
We propose a novel Text-to-Image Association Test (T2IAT) framework that quantifies the implicit stereotypes between concepts and images.
We replicate the previously documented bias tests on generative models, including morally neutral tests on flowers and insects.
The results of these experiments demonstrate the presence of complex stereotypical behaviors in image generations.
arXiv Detail & Related papers (2023-06-01T17:02:51Z)
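The T2IAT statistic above is in the spirit of embedding association tests such as WEAT; a minimal numpy sketch of a WEAT-style differential-association effect size (the paper's exact statistic and choice of embeddings may differ):

```python
# Sketch: WEAT-style effect size between two target sets X, Y (e.g. embeddings
# of images generated for "flowers" vs. "insects") and two attribute sets A, B
# (e.g. embeddings of pleasant vs. unpleasant words). Inputs are assumed to be
# numpy arrays of shape (n, d).
import numpy as np

def _cos(u, V):
    u = u / np.linalg.norm(u)
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return V @ u

def association(w, A, B):
    """s(w, A, B): mean cosine similarity to A minus mean similarity to B."""
    return _cos(w, A).mean() - _cos(w, B).mean()

def effect_size(X, Y, A, B):
    """Cohen's-d-style differential association between target sets X and Y."""
    sx = np.array([association(x, A, B) for x in X])
    sy = np.array([association(y, A, B) for y in Y])
    pooled = np.concatenate([sx, sy])
    return (sx.mean() - sy.mean()) / pooled.std(ddof=1)
```

An effect size near zero indicates no differential association; large positive values mean the X images lean toward attribute set A more strongly than the Y images do.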
- Social Biases through the Text-to-Image Generation Lens [9.137275391251517]
Text-to-Image (T2I) generation is enabling new applications that support creators, designers, and general end users of productivity software.
We take a multi-dimensional approach to studying and quantifying common social biases as reflected in the generated images.
We present findings for two popular T2I models: DALLE-v2 and Stable Diffusion.
arXiv Detail & Related papers (2023-03-30T05:29:13Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find a broad range of ordinary prompts produce stereotypes, including prompts simply mentioning traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z)
- How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions? [67.97752431429865]
We study the effect on the diversity of the generated images when adding ethical interventions.
Preliminary studies indicate that a large change in the model predictions is triggered by certain phrases such as 'irrespective of gender'.
arXiv Detail & Related papers (2022-10-27T07:32:39Z)
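One simple way to quantify the diversity effect described in the entry above is to compare the entropy of the perceived demographic label distribution with and without the intervention phrase; a small sketch with placeholder counts (the paper's own diversity metric may differ):

```python
# Sketch: Shannon entropy of the perceived-gender distribution as a diversity
# proxy, compared for a base prompt and the same prompt with an ethical
# intervention such as "... irrespective of gender". Counts are hypothetical.
import numpy as np

def entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

base_counts = [92, 8]         # e.g. "a photo of a doctor"
intervened_counts = [55, 45]  # e.g. "a photo of a doctor, irrespective of gender"

print("base diversity:", entropy(base_counts))
print("with intervention:", entropy(intervened_counts))
```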
- DALL-Eval: Probing the Reasoning Skills and Social Biases of Text-to-Image Generation Models [73.12069620086311]
We investigate the visual reasoning capabilities and social biases of text-to-image models.
First, we measure three visual reasoning skills: object recognition, object counting, and spatial relation understanding.
Second, we assess the gender and skin tone biases by measuring the gender/skin tone distribution of generated images.
arXiv Detail & Related papers (2022-02-08T18:36:52Z)
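A minimal sketch of the second measurement in the entry above, using zero-shot CLIP only as a stand-in for DALL-Eval's own evaluation classifiers to estimate the gender-presentation distribution of a set of generated images (label phrasing and model choice are assumptions):

```python
# Sketch: estimate the perceived gender-presentation distribution of generated
# images via zero-shot CLIP. This is an illustrative stand-in, not DALL-Eval's
# actual evaluation classifiers.
from collections import Counter

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

LABELS = ["a photo of a man", "a photo of a woman"]

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def label_distribution(image_paths):
    counts = Counter()
    for path in image_paths:
        image = Image.open(path).convert("RGB")
        inputs = processor(text=LABELS, images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            logits = model(**inputs).logits_per_image  # shape (1, len(LABELS))
        counts[LABELS[int(logits.argmax())]] += 1
    total = sum(counts.values())
    return {label: counts[label] / total for label in LABELS}
```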
- Image Representations Learned With Unsupervised Pre-Training Contain Human-like Biases [3.0349733976070015]
We develop a novel method for quantifying biased associations between representations of social concepts and attributes in images.
We find that state-of-the-art unsupervised models trained on ImageNet, a popular benchmark image dataset, automatically learn racial, gender, and intersectional biases.
arXiv Detail & Related papers (2020-10-28T15:55:49Z)
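Significance for this kind of image-embedding association is typically assessed with a permutation test over the two target sets, as in WEAT-style analyses; a self-contained sketch assuming numpy embedding arrays (the paper's exact test may differ):

```python
# Sketch: one-sided permutation test for the differential association between
# target embedding sets X and Y with respect to attribute sets A and B.
import numpy as np

def _assoc(w, A, B):
    # mean cosine similarity of w to rows of A minus rows of B
    w = w / np.linalg.norm(w)
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    Bn = B / np.linalg.norm(B, axis=1, keepdims=True)
    return (An @ w).mean() - (Bn @ w).mean()

def permutation_p_value(X, Y, A, B, n_perm=10_000, seed=0):
    """Fraction of random re-partitions of the pooled targets whose test
    statistic is at least as large as the observed one."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([X, Y])
    n_x = len(X)

    def stat(Xp, Yp):
        return sum(_assoc(x, A, B) for x in Xp) - sum(_assoc(y, A, B) for y in Yp)

    observed = stat(X, Y)
    hits = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        hits += stat(pooled[idx[:n_x]], pooled[idx[n_x:]]) >= observed
    return hits / n_perm
```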
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.