Stable Bias: Analyzing Societal Representations in Diffusion Models
- URL: http://arxiv.org/abs/2303.11408v2
- Date: Thu, 9 Nov 2023 22:37:29 GMT
- Title: Stable Bias: Analyzing Societal Representations in Diffusion Models
- Authors: Alexandra Sasha Luccioni, Christopher Akiki, Margaret Mitchell, Yacine Jernite
- Abstract summary: We propose a new method for exploring the social biases in Text-to-Image (TTI) systems.
Our approach relies on characterizing the variation in generated images triggered by enumerating gender and ethnicity markers in the prompts.
We leverage this method to analyze images generated by 3 popular TTI systems and find that while all of their outputs show correlations with US labor demographics, they also consistently under-represent marginalized identities to different extents.
- Score: 72.27121528451528
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As machine learning-enabled Text-to-Image (TTI) systems are becoming
increasingly prevalent and seeing growing adoption as commercial services,
characterizing the social biases they exhibit is a necessary first step to
lowering their risk of discriminatory outcomes. This evaluation, however, is
made more difficult by the synthetic nature of these systems' outputs: common
definitions of diversity are grounded in social categories of people living in
the world, whereas the artificial depictions of fictive humans created by these
systems have no inherent gender or ethnicity. To address this need, we propose
a new method for exploring the social biases in TTI systems. Our approach
relies on characterizing the variation in generated images triggered by
enumerating gender and ethnicity markers in the prompts, and comparing it to
the variation engendered by spanning different professions. This allows us to
(1) identify specific bias trends, (2) provide targeted scores to directly
compare models in terms of diversity and representation, and (3) jointly model
interdependent social variables to support a multidimensional analysis. We
leverage this method to analyze images generated by 3 popular TTI systems
(Dall-E 2, Stable Diffusion v1.4 and v2) and find that while all of their
outputs show correlations with US labor demographics, they also consistently
under-represent marginalized identities to different extents. We also release
the datasets and low-code interactive bias exploration platforms developed for
this work, as well as the necessary tools to similarly evaluate additional TTI
systems.
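The core of the method is enumerating identity markers and professions into a prompt grid and comparing the variation each axis induces in the generated images. A minimal sketch of the prompt-enumeration step is below; the marker lists and prompt template here are illustrative placeholders, not the paper's actual wording.

```python
from itertools import product

# Hypothetical marker sets for illustration; the paper's actual marker
# lists and prompt phrasing may differ.
GENDER_MARKERS = ["woman", "man", "non-binary person"]
ETHNICITY_MARKERS = ["Black", "East Asian", "Hispanic", "White"]
PROFESSIONS = ["doctor", "janitor", "CEO"]

def build_prompts():
    """Enumerate identity-marker prompts and profession prompts for a
    TTI system, so variation along each axis can be compared."""
    prompts = []
    # Axis 1: gender x ethnicity markers, no profession.
    for ethnicity, gender in product(ETHNICITY_MARKERS, GENDER_MARKERS):
        prompts.append(f"Photo portrait of a {ethnicity} {gender}")
    # Axis 2: professions, no identity markers.
    for profession in PROFESSIONS:
        prompts.append(f"Photo portrait of a {profession}")
    return prompts

prompts = build_prompts()
print(len(prompts))  # 4 ethnicities * 3 genders + 3 professions = 15
```

Each prompt would then be sent to the TTI system under study, and the resulting image sets clustered or scored to quantify diversity and representation.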
Related papers
- The Factuality Tax of Diversity-Intervened Text-to-Image Generation: Benchmark and Fact-Augmented Intervention [61.80236015147771]
We quantify the trade-off between using diversity interventions and preserving demographic factuality in T2I models.
Experiments on DoFaiR reveal that diversity-oriented instructions increase the number of different gender and racial groups depicted, at the cost of demographic factuality.
We propose Fact-Augmented Intervention (FAI), which instructs a Large Language Model (LLM) to reflect on verbalized or retrieved factual information about gender and racial compositions of generation subjects in history.
arXiv Detail & Related papers (2024-06-29T09:09:42Z)
- Towards Inclusive Face Recognition Through Synthetic Ethnicity Alteration [11.451395489475647]
We explore ethnicity alteration and skin tone modification using synthetic face image generation methods to increase the diversity of datasets.
We conduct a detailed analysis by first constructing a balanced face image dataset representing three ethnicities: Asian, Black, and Indian.
We then make use of existing Generative Adversarial Network-based (GAN) image-to-image translation and manifold learning models to alter the ethnicity from one to another.
arXiv Detail & Related papers (2024-05-02T13:31:09Z)
- Leveraging Prototypical Representations for Mitigating Social Bias without Demographic Information [50.29934517930506]
DAFair is a novel approach to address social bias in language models.
We leverage prototypical demographic texts and incorporate a regularization term during the fine-tuning process to mitigate bias.
arXiv Detail & Related papers (2024-03-14T15:58:36Z)
- DSAP: Analyzing Bias Through Demographic Comparison of Datasets [4.8741052091630985]
We propose DSAP (Demographic Similarity from Auxiliary Profiles), a two-step methodology for comparing the demographic composition of two datasets.
DSAP can be deployed in three key applications: to detect and characterize demographic blind spots and bias issues across datasets, to measure dataset demographic bias in single datasets, and to measure dataset demographic shift in deployment scenarios.
An essential feature of DSAP is its ability to robustly analyze datasets without explicit demographic labels, offering simplicity and interpretability for a wide range of situations.
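DSAP's two steps, as summarized above, are to infer a demographic profile for each dataset from an auxiliary model's predictions, then compare the two profiles. A minimal sketch under stated assumptions follows; the similarity measure here (one minus total variation distance) is an illustrative choice, not necessarily the one DSAP uses.

```python
from collections import Counter

def demographic_profile(predicted_labels):
    """Step 1: summarize auxiliary-model predictions over a dataset as a
    normalized distribution over demographic categories."""
    counts = Counter(predicted_labels)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def profile_similarity(p, q):
    """Step 2: compare two profiles. Here: 1 - total variation distance,
    so 1.0 means identical composition and 0.0 means disjoint."""
    keys = set(p) | set(q)
    return 1.0 - 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Toy example with hypothetical auxiliary-model outputs for two datasets.
a = demographic_profile(["woman"] * 50 + ["man"] * 50)
b = demographic_profile(["woman"] * 20 + ["man"] * 80)
print(round(profile_similarity(a, b), 2))  # 0.7
```

Because the profiles come from an auxiliary classifier rather than ground-truth labels, this comparison works even when the datasets carry no explicit demographic annotations.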
arXiv Detail & Related papers (2023-12-22T11:51:20Z)
- Bias and Fairness in Large Language Models: A Survey [73.87651986156006]
We present a comprehensive survey of bias evaluation and mitigation techniques for large language models (LLMs).
We first consolidate, formalize, and expand notions of social bias and fairness in natural language processing.
We then unify the literature by proposing three intuitive taxonomies: two for bias evaluation and one for mitigation.
arXiv Detail & Related papers (2023-09-02T00:32:55Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
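The systemic-failure notion above can be made concrete as the fraction of instances that every deployed model misclassifies. A minimal sketch, with toy predictions standing in for real deployed models:

```python
def systemic_failure_rate(predictions, labels):
    """Fraction of instances misclassified by EVERY model in the
    ecosystem: users for whom no available model works."""
    n = len(labels)
    failed_by_all = sum(
        1 for i in range(n)
        if all(preds[i] != labels[i] for preds in predictions)
    )
    return failed_by_all / n

# Toy ecosystem: three models' predictions on four instances.
labels = [1, 0, 1, 1]
model_preds = [
    [1, 1, 0, 1],  # model A
    [0, 1, 0, 1],  # model B
    [1, 1, 0, 0],  # model C
]
print(systemic_failure_rate(model_preds, labels))  # 0.5: instances 1, 2
```

A per-model error rate would miss this: each model individually can look acceptable while the same subset of users is failed by all of them.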
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- Towards Explaining Demographic Bias through the Eyes of Face Recognition Models [6.889667606945215]
Biases inherent in both data and algorithms make the fairness of machine learning (ML)-based decision-making systems less than optimal.
We aim to provide a set of explainability tools that analyze the differences in face recognition models' behavior when processing different demographic groups.
We do that by leveraging higher-order statistical information based on activation maps to build explainability tools that link the FR models' behavior differences to certain facial regions.
arXiv Detail & Related papers (2022-08-29T07:23:06Z)
- Assessing Demographic Bias Transfer from Dataset to Model: A Case Study in Facial Expression Recognition [1.5340540198612824]
Of the proposed metrics, two focus on the representational and stereotypical bias of the dataset, and the third on the residual bias of the trained model.
We demonstrate the usefulness of the metrics by applying them to a FER problem based on the popular Affectnet dataset.
arXiv Detail & Related papers (2022-05-20T09:40:42Z)
- Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) can be potentially dangerous in manifesting undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.