Disability Representations: Finding Biases in Automatic Image Generation
- URL: http://arxiv.org/abs/2406.14993v1
- Date: Fri, 21 Jun 2024 09:12:31 GMT
- Title: Disability Representations: Finding Biases in Automatic Image Generation
- Authors: Yannis Tevissen
- Abstract summary: This study investigates the representation biases in popular image generation models towards people with disabilities (PWD).
The results indicate a significant bias, with most generated images portraying disabled individuals as old, sad, and predominantly using manual wheelchairs.
These findings highlight the urgent need for more inclusive AI development, ensuring diverse and accurate representation of PWD in generated images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in image generation technology have enabled widespread access to AI-generated imagery, prominently used in advertising, entertainment, and progressively in every form of visual content. However, these technologies often perpetuate societal biases. This study investigates the representation biases in popular image generation models towards people with disabilities (PWD). Through a comprehensive experiment involving several popular text-to-image models, we analyzed the depiction of disability. The results indicate a significant bias, with most generated images portraying disabled individuals as old, sad, and predominantly using manual wheelchairs. These findings highlight the urgent need for more inclusive AI development, ensuring diverse and accurate representation of PWD in generated images. This research underscores the importance of addressing and mitigating biases in AI models to foster equitable and realistic representations.
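The experiment described above boils down to generating images from disability-related prompts and tallying how often each depiction attribute (age, mood, mobility aid) appears. The paper does not publish its analysis code, so the following is a minimal sketch of that tallying step, assuming attribute labels have already been assigned to each generated image by human raters or a classifier; the records and attribute names are illustrative, not the authors' data.

```python
from collections import Counter

# Hypothetical annotations: each record holds the attributes a rater
# (or classifier) assigned to one generated image of a person with a disability.
annotations = [
    {"age": "old", "mood": "sad", "aid": "manual wheelchair"},
    {"age": "old", "mood": "sad", "aid": "manual wheelchair"},
    {"age": "old", "mood": "neutral", "aid": "manual wheelchair"},
    {"age": "young", "mood": "happy", "aid": "cane"},
]

def attribute_shares(records, key):
    """Return the fraction of images carrying each value of one attribute."""
    counts = Counter(r[key] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

for key in ("age", "mood", "aid"):
    print(key, attribute_shares(annotations, key))
```

A heavily skewed share for a single value (here, "old", "sad", and "manual wheelchair" each dominate) is the kind of distributional signal the study reports as representation bias.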
Related papers
- Using complex prompts to identify fine-grained biases in image generation through ChatGPT-4o [0.0]
Studying large AI models can reveal two dimensions of bias: bias in the training data or outputs of an AI, and bias in society itself.
I briefly discuss how complex prompts to image generation AIs can be used to investigate either dimension of bias.
arXiv Detail & Related papers (2025-04-01T03:17:35Z) - Exploring Bias in over 100 Text-to-Image Generative Models [49.60774626839712]
We investigate bias trends in text-to-image generative models over time, focusing on the increasing availability of models through open platforms like Hugging Face.
We assess bias across three key dimensions: (i) distribution bias, (ii) generative hallucination, and (iii) generative miss-rate.
Our findings indicate that artistic and style-transferred models exhibit significant bias, whereas foundation models, benefiting from broader training distributions, are becoming progressively less biased.
arXiv Detail & Related papers (2025-03-11T03:40:44Z) - When Does Perceptual Alignment Benefit Vision Representations? [76.32336818860965]
We investigate how aligning vision model representations to human perceptual judgments impacts their usability.
We find that aligning models to perceptual judgments yields representations that improve upon the original backbones across many downstream tasks.
Our results suggest that injecting an inductive bias about human perceptual knowledge into vision models can contribute to better representations.
arXiv Detail & Related papers (2024-10-14T17:59:58Z) - Illustrating Classic Brazilian Books using a Text-To-Image Diffusion Model [0.4374837991804086]
Latent Diffusion Models (LDMs) signify a paradigm shift in the domain of AI capabilities.
This article delves into the feasibility of employing the Stable Diffusion LDM to illustrate literary works.
arXiv Detail & Related papers (2024-08-01T13:28:15Z) - RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z) - Towards the Detection of AI-Synthesized Human Face Images [12.090322373964124]
This paper presents a benchmark including human face images produced by Generative Adversarial Networks (GANs) and a variety of diffusion models (DMs).
The forgery traces introduced by different generative models are then analyzed in the frequency domain to draw various insights.
The paper further demonstrates that a detector trained with frequency representation can generalize well to other unseen generative models.
arXiv Detail & Related papers (2024-02-13T19:37:44Z) - New Job, New Gender? Measuring the Social Bias in Image Generation Models [85.26441602999014]
Image generation models are susceptible to generating content that perpetuates social stereotypes and biases.
We propose BiasPainter, a framework that can accurately, automatically and comprehensively trigger social bias in image generation models.
BiasPainter can achieve 90.8% accuracy on automatic bias detection, which is significantly higher than the results reported in previous work.
arXiv Detail & Related papers (2024-01-01T14:06:55Z) - Exploring Social Bias in Downstream Applications of Text-to-Image Foundation Models [72.06006736916821]
We use synthetic images to probe two applications of text-to-image models, image editing and classification, for social bias.
Using our methodology, we uncover meaningful and significant intersectional social biases in Stable Diffusion, a state-of-the-art open-source text-to-image model.
Our findings caution against the uninformed adoption of text-to-image foundation models for downstream tasks and services.
arXiv Detail & Related papers (2023-12-05T14:36:49Z) - TIBET: Identifying and Evaluating Biases in Text-to-Image Generative Models [22.076898042211305]
We propose a general approach to study and quantify a broad spectrum of biases, for any TTI model and for any prompt.
Our approach automatically identifies potential biases that might be relevant to the given prompt, and measures those biases.
We show that our method is uniquely capable of explaining complex multi-dimensional biases through semantic concepts.
arXiv Detail & Related papers (2023-12-03T02:31:37Z) - Invisible Relevance Bias: Text-Image Retrieval Models Prefer AI-Generated Images [67.18010640829682]
We show that AI-generated images introduce an invisible relevance bias to text-image retrieval models.
The inclusion of AI-generated images in the training data of the retrieval models exacerbates the invisible relevance bias.
We propose an effective training method aimed at alleviating the invisible relevance bias.
arXiv Detail & Related papers (2023-11-23T16:22:58Z) - Social Biases through the Text-to-Image Generation Lens [9.137275391251517]
Text-to-Image (T2I) generation is enabling new applications that support creators, designers, and general end users of productivity software.
We take a multi-dimensional approach to studying and quantifying common social biases as reflected in the generated images.
We present findings for two popular T2I models: DALLE-v2 and Stable Diffusion.
arXiv Detail & Related papers (2023-03-30T05:29:13Z) - Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find a broad range of ordinary prompts produce stereotypes, including prompts simply mentioning traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.