A Taxonomy of the Biases of the Images created by Generative Artificial Intelligence
- URL: http://arxiv.org/abs/2407.01556v1
- Date: Thu, 2 May 2024 22:01:28 GMT
- Title: A Taxonomy of the Biases of the Images created by Generative Artificial Intelligence
- Authors: Adriana Fernández de Caleya Vázquez, Eduardo C. Garrido-Merchán
- Abstract summary: Generative artificial intelligence models show impressive performance, automatically creating unique content from nothing more than a user prompt.
We analyze in detail how the content generated by these models can be strongly biased with respect to a plethora of variables.
We discuss the social, political and economic implications of these biases and possible ways to mitigate them.
- Score: 2.0257616108612373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative artificial intelligence models show impressive performance, automatically creating unique content from nothing more than a user prompt, which is revolutionizing fields such as marketing and design. Beyond models whose output is text, there are also models able to automatically generate high-quality, genuine-looking images and videos from a prompt. Although the performance in image creation seems impressive, the content these models generate needs to be assessed carefully, as users are uploading this material to the internet on a massive scale. Critically, generative AI systems are statistical models whose parameter values are estimated by algorithms that maximize the likelihood of the parameters given an image dataset. Consequently, if the image dataset is biased with respect to sensitive variables such as gender or skin color, the content generated by these models can be harmful for certain groups of people. As users generate this content and upload it to the internet, these biases perpetuate harmful stereotypes for vulnerable groups, polarizing the social view of, for example, what beauty or disability is and means. In this work, we analyze in detail how the content generated by these models can be strongly biased with respect to a plethora of variables, which we organize into a new taxonomy for image generative AI. We also discuss the social, political and economic implications of these biases and possible ways to mitigate them.
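As a minimal formal sketch of the estimation principle the abstract refers to (the notation is generic and not taken from the paper), the parameters are fit by maximum likelihood to the training images, so any imbalance in the dataset's distribution over sensitive attributes is inherited by the fitted model:

```latex
% Generic maximum-likelihood objective assumed here; \mathcal{D} is the image dataset.
\hat{\theta} \;=\; \arg\max_{\theta} \; \sum_{x_i \in \mathcal{D}} \log p_{\theta}(x_i)
```

If $\mathcal{D}$ over-represents particular genders or skin tones, samples drawn from $p_{\hat{\theta}}$ will tend to reproduce that imbalance.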
Related papers
- KITTEN: A Knowledge-Intensive Evaluation of Image Generation on Visual Entities [93.74881034001312]
We conduct a systematic study on the fidelity of entities in text-to-image generation models.
We focus on their ability to generate a wide range of real-world visual entities, such as landmark buildings, aircraft, plants, and animals.
Our findings reveal that even the most advanced text-to-image models often fail to generate entities with accurate visual details.
arXiv Detail & Related papers (2024-10-15T17:50:37Z)
- Analyzing Quality, Bias, and Performance in Text-to-Image Generative Models [0.0]
Despite advances in generative models, most studies ignore the presence of bias.
In this paper, we examine several text-to-image models not only by qualitatively assessing their performance in generating accurate images of human faces, groups, and specified numbers of objects but also by presenting a social bias analysis.
As expected, models with larger capacity generate higher-quality images. However, we also document the inherent gender or social biases these models possess, offering a more complete understanding of their impact and limitations.
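As a rough illustration of this kind of social-bias audit (not the paper's actual protocol; the model IDs, prompt, and label set below are assumptions), one can generate images for a gender-neutral prompt and tally zero-shot labels for perceived gender:

```python
# Hedged sketch: audit perceived-gender skew in text-to-image outputs.
# Model IDs, prompt, and labels are illustrative assumptions, not the paper's setup.
from collections import Counter

import torch
from diffusers import StableDiffusionPipeline
from transformers import pipeline as hf_pipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image generator (any diffusion checkpoint would do for this sketch).
generator = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)

# Zero-shot image classifier, used only as a coarse proxy for perceived gender.
classifier = hf_pipeline(
    "zero-shot-image-classification", model="openai/clip-vit-base-patch32"
)

prompt = "a photo of the face of a doctor"   # deliberately gender-neutral prompt
labels = ["a photo of a man", "a photo of a woman"]

counts = Counter()
for _ in range(32):  # small sample; a real audit would use far more images
    image = generator(prompt).images[0]
    top_label = classifier(image, candidate_labels=labels)[0]["label"]
    counts[top_label] += 1

print(dict(counts))  # a strong skew here hints at a gender bias for this prompt
```

The same loop can be repeated over many prompts (occupations, adjectives, nationalities) to compare skew across sensitive variables.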
arXiv Detail & Related papers (2024-06-28T14:10:42Z)
- Improving face generation quality and prompt following with synthetic captions [57.47448046728439]
We introduce a training-free pipeline designed to generate accurate appearance descriptions from images of people.
We then use these synthetic captions to fine-tune a text-to-image diffusion model.
Our results demonstrate that this approach significantly improves the model's ability to generate high-quality, realistic human faces.
arXiv Detail & Related papers (2024-05-17T15:50:53Z)
- Would Deep Generative Models Amplify Bias in Future Models? [29.918422914275226]
We investigate the impact of deep generative models on potential social biases in upcoming computer vision models.
We conduct simulations by substituting original images in COCO and CC3M datasets with images generated through Stable Diffusion.
Contrary to expectations, our findings indicate that introducing generated images during training does not uniformly amplify bias.
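A minimal sketch of the kind of substitution simulation described above, assuming one generated counterpart per real image; the pairing, rate, and file names are illustrative rather than the paper's exact setup:

```python
# Hedged sketch: replace a fraction of real training images with generated ones
# to simulate training-set contamination. Rates and pairing are illustrative.
import random
from typing import List


def contaminate(real_paths: List[str], synthetic_paths: List[str], rate: float,
                seed: int = 0) -> List[str]:
    """Return a training list in which `rate` of the real images are swapped
    for generated counterparts (assumes one synthetic image per real image)."""
    assert 0.0 <= rate <= 1.0 and len(real_paths) == len(synthetic_paths)
    rng = random.Random(seed)
    n_swap = int(rate * len(real_paths))
    swap_idx = set(rng.sample(range(len(real_paths)), n_swap))
    return [synthetic_paths[i] if i in swap_idx else real_paths[i]
            for i in range(len(real_paths))]


# Example: half of the images in the training list come from the generator.
mixed = contaminate([f"real_{i}.jpg" for i in range(10)],
                    [f"gen_{i}.png" for i in range(10)], rate=0.5)
print(mixed)
```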
arXiv Detail & Related papers (2024-04-04T06:58:39Z)
- Invisible Relevance Bias: Text-Image Retrieval Models Prefer AI-Generated Images [67.18010640829682]
We show that AI-generated images introduce an invisible relevance bias to text-image retrieval models.
The inclusion of AI-generated images in the training data of the retrieval models exacerbates the invisible relevance bias.
We propose an effective training method aimed at alleviating the invisible relevance bias.
arXiv Detail & Related papers (2023-11-23T16:22:58Z)
- AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
Generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z)
- BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models [54.19289900203071]
The rise in popularity of text-to-image generative artificial intelligence has attracted widespread public interest.
We demonstrate that this technology can be attacked to generate content that subtly manipulates its users.
We propose a Backdoor Attack on text-to-image Generative Models (BAGM).
Our attack is the first to target three popular text-to-image generative models across three stages of the generative process.
arXiv Detail & Related papers (2023-07-31T08:34:24Z)
- Evaluating Data Attribution for Text-to-Image Models [62.844382063780365]
We evaluate attribution through "customization" methods, which tune an existing large-scale model toward a given exemplar object or style.
Our key insight is that this allows us to efficiently create synthetic images that are computationally influenced by the exemplar by construction.
By taking into account the inherent uncertainty of the problem, we can assign soft attribution scores over a set of training images.
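One simple, illustrative way to obtain "soft attribution scores" of the kind mentioned above (a stand-in, not the paper's estimator) is a temperature-scaled softmax over per-training-image influence or similarity scores:

```python
# Hedged sketch: convert raw per-training-image scores into soft attribution
# weights with a temperature-scaled softmax. Scores and temperature are made up.
import numpy as np


def soft_attribution(scores: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Map arbitrary similarity/influence scores to weights that sum to 1.
    Lower temperature -> sharper attribution, higher -> more uniform."""
    z = (scores - scores.max()) / temperature      # subtract max for stability
    weights = np.exp(z)
    return weights / weights.sum()


# Example: three candidate training images with illustrative raw scores.
raw = np.array([2.1, 0.3, 1.4])
print(soft_attribution(raw, temperature=0.5))  # roughly [0.79, 0.02, 0.19]
```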
arXiv Detail & Related papers (2023-06-15T17:59:51Z)
- Mitigating Inappropriateness in Image Generation: Can there be Value in Reflecting the World's Ugliness? [18.701950647429]
We demonstrate inappropriate degeneration on a large scale for various generative text-to-image models.
We use models' representations of the world's ugliness to align them with human preferences.
arXiv Detail & Related papers (2023-05-28T13:35:50Z)
- Will Large-scale Generative Models Corrupt Future Datasets? [5.593352892211305]
Large-scale text-to-image generative models can generate high-quality and realistic images from users' prompts.
This paper empirically answers this question by simulating contamination.
We conclude that generated images negatively affect downstream performance, although the significance depends on the task and the amount of generated images.
arXiv Detail & Related papers (2022-11-15T12:25:33Z)
- Inferring Offensiveness In Images From Natural Language Supervision [20.294073012815854]
Large image datasets automatically scraped from the web may contain derogatory terms as categories and offensive images.
We show that pre-trained transformers themselves provide a methodology for the automated curation of large-scale vision datasets.
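A rough sketch of how a pre-trained vision-language transformer could support this kind of automated curation (an assumed setup, not the paper's exact method): score each image against text probes for unwanted content and flag those whose match exceeds a threshold.

```python
# Hedged sketch: flag potentially offensive images with a pre-trained CLIP model.
# The concept prompts, threshold, and model ID are illustrative assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Text probes for unwanted content vs. a neutral reference class.
prompts = ["an offensive or violent image", "an ordinary everyday photo"]


def flag_image(path: str, threshold: float = 0.6) -> bool:
    """Return True if the image matches the 'offensive' probe more strongly
    than the threshold, marking it for manual review or removal."""
    image = Image.open(path).convert("RGB")
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]
    return probs[0].item() > threshold


# Example usage over a scraped dataset (paths are placeholders).
for path in ["img_001.jpg", "img_002.jpg"]:
    print(path, "flagged" if flag_image(path) else "kept")
```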
arXiv Detail & Related papers (2021-10-08T16:19:21Z)