AI-generated faces influence gender stereotypes and racial homogenization
- URL: http://arxiv.org/abs/2402.01002v2
- Date: Fri, 10 May 2024 09:42:57 GMT
- Title: AI-generated faces influence gender stereotypes and racial homogenization
- Authors: Nouar AlDahoul, Talal Rahwan, Yasir Zaki
- Abstract summary: We document significant biases in Stable Diffusion across six races, two genders, 32 professions, and eight attributes.
This analysis reveals significant racial homogenization, e.g., depicting nearly all Middle Eastern men as dark-skinned, bearded, and wearing a traditional headdress.
Using a preregistered experiment, we show that being presented with inclusive AI-generated faces reduces people's racial and gender biases, while being presented with non-inclusive ones increases such biases.
- Score: 1.6647208383676708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text-to-image generative AI models such as Stable Diffusion are used daily by millions worldwide. However, the extent to which these models exhibit racial and gender stereotypes is not yet fully understood. Here, we document significant biases in Stable Diffusion across six races, two genders, 32 professions, and eight attributes. Additionally, we examine the degree to which Stable Diffusion depicts individuals of the same race as being similar to one another. This analysis reveals significant racial homogenization, e.g., depicting nearly all Middle Eastern men as dark-skinned, bearded, and wearing a traditional headdress. We then propose novel debiasing solutions that address the above stereotypes. Finally, using a preregistered experiment, we show that being presented with inclusive AI-generated faces reduces people's racial and gender biases, while being presented with non-inclusive ones increases such biases. This persists regardless of whether the images are labeled as AI-generated. Taken together, our findings emphasize the need to address biases and stereotypes in AI-generated content.
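The kind of audit the abstract describes can be sketched with the open-source diffusers library: generate many faces per profession prompt, then annotate the perceived demographics of the results. The checkpoint ID, prompt template, and sample count below are illustrative assumptions, not the authors' actual experimental setup.

```python
# Minimal, illustrative sketch of a profession-prompt bias audit with Stable Diffusion.
# The checkpoint ID, prompt template, and sample counts are assumptions for
# illustration only; they are not taken from the paper.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

professions = ["doctor", "nurse", "engineer", "housekeeper"]  # small subset for illustration

for profession in professions:
    prompt = f"A photo of the face of a {profession}"  # assumed prompt template
    # Draw several samples per profession so that downstream annotation
    # (perceived gender and race) can estimate a distribution rather than a single draw.
    images = pipe(prompt, num_images_per_prompt=8).images
    for i, image in enumerate(images):
        image.save(f"{profession}_{i:02d}.png")
```

The saved images would then be labeled, by human raters or a face-attribute classifier, to quantify how the gender and race distributions vary by profession.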
Related papers
- MoESD: Mixture of Experts Stable Diffusion to Mitigate Gender Bias [23.10522891268232]
We introduce a Mixture-of-Experts approach to mitigate gender bias in text-to-image models.
We show that our approach successfully mitigates gender bias while maintaining image quality.
arXiv Detail & Related papers (2024-06-25T14:59:31Z) - Inclusive content reduces racial and gender biases, yet non-inclusive content dominates popular media outlets [1.4204016278692333]
This study examines the manner in which racial and gender groups are portrayed in popular media imagery.
We collect over 300,000 images spanning over five decades and utilize state-of-the-art machine learning models.
We find that racial minorities appear far less frequently than their White counterparts.
We also find that women are more likely to be portrayed with their full bodies in images, whereas men are more frequently presented with their faces.
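A representation-frequency audit of this kind could be organized roughly as below; `classify_demographics` is a hypothetical placeholder for whichever face-attribute model is used, and the directory layout and label names are invented.

```python
# Hypothetical frequency audit over a large media-image corpus.
# `classify_demographics` is a placeholder for a real face-attribute model;
# the paper's actual models and label taxonomy are not reproduced here.
from collections import Counter
from pathlib import Path

def classify_demographics(image_path: Path) -> list[dict]:
    """Return one {'race': ..., 'gender': ...} dict per detected face (placeholder)."""
    raise NotImplementedError("plug in a face-attribute classifier here")

race_counts: Counter = Counter()
gender_counts: Counter = Counter()

for image_path in Path("media_corpus").glob("**/*.jpg"):
    for face in classify_demographics(image_path):
        race_counts[face["race"]] += 1
        gender_counts[face["gender"]] += 1

total_faces = sum(race_counts.values())
for race, count in race_counts.most_common():
    print(f"{race}: {count / total_faces:.1%} of detected faces")
```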
arXiv Detail & Related papers (2024-05-10T11:34:47Z) - Bias in Generative AI [2.5830293457323266]
This study analyzed images generated by three popular generative artificial intelligence (AI) tools to investigate potential bias in AI generators.
All three AI generators exhibited bias against women and African Americans.
Women were depicted as younger with more smiles and happiness, while men were depicted as older with more neutral expressions and anger.
arXiv Detail & Related papers (2024-03-05T07:34:41Z) - The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender biases -- the well-known bias in gendered occupation and a novel aspect: bias in organizational power.
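A minimal sketch of PST-style prompt construction follows; the occupation pairs and prompt wording are invented examples, not the identity pairs or templates defined in the cited paper.

```python
# Illustrative construction of Paired Stereotype Test (PST)-style prompts.
# The occupation pairs and the prompt template are invented examples.
male_stereotyped = ["CEO", "surgeon", "pilot"]
female_stereotyped = ["assistant", "nurse", "flight attendant"]

prompts = [
    f"A photo of a {m} and a {f} working together"
    for m, f in zip(male_stereotyped, female_stereotyped)
]

for p in prompts:
    print(p)

# Images generated from each prompt would then be annotated for the perceived
# gender of each depicted individual, to test whether the model assigns the
# stereotypically male identity (and the position of power) to a man.
```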
arXiv Detail & Related papers (2024-02-16T21:32:27Z) - VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z) - Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models [6.92043136971035]
We investigate how multimodal models handle diverse gender identities.
We find certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised.
The improvements suggested in the paper could pave the way for a future where change is led by the affected community.
arXiv Detail & Related papers (2023-05-26T16:28:49Z) - Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z) - Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find a broad range of ordinary prompts produce stereotypes, including prompts simply mentioning traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z) - How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions? [67.97752431429865]
We study how adding ethical interventions to prompts affects the diversity of the generated images.
Preliminary studies indicate that a large change in the model predictions is triggered by certain phrases such as 'irrespective of gender'.
arXiv Detail & Related papers (2022-10-27T07:32:39Z) - Gender Stereotyping Impact in Facial Expression Recognition [1.5340540198612824]
In recent years, machine learning-based models have become the most popular approach to Facial Expression Recognition (FER).
In publicly available FER datasets, apparent gender representation is usually roughly balanced overall, but the representation within individual labels is not.
We generate derivative datasets with different amounts of stereotypical bias by altering the gender proportions of certain labels.
We observe a discrepancy of up to 29% in the recognition of certain emotions between genders under the worst bias conditions.
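A derivative dataset with a controlled amount of stereotypical bias could be built roughly as below; the column names, example label, and ratio are assumptions, and the paper's own construction may differ.

```python
# Hypothetical sketch of inducing stereotypical bias in an FER dataset by
# altering the gender proportion within a single emotion label.
# Column names ("emotion", "gender") and the example label/ratio are assumptions.
import pandas as pd

def bias_label(df: pd.DataFrame, label: str, female_ratio: float, seed: int = 0) -> pd.DataFrame:
    """Subsample rows of `label` so their female/male split matches `female_ratio`."""
    target = df[df["emotion"] == label]
    rest = df[df["emotion"] != label]
    females = target[target["gender"] == "female"]
    males = target[target["gender"] == "male"]
    # Largest subset of `label` rows that can achieve the requested ratio.
    n_total = int(min(
        len(females) / female_ratio if female_ratio > 0 else float("inf"),
        len(males) / (1 - female_ratio) if female_ratio < 1 else float("inf"),
    ))
    n_female = round(n_total * female_ratio)
    biased = pd.concat([
        females.sample(n=n_female, random_state=seed),
        males.sample(n=n_total - n_female, random_state=seed),
        rest,
    ])
    return biased.reset_index(drop=True)

# Example: make the "happiness" label 90% female, leaving other labels untouched.
# biased_df = bias_label(fer_df, label="happiness", female_ratio=0.9)
```

Training identical FER models on derivative datasets with different ratios then exposes how recognition accuracy diverges between genders.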
arXiv Detail & Related papers (2022-10-11T10:52:23Z) - Investigating Bias in Deep Face Analysis: The KANFace Dataset and Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z)