Bias in Generative AI
- URL: http://arxiv.org/abs/2403.02726v1
- Date: Tue, 5 Mar 2024 07:34:41 GMT
- Title: Bias in Generative AI
- Authors: Mi Zhou, Vibhanshu Abhishek, Timothy Derdenger, Jaymo Kim, Kannan
Srinivasan
- Abstract summary: This study analyzed images generated by three popular generative artificial intelligence (AI) tools to investigate potential bias in AI generators.
All three AI generators exhibited bias against women and African Americans.
Women were depicted as younger with more smiles and happiness, while men were depicted as older with more neutral expressions and anger.
- Score: 2.5830293457323266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study analyzed images generated by three popular generative artificial
intelligence (AI) tools - Midjourney, Stable Diffusion, and DALL-E 2 -
representing various occupations to investigate potential bias in AI
generators. Our analysis revealed two overarching areas of concern: (1)
systematic gender and racial biases, and (2) subtle biases in facial
expressions and appearances. First, we found that all three
AI generators exhibited bias against women and African Americans. Moreover, we
found that the gender and racial biases uncovered in our analysis were even
more pronounced than the status quo reflected in labor force statistics or
Google Images, intensifying the very biases society is actively striving to
rectify. Second, our study uncovered more nuanced
prejudices in the portrayal of emotions and appearances. For example, women
were depicted as younger with more smiles and happiness, while men were
depicted as older with more neutral expressions and anger, posing a risk that
generative AI models may unintentionally depict women as more submissive and
less competent than men. Because they are less overt, such nuanced biases may
be more problematic: they can shape perceptions unconsciously and may be more
difficult to rectify. Although the extent of bias varied depending on the
model, the direction of bias remained consistent in both commercial and
open-source AI generators. As these tools become commonplace, our study
highlights the urgency to identify and mitigate various biases in generative
AI, reinforcing the commitment to ensuring that AI technologies benefit all of
humanity for a more inclusive future.
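The study's headline comparison, checking whether a group's share among generated images for an occupation diverges from a real-world baseline such as labor force statistics, can be illustrated with a minimal sketch. The tool names, counts, and baseline share below are hypothetical placeholders, not figures from the paper:

```python
# Minimal sketch of a representation audit: compare the share of a
# demographic group among generated images for an occupation against a
# real-world baseline (e.g., labor force statistics). All numbers below
# are hypothetical placeholders, not data from the paper.

def representation_gap(generated_count, total_images, baseline_share):
    """Return (generated_share, gap); a negative gap means the group is
    underrepresented in the generated images relative to the baseline."""
    generated_share = generated_count / total_images
    return generated_share, generated_share - baseline_share

# Hypothetical audit: 100 images of "software engineer" per tool, with a
# hypothetical 25% baseline share of women in the occupation.
audits = {
    "tool_a": {"women_count": 8, "total": 100},
    "tool_b": {"women_count": 15, "total": 100},
}
BASELINE_WOMEN_SHARE = 0.25  # hypothetical labor-force figure

for tool, counts in audits.items():
    share, gap = representation_gap(
        counts["women_count"], counts["total"], BASELINE_WOMEN_SHARE
    )
    print(f"{tool}: generated share {share:.0%}, gap vs baseline {gap:+.0%}")
```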
Related papers
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI-generated versus human-generated content.
We investigate how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z) - Generative AI Carries Non-Democratic Biases and Stereotypes: Representation of Women, Black Individuals, Age Groups, and People with Disability in AI-Generated Images across Occupations [0.0]
This essay aims to highlight how generative AI includes or excludes equity-deserving groups in its outputs.
The findings reveal that generative AI is not equitably inclusive regarding gender, race, age, and visible disability.
arXiv Detail & Related papers (2024-09-20T19:47:31Z) - Survey of Bias In Text-to-Image Generation: Definition, Evaluation, and Mitigation [47.770531682802314]
Even simple prompts can cause T2I models to exhibit conspicuous social bias in generated images.
We present the first extensive survey on bias in T2I generative models.
We discuss how these works define, evaluate, and mitigate different aspects of bias.
arXiv Detail & Related papers (2024-04-01T10:19:05Z) - The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender bias: the well-known bias in gendered occupations and a novel aspect, bias in organizational power.
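As a rough illustration of how a PST-style paired query might be constructed and scored, here is a minimal sketch; the prompt template, role pairs, and annotations are assumptions for illustration, not the paper's actual protocol:

```python
# Hypothetical sketch of a PST-style paired query: one male-stereotyped and
# one female-stereotyped identity in a single prompt, then a tally of how
# often the male-stereotyped role is depicted as a man in annotated output.
# The template, role pairs, and annotations are illustrative assumptions.

ROLE_PAIRS = [("CEO", "assistant"), ("doctor", "nurse")]  # hypothetical pairs

def pst_prompt(male_stereotyped, female_stereotyped):
    """Build one paired prompt covering both identities."""
    return f"A {male_stereotyped} and a {female_stereotyped} working together"

# Hypothetical annotations: for each generated image, the perceived gender
# assigned to each role (e.g., by human raters or a classifier).
annotations = [
    {"pair": ("CEO", "assistant"), "CEO": "man", "assistant": "woman"},
    {"pair": ("CEO", "assistant"), "CEO": "man", "assistant": "woman"},
    {"pair": ("doctor", "nurse"), "doctor": "woman", "nurse": "woman"},
]

def stereotype_rate(annotations):
    """Fraction of images whose male-stereotyped role is depicted as a man."""
    hits = sum(1 for a in annotations if a[a["pair"][0]] == "man")
    return hits / len(annotations)

for male_role, female_role in ROLE_PAIRS:
    print(pst_prompt(male_role, female_role))
print(f"male-stereotyped role shown as man: {stereotype_rate(annotations):.0%}")
```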
arXiv Detail & Related papers (2024-02-16T21:32:27Z) - Smiling Women Pitching Down: Auditing Representational and
Presentational Gender Biases in Image Generative AI [0.6990493129893111]
We examined the prevalence of two occupational gender biases in 15,300 DALL-E 2 images spanning 153 occupations.
DALL-E 2 underrepresents women in male-dominated fields while overrepresenting them in female-dominated occupations.
Our computational algorithm auditing study demonstrates more pronounced representational and presentational biases in DALL-E 2 compared to Google Images.
arXiv Detail & Related papers (2023-05-17T20:59:10Z) - Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of it can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z) - Fairness And Bias in Artificial Intelligence: A Brief Survey of Sources,
Impacts, And Mitigation Strategies [11.323961700172175]
This survey paper offers a succinct, comprehensive overview of fairness and bias in AI.
We review sources of bias, such as data, algorithm, and human decision biases.
We assess the societal impact of biased AI systems, focusing on the perpetuation of inequalities and the reinforcement of harmful stereotypes.
arXiv Detail & Related papers (2023-04-16T03:23:55Z) - Towards Understanding Gender-Seniority Compound Bias in Natural Language
Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z) - Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always as biased as, or more biased than, the academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z) - Responsible AI: Gender bias assessment in emotion recognition [6.833826997240138]
This work studies gender bias in deep learning methods for facial expression recognition.
More biased neural networks show a larger emotion-recognition accuracy gap between male and female test sets.
arXiv Detail & Related papers (2021-03-21T17:00:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.