She Works, He Works: A Curious Exploration of Gender Bias in AI-Generated Imagery
- URL: http://arxiv.org/abs/2407.18524v1
- Date: Fri, 26 Jul 2024 05:56:18 GMT
- Title: She Works, He Works: A Curious Exploration of Gender Bias in AI-Generated Imagery
- Authors: Amalia Foka
- Abstract summary: This paper examines gender bias in AI-generated imagery of construction workers, highlighting discrepancies in the portrayal of male and female figures.
Grounded in Griselda Pollock's theories on visual culture and gender, the analysis reveals that AI models tend to sexualize female figures while portraying male figures as more authoritative and competent.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper examines gender bias in AI-generated imagery of construction workers, highlighting discrepancies in the portrayal of male and female figures. Grounded in Griselda Pollock's theories on visual culture and gender, the analysis reveals that AI models tend to sexualize female figures while portraying male figures as more authoritative and competent. These findings underscore AI's potential to mirror and perpetuate societal biases, emphasizing the need for critical engagement with AI-generated content. The project contributes to discussions on the ethical implications of AI in creative practices and its broader impact on cultural perceptions of gender.
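As a rough illustration of the kind of audit the paper describes, the sketch below generates matched female/male prompt pairs with an off-the-shelf Stable Diffusion checkpoint via the `diffusers` library. The paper does not specify its generation setup, so the model choice and prompt wording here are assumptions.

```python
# Illustrative sketch only: the paper does not specify its generation setup,
# so this assumes an off-the-shelf Stable Diffusion checkpoint loaded with
# the `diffusers` library to produce matched female/male prompt pairs.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = {
    "female": "a female construction worker on a building site",
    "male": "a male construction worker on a building site",
}

# Fix the seed per pair so the only varying factor is the stated gender.
for label, prompt in prompts.items():
    generator = torch.Generator("cuda").manual_seed(0)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"construction_worker_{label}.png")
```

The generated pairs would then be compared qualitatively, as the paper does through Pollock's framework, for differences in sexualization, authority, and competence.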
Related papers
- Human Bias in the Face of AI: The Role of Human Judgement in AI Generated Text Evaluation [48.70176791365903]
This study explores how bias shapes the perception of AI-generated versus human-generated content.
We investigated how human raters respond to labeled and unlabeled content.
arXiv Detail & Related papers (2024-09-29T04:31:45Z)
- Artificial Intelligence (AI) Onto-norms and Gender Equality: Unveiling the Invisible Gender Norms in AI Ecosystems in the Context of Africa [3.7498611358320733]
The study examines how onto-norms propagate certain gender practices in digital spaces through the character and norms of the spaces that shape AI design, training and use.
By examining how data and content can knowingly or unknowingly be used to drive certain social norms in AI ecosystems, the study argues that onto-norms shape how AI engages with content relating to women.
arXiv Detail & Related papers (2024-08-22T22:54:02Z)
- Thinking beyond Bias: Analyzing Multifaceted Impacts and Implications of AI on Gendered Labour [1.5839621757142595]
This paper emphasizes the need to explore AI's broader impacts on gendered labour.
We draw attention to how the AI industry as an integral component of the larger economic structure is transforming the nature of work.
arXiv Detail & Related papers (2024-06-23T20:09:53Z)
- A multitask learning framework for leveraging subjectivity of annotators to identify misogyny [47.175010006458436]
We propose a multitask learning approach to enhance the performance of misogyny identification systems.
We incorporated diverse perspectives from annotators in our model design, considering gender and age across six profile groups.
This research advances content moderation and highlights the importance of embracing diverse perspectives to build effective online moderation systems.
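A minimal sketch of one possible multitask design follows; the paper's actual encoder, heads, and loss are not described in this summary, so all names and dimensions below are hypothetical: a shared text encoder feeds one misogyny-classification head per annotator profile group.

```python
# Minimal sketch of one possible multitask design (the paper's exact
# architecture is not given here): a shared encoder with one binary
# misogyny head per annotator profile group (e.g., gender x age buckets).
import torch
import torch.nn as nn

NUM_GROUPS = 6  # six annotator profile groups, per the abstract

class MultitaskMisogynyClassifier(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, hidden)  # stand-in shared encoder
        self.heads = nn.ModuleList(nn.Linear(hidden, 2) for _ in range(NUM_GROUPS))

    def forward(self, token_ids, offsets):
        shared = self.embed(token_ids, offsets)
        # One logit pair per profile group; losses are summed across heads.
        return [head(shared) for head in self.heads]

model = MultitaskMisogynyClassifier()
criterion = nn.CrossEntropyLoss()
tokens = torch.tensor([101, 2054, 2003, 102])  # toy token ids
offsets = torch.tensor([0])                    # one document in the bag
group_labels = torch.zeros(NUM_GROUPS, dtype=torch.long)  # toy per-group labels
logits = model(tokens, offsets)
loss = sum(criterion(l, group_labels[i].unsqueeze(0)) for i, l in enumerate(logits))
loss.backward()
```

Training each head on the labels of its own annotator group is one way to keep disagreeing perspectives as signal rather than averaging them away.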
arXiv Detail & Related papers (2024-06-22T15:06:08Z)
- Bias in Generative AI [2.5830293457323266]
This study analyzed images generated by three popular generative artificial intelligence (AI) tools to investigate potential bias in AI generators.
All three AI generators exhibited bias against women and African Americans.
Women were depicted as younger with more smiles and happiness, while men were depicted as older with more neutral expressions and anger.
arXiv Detail & Related papers (2024-03-05T07:34:41Z)
- The Male CEO and the Female Assistant: Evaluation and Mitigation of Gender Biases in Text-To-Image Generation of Dual Subjects [58.27353205269664]
We propose the Paired Stereotype Test (PST) framework, which queries T2I models to depict two individuals assigned with male-stereotyped and female-stereotyped social identities.
Using PST, we evaluate two aspects of gender biases -- the well-known bias in gendered occupation and a novel aspect: bias in organizational power.
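A rough sketch of PST-style paired prompting follows; the identity pairs and prompt template here are illustrative assumptions, not the paper's actual wording.

```python
# Rough sketch of PST-style paired prompting (the identity pairs and prompt
# template here are hypothetical; the paper defines its own pairs and phrasing).
from itertools import product

male_stereotyped = ["CEO", "engineer", "pilot"]
female_stereotyped = ["assistant", "nurse", "receptionist"]

pst_prompts = [
    f"A photo of a {a} and a {b} working together"
    for a, b in product(male_stereotyped, female_stereotyped)
]

# Each prompt is sent to the T2I model under test; the genders the model
# assigns to the two roles are then recorded and compared, e.g. how often
# the male-stereotyped role is rendered as a man.
for p in pst_prompts[:3]:
    print(p)
```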
arXiv Detail & Related papers (2024-02-16T21:32:27Z)
- Will the Prince Get True Love's Kiss? On the Model Sensitivity to Gender Perturbation over Fairytale Texts [87.62403265382734]
Recent studies show that traditional fairytales are rife with harmful gender biases.
This work aims to assess learned biases of language models by evaluating their robustness against gender perturbations.
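The sketch below shows a toy version of such a gender perturbation: gendered terms in a passage are swapped, and the model's behaviour on the original and perturbed text can then be compared. The swap dictionary is illustrative, not the paper's full mapping.

```python
# Toy sketch of a gender perturbation: swap gendered terms in a passage and
# compare the model's behaviour on the original vs. perturbed text. The swap
# map here is illustrative, not the paper's full dictionary.
import re

SWAPS = {"he": "she", "she": "he", "prince": "princess", "princess": "prince",
         "his": "her", "her": "his", "king": "queen", "queen": "king"}

def perturb_gender(text: str) -> str:
    # Word-boundary delimited, single pass so swaps don't undo each other.
    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: SWAPS[m.group(0).lower()], text)

story = "The prince rode to the castle, where he met the queen."
print(perturb_gender(story))
# -> "The princess rode to the castle, where she met the king."
```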
arXiv Detail & Related papers (2023-10-16T22:25:09Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
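One way to probe pronoun resolution in this spirit is to score an occupation image against captions that differ only in the pronoun, as sketched below with CLIP via the `transformers` library; VisoGender's own protocol and caption templates differ, so this is only an illustration.

```python
# Hedged sketch of one way to probe pronoun resolution in a vision-language
# model, in the spirit of VisoGender: score an occupation image against
# captions that differ only in the pronoun. (VisoGender's own protocol and
# templates differ; this just illustrates the comparison.)
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("doctor_and_patient.jpg")  # hypothetical scene image
captions = ["The doctor and her patient", "The doctor and his patient"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))  # skew reveals resolution bias
```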
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Voices of Her: Analyzing Gender Differences in the AI Publication World [26.702520904075044]
We identify several gender differences using the AI Scholar dataset of 78K researchers in the field of AI.
Female first-authored papers show distinct linguistic styles, such as longer text, more positive-emotion words, and catchier titles.
Our analysis provides a window into the current demographic trends in our AI community, and encourages more gender equality and diversity in the future.
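A toy version of this linguistic comparison is sketched below; the word list and records are placeholders, not the paper's lexicon or the AI Scholar dataset.

```python
# Toy version of the linguistic comparison: measure title length and
# positive-emotion word counts by first-author gender (the word list and
# records are placeholders, not the paper's lexicon or dataset).
POSITIVE = {"novel", "exciting", "promising", "effective", "strong"}

papers = [
    {"first_author_gender": "female", "title": "A Promising and Effective Approach"},
    {"first_author_gender": "male", "title": "On the Complexity of Search"},
]

for paper in papers:
    words = paper["title"].lower().split()
    pos = sum(w.strip(".,") in POSITIVE for w in words)
    print(paper["first_author_gender"], len(words), "words,", pos, "positive")
```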
arXiv Detail & Related papers (2023-05-24T00:40:49Z)
- Smiling Women Pitching Down: Auditing Representational and Presentational Gender Biases in Image Generative AI [0.6990493129893111]
We examined the prevalence of two occupational gender biases in 15,300 DALL-E 2 images spanning 153 occupations.
DALL-E 2 underrepresents women in male-dominated fields while overrepresenting them in female-dominated occupations.
Our computational algorithm auditing study demonstrates more pronounced representational and presentational biases in DALL-E 2 compared to Google Images.
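The counting step of such a representational audit might look like the sketch below, which compares the share of women in generated images per occupation against an external baseline; all numbers are made up for illustration.

```python
# Sketch of the counting step in a representational-bias audit: given
# per-image gender labels for each occupation, compare the share of women
# in the generated set with an external baseline (all numbers are made up).
from collections import Counter

# occupation -> gender labels assigned to its generated images
labels = {"electrician": ["man"] * 92 + ["woman"] * 8,
          "nurse": ["woman"] * 95 + ["man"] * 5}
baseline_share_women = {"electrician": 0.11, "nurse": 0.87}  # e.g., labour stats

for occ, tags in labels.items():
    counts = Counter(tags)
    share = counts["woman"] / sum(counts.values())
    gap = share - baseline_share_women[occ]
    print(f"{occ}: generated {share:.0%} women vs baseline "
          f"{baseline_share_women[occ]:.0%} (gap {gap:+.0%})")
```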
arXiv Detail & Related papers (2023-05-17T20:59:10Z)
- How well can Text-to-Image Generative Models understand Ethical Natural Language Interventions? [67.97752431429865]
We study the effect of adding ethical interventions on the diversity of the generated images.
Preliminary studies indicate that a large change in model predictions is triggered by certain phrases, such as 'irrespective of gender'.
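A minimal sketch of how such intervention variants might be constructed is shown below; the base prompt is hypothetical, and only the 'irrespective of gender' phrase comes from the abstract.

```python
# Illustrative prompt construction for an ethical-intervention test: the same
# base description with and without an appended intervention phrase. The base
# prompt is hypothetical; 'irrespective of gender' is quoted in the abstract.
base = "a photo of a software developer"
variants = [base, base + ", irrespective of gender"]

for prompt in variants:
    print(repr(prompt))
# Each variant is fed to the T2I model with the same seeds; the diversity of
# the depicted genders is then compared across the resulting image sets.
```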
arXiv Detail & Related papers (2022-10-27T07:32:39Z)