Inclusive content reduces racial and gender biases, yet non-inclusive content dominates popular culture
- URL: http://arxiv.org/abs/2405.06404v2
- Date: Tue, 19 Nov 2024 10:21:20 GMT
- Title: Inclusive content reduces racial and gender biases, yet non-inclusive content dominates popular culture
- Authors: Nouar AlDahoul, Hazem Ibrahim, Minsu Park, Talal Rahwan, Yasir Zaki
- Abstract summary: We use state-of-the-art machine learning models to classify over 300,000 images spanning more than five decades.
We find that racial minorities appear far less frequently than their White counterparts, and when they do appear, they are portrayed less prominently.
We also find that women are more likely to be portrayed with their full bodies, whereas men are more frequently presented with their faces.
- Score: 1.4204016278692333
- License:
- Abstract: Images are often regarded as representations of perceived reality. As such, racial and gender biases in popular culture and visual media could play a critical role in shaping people's perceptions of society. While previous research has made significant progress in exploring the frequency of, and discrepancies in, racial and gender group appearances in visual media, it has largely overlooked important nuances in how these groups are portrayed, as it lacked the ability to systematically capture such complexities at scale over time. To address this gap, we examine two media forms with differing target audiences, namely fashion magazines and movie posters. Accordingly, we collect a large dataset comprising over 300,000 images spanning more than five decades and utilize state-of-the-art machine learning models to classify not only race and gender but also the posture, expressed emotional state, and body composition of individuals featured in each image. We find that racial minorities appear far less frequently than their White counterparts, and when they do appear, they are portrayed less prominently. We also find that women are more likely to be portrayed with their full bodies, whereas men are more frequently presented with their faces. Finally, through a series of survey experiments, we find evidence that exposure to inclusive content can help reduce biases in perceptions of minorities, while racially and gender-homogenized content may reinforce and amplify such biases. Taken together, our findings highlight that racial and gender biases in visual media remain pervasive, potentially exacerbating existing stereotypes and inequalities.
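The abstract does not name the specific models used for annotation. As a rough, hypothetical sketch of the kind of per-image labelling it describes, the snippet below uses the open-source deepface library to estimate apparent race, gender, and emotion for detected faces, with face area as a crude proxy for prominence; posture and body composition would need a separate pose-estimation model, and the directory name is a placeholder.

```python
# Hypothetical sketch of per-image demographic/emotion annotation (not the paper's pipeline).
from pathlib import Path

import pandas as pd
from deepface import DeepFace  # pip install deepface

def annotate_images(image_dir: str) -> pd.DataFrame:
    records = []
    for path in Path(image_dir).glob("*.jpg"):            # hypothetical folder of covers/posters
        try:
            faces = DeepFace.analyze(
                img_path=str(path),
                actions=["race", "gender", "emotion"],     # per-face attribute estimates
                enforce_detection=False,                    # keep images with no clear face
            )
        except ValueError:
            continue                                        # skip unreadable images
        for face in faces:                                  # recent deepface versions return one dict per face
            records.append({
                "image": path.name,
                "race": face.get("dominant_race"),
                "gender": face.get("dominant_gender"),
                "emotion": face.get("dominant_emotion"),
                "face_area": face["region"]["w"] * face["region"]["h"],  # crude prominence proxy
            })
    return pd.DataFrame(records)

# df = annotate_images("fashion_magazine_covers/")          # placeholder path
# df.groupby("race")["face_area"].describe()                # frequency and prominence by group
```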
Related papers
- A Longitudinal Analysis of Racial and Gender Bias in New York Times and Fox News Images and Articles [2.482116411483087]
We use a dataset of 123,337 images and 441,321 online news articles from the New York Times (NYT) and Fox News (Fox).
We examine the frequency and prominence of appearance of racial and gender groups in images embedded in news articles.
We find that NYT generally features more images of racial minority groups than Fox.
arXiv Detail & Related papers (2024-10-29T09:42:54Z)
- Intertwined Biases Across Social Media Spheres: Unpacking Correlations in Media Bias Dimensions [12.588239777597847]
Media bias significantly shapes public perception by reinforcing stereotypes and exacerbating societal divisions.
We introduce a novel dataset collected from YouTube and Reddit over the past five years.
Our dataset includes automated annotations for YouTube content across a broad spectrum of bias dimensions.
arXiv Detail & Related papers (2024-08-27T21:03:42Z)
- Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models [50.40276881893513]
This study introduces Spoken Stereoset, a dataset specifically designed to evaluate social biases in Speech Large Language Models (SLLMs).
By examining how different models respond to speech from diverse demographic groups, we aim to identify these biases.
The findings indicate that while most models show minimal bias, some still exhibit slightly stereotypical or anti-stereotypical tendencies.
arXiv Detail & Related papers (2024-08-14T16:55:06Z)
- AI-generated faces influence gender stereotypes and racial homogenization [1.6647208383676708]
We document significant biases in Stable Diffusion across six races, two genders, 32 professions, and eight attributes.
This analysis reveals significant racial homogenization, e.g., depicting nearly all Middle Eastern men as bearded, brown-skinned, and wearing traditional attire.
We propose debiasing solutions that allow users to specify the desired distributions of race and gender when generating images.
arXiv Detail & Related papers (2024-02-01T20:32:14Z)
- Understanding Divergent Framing of the Supreme Court Controversies: Social Media vs. News Outlets [56.67097829383139]
We focus on the nuanced distinctions in framing of social media and traditional media outlets concerning a series of U.S. Supreme Court rulings.
We observe significant polarization in the news media's treatment of affirmative action and abortion rights, whereas the topic of student loans tends to exhibit a greater degree of consensus.
arXiv Detail & Related papers (2023-09-18T06:40:21Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes (a minimal probe of this kind is sketched below).
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
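As a hedged illustration of the kind of vision-language probe such a benchmark relies on (not VisoGender's actual protocol or data), the sketch below scores one image against two captions that differ only in the pronoun, using the public CLIP model from Hugging Face; the image URL and captions are placeholders.

```python
# Hypothetical pronoun-resolution probe in the spirit of VisoGender (not the benchmark itself).
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "https://example.com/doctor_and_patient.jpg"      # placeholder image URL
image = Image.open(requests.get(url, stream=True).raw)

captions = [
    "The doctor and her patient",                        # placeholder caption pair
    "The doctor and his patient",
]
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)

# A large, systematic gap between the two probabilities across many images would
# indicate a gendered prior in how the model resolves the pronoun.
print(dict(zip(captions, probs[0].tolist())))
```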
- Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models [6.92043136971035]
We investigate how multimodal models handle diverse gender identities.
We find certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised.
Addressing these issues could pave the way for a future where change is led by the affected community.
arXiv Detail & Related papers (2023-05-26T16:28:49Z)
- Fairness in AI Systems: Mitigating gender bias from language-vision models [0.913755431537592]
We study the extent of the impact of gender bias in existing datasets.
We propose a methodology to mitigate its impact in caption based language vision models.
arXiv Detail & Related papers (2023-05-03T04:33:44Z)
- Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find that a broad range of ordinary prompts produces stereotypes, including prompts that simply mention traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z)
- Studying Bias in GANs through the Lens of Race [91.95264864405493]
We study how the performance and evaluation of generative image models are impacted by the racial composition of their training datasets.
Our results show that the racial composition of generated images successfully preserves that of the training data.
However, we observe that truncation, a technique used to generate higher quality images during inference, exacerbates racial imbalances in the data (the general truncation trick is sketched below).
arXiv Detail & Related papers (2022-09-06T22:25:56Z)
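The truncation trick referenced above trades sample diversity for fidelity by keeping latents close to the mode of the prior, which is one way it can over-represent whatever groups dominate the training data. Below is a generic NumPy sketch of rejection-based truncation, not the paper's exact implementation.

```python
# Generic truncation trick for GAN latents (illustrative only): resample any latent
# coordinate whose magnitude exceeds a threshold, so samples stay near the mode of
# N(0, I). Lower thresholds give higher fidelity but less diversity, which can
# over-represent the majority group in the training data.
import numpy as np

def truncated_latents(n: int, dim: int, threshold: float = 0.7, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, dim))
    out_of_range = np.abs(z) > threshold
    while out_of_range.any():                    # resample until every coordinate is in range
        z[out_of_range] = rng.standard_normal(out_of_range.sum())
        out_of_range = np.abs(z) > threshold
    return z

# z = truncated_latents(16, 512)                 # e.g. feed into generator(z)
```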
- Gender bias in magazines oriented to men and women: a computational approach [58.720142291102135]
We compare the content of a women-oriented magazine with that of a men-oriented one, both produced by the same editorial group over a decade.
With Topic Modelling techniques, we identify the main themes discussed in the magazines and quantify how much the presence of these topics differs between magazines over time (a sketch of this kind of analysis follows below).
Our results show that the frequencies of appearance of the topics Family, Business, and Women as sex objects exhibit an initial bias that tends to disappear over time.
arXiv Detail & Related papers (2020-11-24T14:02:49Z)
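As a hypothetical sketch of the kind of topic-modelling comparison described in the last entry, the snippet below fits a scikit-learn LDA model to a bag-of-words corpus and averages topic prevalence by magazine and year; the articles, topic count, and metadata are placeholders, not the study's data.

```python
# Hypothetical topic-modelling sketch (scikit-learn LDA); corpus and metadata are placeholders.
import pandas as pd
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# One row per article with its text, source magazine, and year (all placeholder values).
articles = pd.DataFrame({
    "text": [
        "placeholder article about family life and home",
        "placeholder article about business careers and money",
        "placeholder article about fashion and celebrities",
    ],
    "magazine": ["women_oriented", "men_oriented", "women_oriented"],
    "year": [2005, 2005, 2010],
})

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(articles["text"])

lda = LatentDirichletAllocation(n_components=3, random_state=0)  # topic count is arbitrary here
doc_topics = lda.fit_transform(counts)                           # document-by-topic weights

# Average topic prevalence per magazine and year to compare how topics diverge over time.
topic_cols = [f"topic_{k}" for k in range(doc_topics.shape[1])]
weights = pd.DataFrame(doc_topics, columns=topic_cols)
prevalence = (
    pd.concat([articles[["magazine", "year"]], weights], axis=1)
    .groupby(["magazine", "year"])
    .mean()
)
print(prevalence)
```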
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.