Inclusive content reduces racial and gender biases, yet non-inclusive content dominates popular media outlets
- URL: http://arxiv.org/abs/2405.06404v1
- Date: Fri, 10 May 2024 11:34:47 GMT
- Title: Inclusive content reduces racial and gender biases, yet non-inclusive content dominates popular media outlets
- Authors: Nouar AlDahoul, Hazem Ibrahim, Minsu Park, Talal Rahwan, Yasir Zaki
- Abstract summary: This study examines the manner in which racial and gender groups are portrayed in popular media imagery.
We collect over 300,000 images spanning over five decades and utilize state-of-the-art machine learning models.
We find that racial minorities appear far less frequently than their White counterparts.
We also find that women are more likely to be portrayed with their full bodies in images, whereas men are more frequently presented with their faces.
- Score: 1.4204016278692333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Images are often termed representations of perceived reality. As such, racial and gender biases in popular media imagery could play a vital role in shaping people's perceptions of society. While inquiries into such biases have examined the frequency at which different racial and gender groups appear in different forms of media, the literature still lacks a large-scale longitudinal study that further examines the manner in which these groups are portrayed. To fill this gap, we examine three media forms, namely fashion magazines, movie posters, and advertisements. To do so, we collect a large dataset comprising over 300,000 images spanning over five decades and utilize state-of-the-art machine learning models to not only classify race and gender but also identify the posture, emotional state, and body composition of the person featured in each image. We find that racial minorities appear far less frequently than their White counterparts, and when they do appear, they are portrayed less prominently and tend to convey more negative emotions. We also find that women are more likely to be portrayed with their full bodies in images, whereas men are more frequently presented with their faces. This disparity exemplifies face-ism, where emphasizing faces over bodies has been linked to perceptions of higher competence and intelligence. Finally, through a series of survey experiments, we show that exposure to inclusive content, rather than racially and gender-homogenized content, significantly reduces perception biases towards minorities in areas such as household income, hiring merit, beauty standards, leadership positions, and the representation of women in the workplace. Taken together, our findings demonstrate that racial and gender biases in media remain an ongoing problem that may exacerbate existing stereotypes.
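To illustrate the kind of per-image annotation pipeline the abstract describes, here is a minimal sketch using the open-source deepface library. The library choice, function names, and settings are assumptions for illustration only; the paper does not specify which classification models the authors used.

```python
# Minimal sketch (assumption: deepface is a stand-in for the paper's unnamed models).
# Tags each image with coarse gender, race, and emotion labels plus a face bounding box.
from deepface import DeepFace


def annotate_image(path: str) -> list[dict]:
    """Return one record per detected face with coarse demographic labels."""
    faces = DeepFace.analyze(
        img_path=path,
        actions=["gender", "race", "emotion"],
        enforce_detection=False,  # do not raise on images without a clearly detected face
    )
    return [
        {
            "gender": f["dominant_gender"],
            "race": f["dominant_race"],
            "emotion": f["dominant_emotion"],
            # Face bounding box; comparing face area to full-body area is one way a
            # prominence / face-ism measure could be approximated.
            "box": f["region"],
        }
        for f in faces
    ]


# Example usage on a hypothetical file name:
# records = annotate_image("poster_1987.jpg")
```

Aggregating such records by year and media form would support the kind of longitudinal frequency and prominence comparisons reported in the abstract, though the authors' actual pipeline may differ.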
Related papers
- A Longitudinal Analysis of Racial and Gender Bias in New York Times and Fox News Images and Articles [2.482116411483087]
We use a dataset of 123,337 images and 441,321 online news articles from New York Times (NYT) and Fox News (Fox)
We examine the frequency and prominence of appearance of racial and gender groups in images embedded in news articles.
We find that NYT generally features more images of racial minority groups than Fox.
arXiv Detail & Related papers (2024-10-29T09:42:54Z) - Spoken Stereoset: On Evaluating Social Bias Toward Speaker in Speech Large Language Models [50.40276881893513]
This study introduces Spoken Stereoset, a dataset specifically designed to evaluate social biases in Speech Large Language Models (SLLMs)
By examining how different models respond to speech from diverse demographic groups, we aim to identify these biases.
The findings indicate that while most models show minimal bias, some still exhibit slightly stereotypical or anti-stereotypical tendencies.
arXiv Detail & Related papers (2024-08-14T16:55:06Z) - AI-generated faces influence gender stereotypes and racial homogenization [1.6647208383676708]
We document significant biases in Stable Diffusion across six races, two genders, 32 professions, and eight attributes.
This analysis reveals significant racial homogenization, for example depicting nearly all Middle Eastern men as dark-skinned, bearded, and wearing a traditional headdress.
Using a preregistered experiment, we show that being presented with inclusive AI-generated faces reduces people's racial and gender biases, while being presented with non-inclusive ones increases such biases.
arXiv Detail & Related papers (2024-02-01T20:32:14Z) - Understanding Divergent Framing of the Supreme Court Controversies: Social Media vs. News Outlets [56.67097829383139]
We focus on the nuanced distinctions in framing of social media and traditional media outlets concerning a series of U.S. Supreme Court rulings.
We observe significant polarization in the news media's treatment of affirmative action and abortion rights, whereas the topic of student loans tends to exhibit a greater degree of consensus.
arXiv Detail & Related papers (2023-09-18T06:40:21Z) - VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z) - Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models [6.92043136971035]
We investigate how multimodal models handle diverse gender identities.
We find certain non-cisgender identities are consistently (mis)represented as less human, more stereotyped and more sexualised.
Such community-informed improvements could pave the way for a future where change is led by the affected community.
arXiv Detail & Related papers (2023-05-26T16:28:49Z) - Fairness in AI Systems: Mitigating gender bias from language-vision models [0.913755431537592]
We study the extent of the impact of gender bias in existing datasets.
We propose a methodology to mitigate its impact in caption based language vision models.
arXiv Detail & Related papers (2023-05-03T04:33:44Z) - Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale [61.555788332182395]
We investigate the potential for machine learning models to amplify dangerous and complex stereotypes.
We find that a broad range of ordinary prompts produces stereotypes, including prompts that simply mention traits, descriptors, occupations, or objects.
arXiv Detail & Related papers (2022-11-07T18:31:07Z) - Are Commercial Face Detection Models as Biased as Academic Models? [64.71318433419636]
We compare academic and commercial face detection systems, specifically examining robustness to noise.
We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness.
We conclude that commercial models are always at least as biased as the academic models.
arXiv Detail & Related papers (2022-01-25T02:21:42Z) - Gender bias in magazines oriented to men and women: a computational approach [58.720142291102135]
We compare the content of a women-oriented magazine with that of a men-oriented one, both produced by the same editorial group over a decade.
With topic modelling techniques we identify the main themes discussed in the magazines and quantify how much the presence of these topics differs between magazines over time (a minimal sketch of such a pipeline appears after this list).
Our results show that the frequencies of the topics Family, Business, and Women as sex objects present an initial bias that tends to disappear over time.
arXiv Detail & Related papers (2020-11-24T14:02:49Z) - Computational appraisal of gender representativeness in popular movies [0.0]
This article illustrates how automated computational methods may be used to scale up such empirical observations.
We specifically apply a face and gender detection algorithm on a broad set of popular movies spanning more than three decades to carry out a large-scale appraisal of the on-screen presence of women and men.
arXiv Detail & Related papers (2020-09-16T13:15:11Z)
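As referenced in the magazine-comparison entry above, the following is a minimal, hypothetical sketch of a topic-modelling pipeline using scikit-learn's LDA. The corpus, preprocessing, and model choices of the cited paper are not reproduced here; every name below is illustrative.

```python
# Minimal sketch (assumption: plain LDA over raw article text stands in for the
# cited paper's topic-modelling pipeline). Returns per-document topic shares and
# the top words characterizing each topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation


def topic_shares(documents: list[str], n_topics: int = 10):
    """Fit LDA and return (doc-topic proportions, top words per topic)."""
    vectorizer = CountVectorizer(stop_words="english", max_features=5000)
    counts = vectorizer.fit_transform(documents)

    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topic = lda.fit_transform(counts)  # each row sums to ~1: topic share per document

    vocab = vectorizer.get_feature_names_out()
    top_words = [
        [vocab[i] for i in component.argsort()[-10:][::-1]]
        for component in lda.components_
    ]
    return doc_topic, top_words


# Comparing mean topic shares per magazine per year would give a coarse version of
# the "how much topic presence differs over time" measurement described above.
```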
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.