Gender bias in magazines oriented to men and women: a computational
approach
- URL: http://arxiv.org/abs/2011.12096v1
- Date: Tue, 24 Nov 2020 14:02:49 GMT
- Title: Gender bias in magazines oriented to men and women: a computational
approach
- Authors: Diego Kozlowski, Gabriela Lozano, Carla M. Felcher, Fernando Gonzalez
and Edgar Altszyler
- Abstract summary: We compare the content of a women-oriented magazine with that of a men-oriented one, both produced by the same editorial group over a decade.
With Topic Modelling techniques we identify the main themes discussed in the magazines and quantify how much the presence of these topics differs between magazines over time.
Our results show that the frequencies of the topics Family, Business and Women as sex objects present an initial bias that tends to disappear over time.
- Score: 58.720142291102135
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cultural products are a source to acquire individual values and behaviours.
Therefore, the differences in the content of the magazines aimed specifically
at women or men are a means to create and reproduce gender stereotypes. In this
study, we compare the content of a women-oriented magazine with that of a
men-oriented one, both produced by the same editorial group, over a decade
(2008-2018). With Topic Modelling techniques we identify the main themes
discussed in the magazines and quantify how much the presence of these topics
differs between magazines over time. Then, we perform a word-frequency
analysis to validate this methodology and to extend the analysis to other
subjects that did not emerge automatically. Our results show that the
frequencies of the topics Family, Business and Women as sex objects present an
initial bias that tends to disappear over time. Conversely, for the Fashion and
Science topics, the initial differences between the two magazines are
maintained.
Besides, we show that in 2012 the content associated with horoscopes increased
in the women-oriented magazine, generating a new gap that remained open over
time. We also show a strong increase in the use of words associated with
feminism since 2015, and specifically of the word abortion in 2018. Overall,
these computational tools allowed us to analyse more than 24,000 articles. To
the best of our knowledge, this is the first study to compare magazines on such
a large dataset, a task that would have been prohibitive with manual content
analysis methodologies.
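The word-frequency analysis described in the abstract can be sketched as follows. The toy articles, the tracked term, and the per-year normalisation are illustrative assumptions for this sketch, not the authors' actual data or code; the real study covers more than 24,000 articles from 2008-2018.

```python
from collections import Counter

# Toy stand-in for the dated magazine articles as (year, text) pairs;
# the real corpus spans 2008-2018 and two magazines.
articles = [
    (2014, "fashion and style tips for the new season"),
    (2015, "feminism and the new wave of activism"),
    (2015, "why feminism matters in the workplace"),
    (2018, "the abortion debate and feminism in the streets"),
]

def term_frequency_by_year(articles, term):
    """Relative frequency of `term` per year: occurrences / total tokens."""
    counts, totals = Counter(), Counter()
    for year, text in articles:
        tokens = text.lower().split()
        counts[year] += tokens.count(term)
        totals[year] += len(tokens)
    return {year: counts[year] / totals[year] for year in sorted(totals)}

freqs = term_frequency_by_year(articles, "feminism")
print(freqs)  # frequency is 0 in 2014 and positive in 2015 and 2018
```

Tracking such per-year relative frequencies for a chosen term is the kind of quantity the paper compares between the two magazines to detect gaps opening or closing over time.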
Related papers
- Automatic Classification of News Subjects in Broadcast News: Application to a Gender Bias Representation Analysis [1.4100823284870105]
This paper introduces a computational framework designed to delineate gender distribution biases in topics covered by French TV and radio news.
We transcribe a dataset of 11.7k hours, broadcasted in 2023 on 21 French channels.
We show that women are notably underrepresented in subjects such as sports, politics and conflicts.
arXiv Detail & Related papers (2024-07-19T10:15:45Z)
- Inclusivity in Large Language Models: Personality Traits and Gender Bias in Scientific Abstracts [49.97673761305336]
We evaluate three large language models (LLMs) for their alignment with human narrative styles and potential gender biases.
Our findings indicate that, while these models generally produce text closely resembling human-authored content, variations in stylistic features suggest significant gender biases.
arXiv Detail & Related papers (2024-06-27T19:26:11Z)
- Diverse, but Divisive: LLMs Can Exaggerate Gender Differences in Opinion Related to Harms of Misinformation [8.066880413153187]
This paper examines whether a large language model (LLM) can reflect the views of various groups when assessing the harms of misinformation.
We present the TopicMisinfo dataset, containing 160 fact-checked claims from diverse topics.
We find that GPT 3.5-Turbo reflects empirically observed gender differences in opinion but amplifies the extent of these differences.
arXiv Detail & Related papers (2024-01-29T20:50:28Z)
- Multilingual Text-to-Image Generation Magnifies Gender Stereotypes and Prompt Engineering May Not Help You [64.74707085021858]
We show that multilingual models suffer from significant gender biases just as monolingual models do.
We propose a novel benchmark, MAGBIG, intended to foster research on gender bias in multilingual models.
Our results show that not only do models exhibit strong gender biases but they also behave differently across languages.
arXiv Detail & Related papers (2024-01-29T12:02:28Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Auditing Gender Presentation Differences in Text-to-Image Models [54.16959473093973]
We study how gender is presented differently in text-to-image models.
By probing gender indicators in the input text, we quantify the frequency differences of presentation-centric attributes.
We propose an automatic method to estimate such differences.
arXiv Detail & Related papers (2023-02-07T18:52:22Z)
- Investigating writing style as a contributor to gender gaps in science and technology [0.0]
We find significant differences in writing style by gender, with women using more involved features in their writing.
Papers and patents with more involved features also tend to be cited more by women.
Our findings suggest that scientific text is not devoid of personal character, which could contribute to bias in evaluation.
arXiv Detail & Related papers (2022-04-28T22:33:36Z)
- GenderedNews: Une approche computationnelle des écarts de représentation des genres dans la presse française (A computational approach to gender representation gaps in the French press) [0.0]
We present GenderedNews (https://gendered-news.imag.fr), an online dashboard that gives weekly measures of gender imbalance in the French online press.
We use Natural Language Processing (NLP) methods to quantify gender inequalities in the media.
We describe the data collected daily (seven main titles of French online news media) and the methodology behind our metrics.
arXiv Detail & Related papers (2022-02-11T15:16:49Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.