Uncovering Gender Bias in Media Coverage of Politicians with Machine
Learning
- URL: http://arxiv.org/abs/2005.07734v1
- Date: Fri, 15 May 2020 18:37:56 GMT
- Title: Uncovering Gender Bias in Media Coverage of Politicians with Machine
Learning
- Authors: Susan Leavy
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents research uncovering systematic gender bias in the
representation of political leaders in the media, using artificial
intelligence. Newspaper coverage of Irish ministers over a fifteen-year period
was gathered and analysed with natural language processing techniques and
machine learning. Findings demonstrate evidence of gender bias in how female
politicians were portrayed, in the kinds of policies they were associated
with, and in how their performance as political leaders was evaluated. This
paper also sets out a methodology whereby media content may be analysed on a
large scale utilising techniques from artificial intelligence within a
theoretical framework founded in gender theory and feminist linguistics.
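
The abstract does not spell out the pipeline, but one common way to operationalise this kind of large-scale analysis is to train a classifier to distinguish coverage of female and male politicians and then inspect its most discriminative features. The following is a minimal sketch of that idea in Python with scikit-learn, using a toy corpus and invented labels; it illustrates the general approach, not the paper's actual method or data.

```python
# Minimal sketch: surface gendered language patterns in news coverage by
# training a linear classifier to separate articles about female vs. male
# politicians, then reading off its most discriminative features.
# The toy corpus and labels below are illustrative only; the paper's
# dataset (15 years of Irish newspaper coverage) is not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

articles = [
    "The minister outlined her budget priorities for health and childcare.",
    "The minister defended his record on finance and infrastructure spending.",
    "She was praised for her style as she arrived at the department.",
    "He was praised for his tough negotiating stance in Brussels.",
]
labels = [1, 0, 1, 0]  # 1 = coverage of a female minister, 0 = male (toy labels)

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
X = vectorizer.fit_transform(articles)

clf = LogisticRegression().fit(X, labels)

# Features with the largest positive weights are most associated with
# coverage of female ministers; the most negative with male ministers.
feature_names = np.array(vectorizer.get_feature_names_out())
order = np.argsort(clf.coef_[0])
print("Most 'male-coverage' features:  ", feature_names[order[:5]])
print("Most 'female-coverage' features:", feature_names[order[-5:]])
```

On a real corpus, systematically gendered features (e.g. appearance or family terms weighted toward one class) are the starting point for the kind of qualitative analysis the paper grounds in gender theory and feminist linguistics.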
Related papers
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents the AmbGIMT benchmark (Gender-Inclusive Machine Translation with Ambiguous attitude words).
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- Uncovering Political Bias in Emotion Inference Models: Implications for sentiment analysis in social science research [0.0]
This paper investigates the presence of political bias in machine learning models used for sentiment analysis (SA) in social science research.
We conducted a bias audit on a Polish sentiment analysis model developed in our lab.
Our findings indicate that annotations by human raters propagate political biases into the model's predictions.
arXiv Detail & Related papers (2024-07-18T20:31:07Z)
- Finding the white male: The prevalence and consequences of algorithmic gender and race bias in political Google searches [0.0]
This article proposes and tests a framework of algorithmic representation of minoritized groups in a series of four studies.
First, two algorithm audits of political image searches delineate how search engines reflect and uphold structural inequalities by under- and misrepresenting women and non-white politicians.
Second, two online experiments show that these biases in algorithmic representation in turn distort perceptions of the political reality and actively reinforce a white and masculinized view of politics.
arXiv Detail & Related papers (2024-05-01T05:57:03Z)
- Unveiling Gender Bias in Terms of Profession Across LLMs: Analyzing and Addressing Sociological Implications [0.0]
The study examines existing research on gender bias in AI language models and identifies gaps in the current knowledge.
The findings shed light on gendered word associations, language usage, and biased narratives present in the outputs of Large Language Models.
The paper presents strategies for reducing gender bias in LLMs, including algorithmic approaches and data augmentation techniques.
arXiv Detail & Related papers (2023-07-18T11:38:45Z)
- VisoGender: A dataset for benchmarking gender bias in image-text pronoun resolution [80.57383975987676]
VisoGender is a novel dataset for benchmarking gender bias in vision-language models.
We focus on occupation-related biases within a hegemonic system of binary gender, inspired by Winograd and Winogender schemas.
We benchmark several state-of-the-art vision-language models and find that they demonstrate bias in resolving binary gender in complex scenes.
arXiv Detail & Related papers (2023-06-21T17:59:51Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- GenderedNews: Une approche computationnelle des écarts de représentation des genres dans la presse française (A Computational Approach to Gender Representation Gaps in the French Press) [0.0]
We present GenderedNews (https://gendered-news.imag.fr), an online dashboard that gives weekly measures of gender imbalance in the French online press.
We use Natural Language Processing (NLP) methods to quantify gender inequalities in the media.
We describe the data collected daily (seven main titles of French online news media) and the methodology behind our metrics.
arXiv Detail & Related papers (2022-02-11T15:16:49Z)
- Gender stereotypes in the mediated personalization of politics: Empirical evidence from a lexical, syntactic and sentiment analysis [2.7071541526963805]
We show that political personalization in Italy is more detrimental to women than to men.
Women politicians are covered in a more negative tone than their male counterparts when personal details are reported.
The major contribution to the observed gender differences comes from online news rather than print news.
arXiv Detail & Related papers (2022-02-07T11:40:44Z)
- Towards Understanding and Mitigating Social Biases in Language Models [107.82654101403264]
Large-scale pretrained language models (LMs) have the potential to manifest undesirable representational biases.
We propose steps towards mitigating social biases during text generation.
Our empirical results and human evaluation demonstrate effectiveness in mitigating bias while retaining crucial contextual information.
arXiv Detail & Related papers (2021-06-24T17:52:43Z)
- Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models [104.41668491794974]
We quantify the usage of adjectives and verbs generated by language models surrounding the names of politicians as a function of their gender.
We find that while some words, such as "dead" and "designated", are associated with both male and female politicians, a few specific words, such as "beautiful" and "divorced", are predominantly associated with female politicians (a minimal sketch of this counting approach appears after this list).
arXiv Detail & Related papers (2021-04-15T15:03:26Z)
- Multi-Dimensional Gender Bias Classification [67.65551687580552]
Machine learning models can inadvertently learn socially undesirable patterns when training on gender biased text.
We propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions.
Using this fine-grained framework, we automatically annotate eight large-scale datasets with gender information.
arXiv Detail & Related papers (2020-05-01T21:23:20Z)
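
The adjective/verb counting approach described in "Quantifying Gender Bias Towards Politicians in Cross-Lingual Language Models" (above) can be sketched in a few lines. The toy generations and the tiny adjective lexicon below are illustrative assumptions, not the paper's actual prompts or word lists:

```python
# Minimal sketch: count which descriptive words co-occur with politician
# names, split by gender. The sentences stand in for language-model
# generations about politicians of known gender; a real study would use
# part-of-speech tagging rather than a fixed adjective lexicon.
from collections import Counter
import re

generations = {
    "female": [
        "Mary Robinson was a beautiful and determined leader.",
        "She was described as divorced and ambitious.",
    ],
    "male": [
        "John Bruton was a designated and determined negotiator.",
        "He was described as tough and ambitious.",
    ],
}

# Illustrative lexicon only (echoing words mentioned in the summary above).
ADJECTIVES = {"beautiful", "divorced", "determined", "designated", "tough", "ambitious"}

counts = {gender: Counter() for gender in generations}
for gender, texts in generations.items():
    for text in texts:
        tokens = re.findall(r"[a-z]+", text.lower())
        counts[gender].update(t for t in tokens if t in ADJECTIVES)

# Words with skewed counts across the two genders are candidate bias markers.
for gender, counter in counts.items():
    print(gender, counter.most_common())
```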