#ContextMatters: Advantages and Limitations of Using Machine Learning to
Support Women in Politics
- URL: http://arxiv.org/abs/2110.00116v1
- Date: Thu, 30 Sep 2021 22:55:49 GMT
- Title: #ContextMatters: Advantages and Limitations of Using Machine Learning to
Support Women in Politics
- Authors: Jacqueline Comer, Sam Work, Kory W Mathewson, Lana Cuthbertson, Kasey
Machin
- Abstract summary: ParityBOT was deployed across elections in Canada, the United States and New Zealand.
It was used to analyse and classify more than 12 million tweets directed at women candidates and counter toxic tweets with supportive ones.
We examine the rate of false negatives, where ParityBOT failed to pick up on insults directed at specific high profile women.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The United Nations identified gender equality as a Sustainable Development
Goal in 2015, recognizing the underrepresentation of women in politics as a
specific barrier to achieving gender equality. Political systems around the
world experience gender inequality across all levels of elected government as
fewer women run for office than men. This is due in part to online abuse,
particularly on social media platforms like Twitter, where women seeking or in
power tend to be targeted with more toxic maltreatment than their male
counterparts. In this paper, we present reflections on ParityBOT - the first
natural language processing-based intervention designed to affect online
discourse for women in politics for the better, at scale. Deployed across
elections in Canada, the United States and New Zealand, ParityBOT was used to
analyse and classify more than 12 million tweets directed at women candidates
and counter toxic tweets with supportive ones. From these elections we present
three case studies highlighting the current limitations of, and future research
and application opportunities for, using a natural language processing-based
system to detect online toxicity, specifically with regards to contextually
important microaggressions. We examine the rate of false negatives, where
ParityBOT failed to pick up on insults directed at specific high profile women,
which would be obvious to human users. We examine the unaddressed harms of
microaggressions and the potential of yet unseen damage they cause for women in
these communities, and for progress towards gender equality overall, in light
of these technological blindspots. This work concludes with a discussion on the
benefits of partnerships between nonprofit social groups and technology experts
to develop responsible, socially impactful approaches to addressing online
hate.
Related papers
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z)
- On the Use of Proxies in Political Ad Targeting [49.61009579554272]
We show that major political advertisers circumvented mitigations by targeting proxy attributes.
Our findings have crucial implications for the ongoing discussion on the regulation of political advertising.
arXiv Detail & Related papers (2024-10-18T17:15:13Z)
- GenderCARE: A Comprehensive Framework for Assessing and Reducing Gender Bias in Large Language Models [73.23743278545321]
Large language models (LLMs) have exhibited remarkable capabilities in natural language generation, but have also been observed to magnify societal biases.
GenderCARE is a comprehensive framework that encompasses innovative Criteria, bias Assessment, Reduction techniques, and Evaluation metrics.
arXiv Detail & Related papers (2024-08-22T15:35:46Z)
- Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words [85.48043537327258]
Existing machine translation gender bias evaluations are primarily focused on male and female genders.
This study presents AmbGIMT (Gender-Inclusive Machine Translation with Ambiguous attitude words), a benchmark for this setting.
We propose a novel process to evaluate gender bias based on the Emotional Attitude Score (EAS), which is used to quantify ambiguous attitude words.
arXiv Detail & Related papers (2024-07-23T08:13:51Z)
- A Holistic Indicator of Polarization to Measure Online Sexism [2.498836880652668]
The online trend of the manosphere and feminist discourse on social networks requires a holistic measure of the level of sexism in an online community.
This indicator is important for policymakers and moderators of online communities.
We build a model that can provide a comparable holistic indicator of toxicity targeted toward male and female identity and male and female individuals.
arXiv Detail & Related papers (2024-04-02T18:00:42Z)
- A Multilingual Perspective on Probing Gender Bias [0.0]
Gender bias is a form of systematic negative treatment that targets individuals based on their gender.
This thesis investigates the nuances of how gender bias is expressed through language and within language technologies.
arXiv Detail & Related papers (2024-03-15T21:35:21Z)
- Anti-Sexism Alert System: Identification of Sexist Comments on Social Media Using AI Techniques [0.0]
Sexist comments that are publicly posted in social media (newspaper comments, social networks, etc.) usually obtain a lot of attention and become viral, with consequent damage to the persons involved.
In this paper, we introduce an anti-sexism alert system based on natural language processing (NLP) and artificial intelligence (AI).
This system analyzes any public post, and decides if it could be considered a sexist comment or not.
arXiv Detail & Related papers (2023-11-28T19:48:46Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- Towards Understanding Gender-Seniority Compound Bias in Natural Language Generation [64.65911758042914]
We investigate how seniority impacts the degree of gender bias exhibited in pretrained neural generation models.
Our results show that GPT-2 amplifies bias by considering women as junior and men as senior more often than the ground truth in both domains.
These results suggest that NLP applications built using GPT-2 may harm women in professional capacities.
arXiv Detail & Related papers (2022-05-19T20:05:02Z)
- Quantifying Gender Biases Towards Politicians on Reddit [19.396806939258806]
Despite attempts to increase gender parity in politics, global efforts have struggled to ensure equal female representation.
This is likely tied to implicit gender biases against women in authority.
We present a comprehensive study of gender biases that appear in online political discussion.
arXiv Detail & Related papers (2021-12-22T16:39:14Z)
- 2020 U.S. Presidential Election: Analysis of Female and Male Users on Twitter [8.651122862855495]
Current literature mainly focuses on analyzing the content of tweets without considering the gender of users.
This research collects and analyzes a large number of tweets posted during the 2020 U.S. presidential election.
Our findings are based upon a wide range of topics, such as tax, climate change, and the COVID-19 pandemic.
arXiv Detail & Related papers (2021-08-21T01:31:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.