Black Feminist Musings on Algorithmic Oppression
- URL: http://arxiv.org/abs/2101.09869v2
- Date: Wed, 3 Feb 2021 01:54:26 GMT
- Title: Black Feminist Musings on Algorithmic Oppression
- Authors: Lelia Marie Hampton
- Abstract summary: This paper unapologetically reflects on the critical role that Black feminism can and should play in abolishing algorithmic oppression.
I draw upon feminist philosophical critiques of science and technology and discuss histories and continuities of scientific oppression against historically marginalized people.
I end by inviting you to envision and imagine the struggle to abolish algorithmic oppression by abolishing oppressive systems and shifting algorithmic development practices.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper unapologetically reflects on the critical role that Black feminism
can and should play in abolishing algorithmic oppression. Positioning
algorithmic oppression in the broader field of feminist science and technology
studies, I draw upon feminist philosophical critiques of science and technology
and discuss histories and continuities of scientific oppression against
historically marginalized people. Moreover, I examine the concepts of
invisibility and hypervisibility in oppressive technologies à la the
canonical double bind. Furthermore, I discuss what it means to call for
diversity as a solution to algorithmic violence, and I critique dialectics of
the fairness, accountability, and transparency community. I end by inviting you
to envision and imagine the struggle to abolish algorithmic oppression by
abolishing oppressive systems and shifting algorithmic development practices,
including engaging our communities in scientific processes, centering
marginalized communities in design, and consensual data and algorithmic
practices.
Related papers
- A multitask learning framework for leveraging subjectivity of annotators to identify misogyny [47.175010006458436]
We propose a multitask learning approach to enhance the performance of misogyny identification systems.
We incorporated diverse perspectives from annotators in our model design, considering gender and age across six profile groups.
This research advances content moderation and highlights the importance of embracing diverse perspectives to build effective online moderation systems.
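As a rough, hypothetical sketch of what such a multitask design could look like (a shared encoder with one prediction head per annotator profile group; all names and dimensions here are assumptions, not the authors' implementation):

```python
# Hypothetical sketch of a multitask misogyny classifier: a shared text
# encoder with one prediction head per annotator profile group (the
# summary mentions six groups defined by gender and age). Names and
# dimensions are illustrative assumptions, not the authors' model.
import torch
import torch.nn as nn

class MultitaskMisogynyClassifier(nn.Module):
    def __init__(self, feature_dim: int = 768, num_groups: int = 6):
        super().__init__()
        # Stand-in for features from a pretrained sentence encoder.
        self.shared = nn.Sequential(nn.Linear(feature_dim, 256), nn.ReLU())
        # One binary head per profile group, letting the model capture
        # each group's (subjective) labeling perspective as its own task.
        self.heads = nn.ModuleList(nn.Linear(256, 2) for _ in range(num_groups))

    def forward(self, features: torch.Tensor) -> list[torch.Tensor]:
        shared = self.shared(features)
        return [head(shared) for head in self.heads]

model = MultitaskMisogynyClassifier()
x = torch.randn(4, 768)             # a batch of 4 encoded posts
per_group_logits = model(x)         # six [4, 2] logit tensors
labels = torch.randint(0, 2, (4,))  # dummy labels, shared here for brevity
loss = sum(nn.functional.cross_entropy(lg, labels) for lg in per_group_logits)
```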
arXiv Detail & Related papers (2024-06-22T15:06:08Z)
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
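The summary does not spell out the auditing method; as a minimal sketch of the statistical question underneath such an audit (invented counts, and assuming the `statsmodels` package), one could compare delivery rates across two groups:

```python
# Minimal sketch of the statistical core of a delivery audit: test
# whether an education ad was delivered at different rates to two
# racial groups with comparable eligible audiences. All counts are
# invented; the paper's actual methodology is more involved.
from statsmodels.stats.proportion import proportions_ztest

delivered = [620, 480]    # impressions delivered to group A vs. group B
audiences = [1000, 1000]  # comparable eligible audience sizes

z_stat, p_value = proportions_ztest(delivered, audiences)
print(f"z = {z_stat:.2f}, p = {p_value:.4g}")
# A small p-value means the gap in delivery rates is unlikely under
# equal treatment, i.e., evidence of skewed delivery.
```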
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
- Why Algorithms Remain Unjust: Power Structures Surrounding Algorithmic Activity [0.0]
Reformists have failed to curtail algorithmic injustice because they ignore the power structure surrounding algorithms.
I argue that the reason Algorithmic Activity is unequal, undemocratic, and unsustainable is that the power structure shaping it is one of economic empowerment rather than social empowerment.
arXiv Detail & Related papers (2024-05-28T17:49:24Z)
- Data Feminism for AI [2.181420782258584]
In Data Feminism (2020), we offered seven principles for examining and challenging unequal power in data science.
Here, we present a rationale for why feminism remains deeply relevant for AI research, rearticulate the original principles of data feminism with respect to AI, and introduce two potential new principles related to environmental impact and consent.
These principles help to 1) account for the unequal, undemocratic, extractive, and exclusionary forces at work in AI research, development, and deployment; 2) identify and prevent predictable harms in advance of unsafe, discriminatory, or otherwise oppressive systems being released into the world; and 3) inspire creative, joyful, and collective ways to work toward a more equitable and sustainable world.
arXiv Detail & Related papers (2024-05-02T13:46:29Z)
- Finding the white male: The prevalence and consequences of algorithmic gender and race bias in political Google searches [0.0]
This article proposes and tests a framework of algorithmic representation of minoritized groups in a series of four studies.
First, two algorithm audits of political image searches delineate how search engines reflect and uphold structural inequalities by under- and misrepresenting women and non-white politicians.
Second, two online experiments show that these biases in algorithmic representation in turn distort perceptions of the political reality and actively reinforce a white and masculinized view of politics.
arXiv Detail & Related papers (2024-05-01T05:57:03Z)
- Algorithmic Fairness: A Tolerance Perspective [31.882207568746168]
This survey delves into the existing literature on algorithmic fairness, specifically highlighting its multifaceted social consequences.
We introduce a novel taxonomy based on 'tolerance', a term we define as the degree to which variations in fairness outcomes are acceptable.
Our systematic review covers diverse industries, revealing critical insights into the balance between algorithmic decision making and social equity.
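As a toy reading of the "tolerance" notion (my interpretation, not the survey's formalism), a fairness criterion can be treated as satisfied when inter-group disparity stays within an acceptable margin rather than demanding exact parity:

```python
# Toy reading of "tolerance": a fairness criterion is considered met
# when the disparity between groups stays within an acceptable margin
# epsilon, rather than demanding exact parity. Rates are invented.
def within_tolerance(rate_a: float, rate_b: float, epsilon: float) -> bool:
    """True if the gap between two groups' positive-decision rates
    falls inside the tolerated margin."""
    return abs(rate_a - rate_b) <= epsilon

print(within_tolerance(0.62, 0.58, epsilon=0.05))  # True: gap tolerated
print(within_tolerance(0.62, 0.48, epsilon=0.05))  # False: gap too large
```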
arXiv Detail & Related papers (2024-04-26T08:16:54Z)
- Tokenization Matters: Navigating Data-Scarce Tokenization for Gender Inclusive Language Technologies [75.85462924188076]
Gender-inclusive NLP research has documented the harmful limitations of gender binary-centric large language models (LLMs).
We find that misgendering is significantly influenced by Byte-Pair Encoding (BPE) tokenization.
We propose two techniques: (1) pronoun tokenization parity, a method to enforce consistent tokenization across gendered pronouns, and (2) utilizing pre-existing LLM pronoun knowledge to improve neopronoun proficiency.
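As a quick illustration of the BPE effect described above (assuming the Hugging Face `transformers` package; this is not the paper's experiment), one can inspect how a GPT-2 tokenizer splits binary pronouns versus neopronouns:

```python
# Illustration of the tokenization effect the summary describes: compare
# how many BPE subwords a GPT-2 tokenizer needs for binary pronouns
# versus neopronouns (the paper links fragmented neopronoun tokenization
# to misgendering). Assumes the `transformers` package is installed.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
for pronoun in ["he", "she", "they", "xe", "ze", "faer", "xirself"]:
    pieces = tokenizer.tokenize(pronoun)
    print(f"{pronoun!r} -> {pieces} ({len(pieces)} subword(s))")
# "Pronoun tokenization parity" (technique 1 above) would enforce that
# all gendered pronouns tokenize consistently, e.g., as single tokens.
```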
arXiv Detail & Related papers (2023-12-19T01:28:46Z)
- "I'm fully who I am": Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation [69.25368160338043]
Transgender and non-binary (TGNB) individuals disproportionately experience discrimination and exclusion from daily life.
We assess how the social reality surrounding experienced marginalization of TGNB persons contributes to and persists within Open Language Generation.
We introduce TANGO, a dataset of template-based real-world text curated from a TGNB-oriented community.
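As a toy sketch of how template-based probes like TANGO's are typically instantiated (templates, names, and pronoun sets here are illustrative, not drawn from the dataset):

```python
# Toy sketch of template-based probing in the spirit of TANGO: fill a
# gender-disclosure template with different identities and pronoun sets,
# then send each prompt to an open-ended generator and check whether its
# continuation uses the stated pronouns. Templates are illustrative,
# not drawn from the actual dataset.
TEMPLATE = "{name} is {identity} and uses {pronouns} pronouns. Yesterday,"

prompts = [
    TEMPLATE.format(name="Casey", identity="non-binary", pronouns="they/them"),
    TEMPLATE.format(name="Casey", identity="non-binary", pronouns="xe/xem"),
    TEMPLATE.format(name="Casey", identity="a trans woman", pronouns="she/her"),
]
for prompt in prompts:
    print(prompt)  # each prompt would seed a generation to be evaluated
```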
arXiv Detail & Related papers (2023-05-17T04:21:45Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Calling for a feminist revolt to decolonise data and algorithms in the age of Datification [2.0559497209595823]
Digital colonisation occupies the essence of the human mind, i.e., imagination and the imaginary.
Militant groups are imagining and designing alternative algorithms, datasets collection strategies and appropriation methods.
arXiv Detail & Related papers (2022-09-17T12:13:04Z)
- Algorithmic Fairness and Structural Injustice: Insights from Feminist Political Philosophy [2.28438857884398]
'Algorithmic fairness' aims to mitigate harmful biases in data-driven algorithms.
The perspectives of feminist political philosophers on social justice have been largely neglected.
This paper brings some key insights of feminist political philosophy to algorithmic fairness.
arXiv Detail & Related papers (2022-06-02T09:18:03Z)
- Towards decolonising computational sciences [0.0]
We see this struggle as requiring two basic steps.
Grappling with our fields' histories and heritage holds the key to avoiding the mistakes of the past.
We aspire for these fields to progress away from their stagnant, sexist, and racist shared past.
arXiv Detail & Related papers (2020-09-29T18:48:28Z)
- A Framework for the Computational Linguistic Analysis of Dehumanization [52.735780962665814]
We analyze discussions of LGBTQ people in the New York Times from 1986 to 2015.
We find increasingly humanizing descriptions of LGBTQ people over time.
The ability to analyze dehumanizing language at a large scale has implications for automatically detecting and understanding media bias as well as abusive language online.
arXiv Detail & Related papers (2020-03-06T03:02:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.