Finding the white male: The prevalence and consequences of algorithmic gender and race bias in political Google searches
- URL: http://arxiv.org/abs/2405.00335v1
- Date: Wed, 1 May 2024 05:57:03 GMT
- Title: Finding the white male: The prevalence and consequences of algorithmic gender and race bias in political Google searches
- Authors: Tobias Rohrbach, Mykola Makhortykh, Maryna Sydorova
- Abstract summary: This article proposes and tests a framework of algorithmic representation of minoritized groups in a series of four studies.
First, two algorithm audits of political image searches delineate how search engines reflect and uphold structural inequalities by under- and misrepresenting women and non-white politicians.
Second, two online experiments show that these biases in algorithmic representation in turn distort perceptions of the political reality and actively reinforce a white and masculinized view of politics.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Search engines like Google have become major information gatekeepers that use artificial intelligence (AI) to determine who and what voters find when searching for political information. This article proposes and tests a framework of algorithmic representation of minoritized groups in a series of four studies. First, two algorithm audits of political image searches delineate how search engines reflect and uphold structural inequalities by under- and misrepresenting women and non-white politicians. Second, two online experiments show that these biases in algorithmic representation in turn distort perceptions of the political reality and actively reinforce a white and masculinized view of politics. Together, the results have substantive implications for the scientific understanding of how AI technology amplifies biases in political perceptions and decision-making. The article contributes to ongoing public debates and cross-disciplinary research on algorithmic fairness and injustice.
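The audit logic the abstract describes can be made concrete with a small sketch. Assuming you already have hand-labeled top-k image search results per query and a ground-truth demographic baseline for the corresponding legislature (all names and numbers below are illustrative, not the paper's actual pipeline):

```python
# Minimal sketch of a representation audit. Assumes (a) top-k image search
# results per query, each hand-labeled with the depicted politician's
# perceived gender, and (b) the ground-truth share of women in the
# corresponding legislature. Everything here is a hypothetical stand-in.

from collections import Counter

def representation_ratio(result_labels, baseline_share, group="woman"):
    """Share of `group` among search results divided by its real-world share.

    A ratio below 1.0 indicates under-representation of the group
    relative to the ground-truth baseline.
    """
    counts = Counter(result_labels)
    observed_share = counts[group] / len(result_labels)
    return observed_share / baseline_share

# Example: top-10 results for a hypothetical "members of parliament" query,
# audited against a legislature that is 35% women.
labels = ["man", "man", "woman", "man", "man",
          "man", "woman", "man", "man", "man"]
print(representation_ratio(labels, baseline_share=0.35))  # ~0.57 -> under-represented
```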
Related papers
- Biased AI can Influence Political Decision-Making [64.9461133083473]
This paper presents two experiments investigating the effects of partisan bias in AI language models on political decision-making.
We found that participants exposed to politically biased models were significantly more likely to adopt opinions and make decisions aligning with the AI's bias.
arXiv Detail & Related papers (2024-10-08T22:56:00Z)
- Representation Bias in Political Sample Simulations with Large Language Models [54.48283690603358]
This study seeks to identify and quantify biases in simulating political samples with Large Language Models.
Using the GPT-3.5-Turbo model, we leverage data from the American National Election Studies, German Longitudinal Election Study, Zuobiao dataset, and China Family Panel Studies.
arXiv Detail & Related papers (2024-07-16T05:52:26Z)
- (Unfair) Norms in Fairness Research: A Meta-Analysis [6.395584220342517]
We conduct a meta-analysis of algorithmic fairness papers from two leading conferences on AI fairness and ethics.
Our investigation reveals two concerning trends: first, a US-centric perspective dominates fairness research.
Second, fairness studies exhibit a widespread reliance on binary codifications of human identity.
arXiv Detail & Related papers (2024-06-17T17:14:47Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches [0.0]
This survey article assesses and compares critiques of current fairness-enhancing technical interventions into machine learning (ML).
It draws from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies.
The article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
arXiv Detail & Related papers (2022-05-06T14:27:57Z)
- Fine-Grained Prediction of Political Leaning on Social Media with Unsupervised Deep Learning [0.9137554315375922]
We propose a novel unsupervised technique for learning fine-grained political leaning from social media posts.
Our results pave the way for the development of new and better unsupervised approaches for the detection of fine-grained political leaning.
arXiv Detail & Related papers (2022-02-23T09:18:13Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on an individual's gender, identity, or skin tone.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Algorithmic Amplification of Politics on Twitter [17.631887805091733]
We provide quantitative evidence from a massive-scale randomized experiment on the Twitter platform.
We studied Tweets by elected legislators from major political parties in 7 countries.
In 6 out of 7 countries studied, the mainstream political right enjoys higher algorithmic amplification than the mainstream political left.
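As a rough illustration of the quantity involved (a sketch, not the paper's actual estimator, which is more involved; the reach figures below are invented), amplification can be read as a reach ratio between personalized and chronological timelines:

```python
# Hedged sketch of an amplification ratio in the spirit of the Twitter
# experiment: compare a party's audience reach under the personalized
# (algorithmic) timeline with its reach under the reverse-chronological
# control timeline. All numbers are hypothetical.

def amplification_ratio(reach_algorithmic: float, reach_chronological: float) -> float:
    """Ratio of reach in algorithmic timelines to reach in chronological
    control timelines. Values above 1.0 indicate algorithmic amplification."""
    return reach_algorithmic / reach_chronological

# Hypothetical reach shares (fraction of users who saw the party's tweets):
print(amplification_ratio(0.24, 0.15))  # 1.6 -> amplified
print(amplification_ratio(0.12, 0.15))  # 0.8 -> de-amplified
```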
arXiv Detail & Related papers (2021-10-21T09:25:39Z)
- Beyond Algorithmic Bias: A Socio-Computational Interrogation of the Google Search by Image Algorithm [0.799536002595393]
We audit the algorithm by presenting it with more than 40,000 faces of all ages and more than four races.
We find that the algorithm reproduces white male patriarchal structures, often simplifying, stereotyping, and discriminating against women and non-white individuals.
arXiv Detail & Related papers (2021-05-26T21:40:43Z)
- The Matter of Chance: Auditing Web Search Results Related to the 2020 U.S. Presidential Primary Elections Across Six Search Engines [68.8204255655161]
We look at the text search results for "us elections", "donald trump", "joe biden" and "bernie sanders" queries on Google, Baidu, Bing, DuckDuckGo, Yahoo, and Yandex.
Our findings indicate substantial differences in the search results between search engines and multiple discrepancies within the results generated for different agents.
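One simple way to quantify such discrepancies (a sketch, not the paper's exact metric; the URLs below are placeholders) is the Jaccard overlap between the result sets two engines, or two agents on the same engine, return for the same query:

```python
# Minimal sketch: Jaccard overlap of top result URLs returned by two
# engines (or two agents) for the same query. A low value means the two
# result lists share few unique URLs. URLs are placeholders.

def jaccard(results_a, results_b):
    """Size of the intersection over size of the union of unique URLs."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if a | b else 1.0

google = ["nytimes.com/a", "cnn.com/b", "wikipedia.org/c"]
bing   = ["wikipedia.org/c", "foxnews.com/d", "cnn.com/b"]
print(jaccard(google, bing))  # 0.5 -> half the unique URLs are shared
```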
arXiv Detail & Related papers (2021-05-03T11:18:19Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
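The transect idea lends itself to a compact sketch: because matched pairs differ only in the manipulated attribute, a gap in error rates between the two arms can be read causally rather than as a confound. Everything below (the pair data, the classifier) is a hypothetical stand-in, not the paper's pipeline:

```python
# Hedged sketch of the "synthetic transect" idea: take matched image pairs
# that differ only in one protected attribute, run the classifier on both
# arms, and compare error rates. `classifier` is a placeholder callable.

def bias_gap(pairs, classifier, true_labels):
    """Difference in error rate between the two arms of matched pairs.

    Each pair is identical except for the manipulated attribute, so a
    nonzero gap points to the attribute itself as the cause.
    """
    errors_a = errors_b = 0
    for (img_a, img_b), label in zip(pairs, true_labels):
        errors_a += classifier(img_a) != label
        errors_b += classifier(img_b) != label
    n = len(true_labels)
    return errors_a / n - errors_b / n

# Toy usage with a dummy classifier that errs only on arm "B":
pairs = [(("face", "A"), ("face", "B"))] * 10
labels = ["smiling"] * 10
dummy = lambda img: "smiling" if img[1] == "A" else "neutral"
print(bias_gap(pairs, dummy, labels))  # -1.0 -> arm B fully misclassified
```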