Beyond Algorithmic Bias: A Socio-Computational Interrogation of the
Google Search by Image Algorithm
- URL: http://arxiv.org/abs/2105.12856v2
- Date: Sat, 12 Jun 2021 19:58:51 GMT
- Title: Beyond Algorithmic Bias: A Socio-Computational Interrogation of the
Google Search by Image Algorithm
- Authors: Orestis Papakyriakopoulos and Arwa Michelle Mboya
- Abstract summary: We audit the algorithm by presenting it with more than 40 thousand faces of all ages and more than four races.
We find that the algorithm reproduces white male patriarchal structures, often simplifying, stereotyping, and discriminating against females and non-white individuals.
- Score: 0.799536002595393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We perform a socio-computational interrogation of the Google Search by
Image algorithm, a main component of the Google search engine. We audit the
algorithm by presenting it with more than 40 thousand faces of all ages and more
than four races, and by collecting and analyzing the assigned labels with the
appropriate statistical tools. We find that the algorithm reproduces white male
patriarchal structures, often simplifying, stereotyping, and discriminating
against females and non-white individuals, while providing more diverse and positive
descriptions of white men. By drawing from Bourdieu's theory of cultural
reproduction, we link these results to the attitudes of the algorithm's
designers, owners, and the dataset the algorithm was trained on. We further
underpin the problematic nature of the algorithm by using the ethnographic
practice of studying-up: We show how the algorithm places individuals at the
top of the tech industry within the socio-cultural reality that they shaped,
many times creating biased representations of them. We claim that
social-theoretic frameworks such as these can contribute to improved
algorithmic accountability and algorithmic impact assessment, and can provide
additional, more critical depth in algorithmic bias and auditing studies.
Based on the analysis, we discuss the scientific and design implications and
provide suggestions for alternative ways to design just socioalgorithmic
systems.
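The audit described above collects the labels the algorithm assigns to faces from different demographic groups and analyzes them statistically. The abstract does not name a specific test, but one common choice for such label-distribution comparisons is a Pearson chi-square test of independence. The sketch below is illustrative only: the groups, labels, and counts are invented, not drawn from the paper's corpus.

```python
# Illustrative sketch of one audit step: test whether the labels an
# algorithm assigns are independent of demographic group. The data are
# hypothetical; the paper's actual corpus and statistical tooling are
# not reproduced here.
from collections import Counter
from itertools import product

# hypothetical (group, label) pairs collected from the algorithm
observations = [
    ("group_a", "professional"), ("group_a", "professional"),
    ("group_a", "smile"), ("group_b", "smile"),
    ("group_b", "smile"), ("group_b", "hairstyle"),
]

groups = sorted({g for g, _ in observations})
labels = sorted({l for _, l in observations})
counts = Counter(observations)
row = Counter(g for g, _ in observations)   # per-group totals
col = Counter(l for _, l in observations)   # per-label totals
n = len(observations)

# Pearson chi-square statistic: sum of (observed - expected)^2 / expected,
# where expected assumes group and label are independent
chi2 = sum(
    (counts[(g, l)] - row[g] * col[l] / n) ** 2 / (row[g] * col[l] / n)
    for g, l in product(groups, labels)
)
print(f"chi-square statistic: {chi2:.3f}")
```

A large statistic relative to the chi-square distribution's critical value (for the appropriate degrees of freedom) would indicate that label assignment depends on group membership.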
Related papers
- Finding the white male: The prevalence and consequences of algorithmic gender and race bias in political Google searches [0.0]
This article proposes and tests a framework of algorithmic representation of minoritized groups in a series of four studies.
First, two algorithm audits of political image searches delineate how search engines reflect and uphold structural inequalities by under- and misrepresenting women and non-white politicians.
Second, two online experiments show that these biases in algorithmic representation in turn distort perceptions of the political reality and actively reinforce a white and masculinized view of politics.
arXiv Detail & Related papers (2024-05-01T05:57:03Z)
- Multi-Dimensional Ability Diagnosis for Machine Learning Algorithms [88.93372675846123]
We propose a task-agnostic evaluation framework Camilla for evaluating machine learning algorithms.
We use cognitive diagnosis assumptions and neural networks to learn the complex interactions among algorithms, samples and the skills of each sample.
In our experiments, Camilla outperforms state-of-the-art baselines on metric reliability, rank consistency, and rank stability.
arXiv Detail & Related papers (2023-07-14T03:15:56Z)
- A Gold Standard Dataset for the Reviewer Assignment Problem [117.59690218507565]
"Similarity score" is a numerical estimate of the expertise of a reviewer in reviewing a paper.
Our dataset consists of 477 self-reported expertise scores provided by 58 researchers.
For the task of ordering two papers in terms of their relevance for a reviewer, the error rates range from 12%-30% in easy cases to 36%-43% in hard cases.
arXiv Detail & Related papers (2023-03-23T16:15:03Z)
- Diversity matters: Robustness of bias measurements in Wikidata [4.950095974653716]
We reveal data biases that surface in Wikidata for thirteen different demographics selected from seven continents.
We conduct our extensive experiments on a large number of occupations sampled from the thirteen demographics with respect to the sensitive attribute, i.e., gender.
We show that the choice of the state-of-the-art KG embedding algorithm has a strong impact on the ranking of biased occupations irrespective of gender.
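Ranking occupations by bias in a knowledge-graph embedding typically means projecting occupation vectors onto a direction associated with the sensitive attribute. The paper above does not publish its exact procedure, so the following is a minimal, hypothetical sketch: the three-dimensional vectors and the occupation names are invented for illustration.

```python
# Minimal sketch (not the paper's method): rank occupations by how
# strongly their toy embedding vectors align with a "gender direction"
# built from attribute vectors. All vectors here are invented.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# hypothetical 3-d embeddings
emb = {
    "male":     (1.0, 0.1, 0.0),
    "female":   (-1.0, 0.1, 0.0),
    "nurse":    (-0.8, 0.5, 0.1),
    "engineer": (0.7, 0.6, 0.0),
    "teacher":  (0.0, 0.9, 0.2),
}

# direction from "female" toward "male" in the embedding space
gender_dir = tuple(m - f for m, f in zip(emb["male"], emb["female"]))

occupations = ["nurse", "engineer", "teacher"]
scores = {o: cosine(emb[o], gender_dir) for o in occupations}
# most "male-leaning" first
ranking = sorted(occupations, key=scores.get, reverse=True)
print(ranking)
```

With a real KG embedding, the ranking produced this way can shift depending on which embedding algorithm was used to fit the vectors, which is the sensitivity the paper reports.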
arXiv Detail & Related papers (2023-02-27T18:38:10Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Reinforcement Learning Algorithms: An Overview and Classification [0.0]
We identify three main environment types and classify reinforcement learning algorithms according to those environment types.
The overview of each algorithm provides insight into the algorithms' foundations and reviews similarities and differences among algorithms.
arXiv Detail & Related papers (2022-09-29T16:58:42Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- How to transfer algorithmic reasoning knowledge to learn new algorithms? [23.335939830754747]
We investigate how algorithms for which we have access to the execution trace can be used to learn to solve similar tasks for which no traces are available.
We create a dataset including 9 algorithms and 3 different graph types.
We validate this empirically and show how multi-task learning can instead be used to achieve the transfer of algorithmic reasoning knowledge.
arXiv Detail & Related papers (2021-10-26T22:14:47Z)
- A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
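At its core, crafting an adversarial sample against a clustering algorithm means finding a small perturbation that changes the sample's cluster assignment. The toy sketch below is not the paper's attack: it uses invented one-dimensional data, fixed centroids, and a naive step search, purely to illustrate the assignment-flipping idea.

```python
# Toy illustration (not the paper's attack): nudge a point toward the
# other cluster's centroid until its nearest-centroid assignment flips.
# Data, centroids, and step size are invented for clarity.

def nearest_centroid(x, centroids):
    """Index of the centroid closest to scalar point x."""
    return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))

centroids = [0.0, 10.0]   # pretend these were fit on clean data
x = 2.0                   # a sample currently assigned to cluster 0
step = 0.5

adv = x
while nearest_centroid(adv, centroids) == 0:
    adv += step           # search for the smallest flipping perturbation

print(adv, nearest_centroid(adv, centroids))
```

A real black-box attack would query the clustering algorithm itself rather than fixed centroids, and optimize the perturbation jointly over high-dimensional inputs; the transferability finding above means such perturbations often also fool supervised models trained on the same data.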
arXiv Detail & Related papers (2020-09-09T18:19:31Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.