The Algorithmic Imprint
- URL: http://arxiv.org/abs/2206.03275v1
- Date: Fri, 3 Jun 2022 15:44:44 GMT
- Title: The Algorithmic Imprint
- Authors: Upol Ehsan, Ranjit Singh, Jacob Metcalf, Mark O. Riedl
- Abstract summary: We introduce the notion of the "algorithmic imprint" to illustrate how merely removing an algorithm does not necessarily undo or mitigate its consequences.
We show how the removal failed to undo the algorithmic imprint on the sociotechnical infrastructures that shape students', teachers', and parents' lives.
We situate our case study in Bangladesh to illustrate how algorithms made in the Global North disproportionately impact stakeholders in the Global South.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: When algorithmic harms emerge, a reasonable response is to stop using the
algorithm to resolve concerns related to fairness, accountability,
transparency, and ethics (FATE). However, just because an algorithm is removed
does not imply its FATE-related issues cease to exist. In this paper, we
introduce the notion of the "algorithmic imprint" to illustrate how merely
removing an algorithm does not necessarily undo or mitigate its consequences.
We operationalize this concept and its implications through the 2020 events
surrounding the algorithmic grading of the General Certificate of Education
(GCE) Advanced (A) Level exams, an internationally recognized UK-based high
school diploma exam administered in over 160 countries. While the algorithmic
standardization was ultimately removed due to global protests, we show how the
removal failed to undo the algorithmic imprint on the sociotechnical
infrastructures that shape students', teachers', and parents' lives. These
events provide a rare chance to analyze the state of the world both with and
without algorithmic mediation. We situate our case study in Bangladesh to
illustrate how algorithms made in the Global North disproportionately impact
stakeholders in the Global South. Chronicling more than a year-long community
engagement consisting of 47 interviews, we present the first coherent timeline
of "what" happened in Bangladesh, contextualizing "why" and "how" it happened
through the lenses of the algorithmic imprint and situated algorithmic
fairness. Analyzing these events, we highlight how the contours of the
algorithmic imprints can be inferred at the infrastructural, social, and
individual levels. We share conceptual and practical implications around how
imprint-awareness can (a) broaden the boundaries of how we think about
algorithmic impact, (b) inform how we design algorithms, and (c) guide us in AI
governance.
Related papers
- Why Algorithms Remain Unjust: Power Structures Surrounding Algorithmic Activity [0.0]
Reformists have failed to curtail algorithmic injustice because they ignore the power structure surrounding algorithms.
I argue that the reason Algorithmic Activity is unequal, undemocratic, and unsustainable is that the power structure shaping it is one of economic empowerment rather than social empowerment.
arXiv Detail & Related papers (2024-05-28T17:49:24Z) - Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z) - Discovering General Reinforcement Learning Algorithms with Adversarial Environment Design [54.39859618450935]
We show that it is possible to meta-learn update rules, with the hope of discovering algorithms that can perform well on a wide range of RL tasks.
Despite impressive initial results from algorithms such as Learned Policy Gradient (LPG), there remains a gap when these algorithms are applied to unseen environments.
In this work, we examine how characteristics of the meta-supervised-training distribution impact the performance of these algorithms.
arXiv Detail & Related papers (2023-10-04T12:52:56Z) - Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z) - How to transfer algorithmic reasoning knowledge to learn new algorithms? [23.335939830754747]
We investigate how we can use algorithms for which we have access to the execution trace to learn to solve similar tasks for which we do not.
We create a dataset including 9 algorithms and 3 different graph types.
We validate this empirically and show how instead multi-task learning can be used to achieve the transfer of algorithmic reasoning knowledge.
arXiv Detail & Related papers (2021-10-26T22:14:47Z) - Beyond Algorithmic Bias: A Socio-Computational Interrogation of the Google Search by Image Algorithm [0.799536002595393]
We audit the algorithm by presenting it with more than 40 thousand faces spanning all ages and more than four races.
We find that the algorithm reproduces white male patriarchal structures, often simplifying, stereotyping and discriminating females and non-white individuals.
arXiv Detail & Related papers (2021-05-26T21:40:43Z) - An Introduction to Algorithmic Fairness [0.0]
We list different types of fairness-related harms, explain two main notions of algorithmic fairness, and map the biases that cause these harms onto the machine learning development process.
arXiv Detail & Related papers (2021-05-12T11:26:34Z) - A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z) - A Brief Look at Generalization in Visual Meta-Reinforcement Learning [56.50123642237106]
We evaluate the generalization performance of meta-reinforcement learning algorithms.
We find that these algorithms can display strong overfitting when they are evaluated on challenging tasks.
arXiv Detail & Related papers (2020-06-12T15:17:17Z) - Adversarial Online Learning with Changing Action Sets: Efficient Algorithms with Approximate Regret Bounds [48.312484940846]
We revisit the problem of online learning with sleeping experts/bandits.
In each time step, only a subset of the actions are available for the algorithm to choose from.
We give an algorithm that provides a no-approximate-regret guarantee for the general sleeping expert/bandit problems.
arXiv Detail & Related papers (2020-03-07T02:13:21Z) - Algorithmic Fairness [11.650381752104298]
It is crucial to develop AI algorithms that are not only accurate but also objective and fair.
Recent studies have shown that algorithmic decision-making may be inherently prone to unfairness.
arXiv Detail & Related papers (2020-01-21T19:01:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.