Accountability in an Algorithmic Society: Relationality, Responsibility,
and Robustness in Machine Learning
- URL: http://arxiv.org/abs/2202.05338v3
- Date: Fri, 13 May 2022 23:55:48 GMT
- Title: Accountability in an Algorithmic Society: Relationality, Responsibility,
and Robustness in Machine Learning
- Authors: A. Feder Cooper and Emanuel Moss and Benjamin Laufer and Helen
Nissenbaum
- Abstract summary: In 1996, Nissenbaum issued a clarion call concerning the erosion of accountability in society due to the ubiquitous delegation of consequential functions to computerized systems.
We revisit Nissenbaum's original paper in relation to the ascendance of data-driven algorithmic systems.
We discuss how the barriers present difficulties for instantiating a unified moral, relational framework in practice for data-driven algorithmic systems.
- Score: 4.958893997693021
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In 1996, Accountability in a Computerized Society [95] issued a clarion call
concerning the erosion of accountability in society due to the ubiquitous
delegation of consequential functions to computerized systems. Nissenbaum [95]
described four barriers to accountability that computerization presented, which
we revisit in relation to the ascendance of data-driven algorithmic
systems--i.e., machine learning or artificial intelligence--to uncover new
challenges for accountability that these systems present. Nissenbaum's original
paper grounded discussion of the barriers in moral philosophy; we bring this
analysis together with recent scholarship on relational accountability
frameworks and discuss how the barriers present difficulties for instantiating
a unified moral, relational framework in practice for data-driven algorithmic
systems. We conclude by discussing ways of weakening the barriers in order to
do so.
Related papers
- Causal Responsibility Attribution for Human-AI Collaboration [62.474732677086855]
This paper presents a causal framework using Structural Causal Models (SCMs) to systematically attribute responsibility in human-AI systems.
Two case studies illustrate the framework's adaptability in diverse human-AI collaboration scenarios.
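As a hedged illustration of the counterfactual question behind SCM-based responsibility attribution, the toy Python sketch below asks whether the harm would have occurred had a given agent acted differently. The variable names, structural equation, and but-for attribution rule are assumptions for illustration, not the cited paper's formalism.

```python
# Illustrative toy only: a hand-rolled structural causal model (SCM) for a
# human-AI decision, used to ask the counterfactual question behind
# responsibility attribution ("would the harm have occurred had this agent
# acted differently?"). Names and the attribution rule are assumptions.

def outcome(human_accepts: bool, ai_flags_risk: bool) -> bool:
    """Structural equation for harm: harm occurs only if the AI fails to flag
    the risk and the human accepts the recommendation."""
    return human_accepts and not ai_flags_risk

def but_for_responsible(agent: str, human_accepts: bool, ai_flags_risk: bool) -> bool:
    """But-for (counterfactual) test: intervene on one agent's action and check
    whether the harm disappears."""
    factual = outcome(human_accepts, ai_flags_risk)
    if not factual:
        return False  # no harm occurred, nothing to attribute
    if agent == "human":
        counterfactual = outcome(not human_accepts, ai_flags_risk)
    else:  # agent == "ai"
        counterfactual = outcome(human_accepts, not ai_flags_risk)
    return counterfactual != factual  # flipping the action would have avoided the harm

# Example: the AI missed the risk and the human accepted its recommendation.
print(but_for_responsible("human", human_accepts=True, ai_flags_risk=False))  # True
print(but_for_responsible("ai", human_accepts=True, ai_flags_risk=False))     # True
```

In this toy case both agents pass the but-for test, which is exactly the kind of shared-responsibility situation such frameworks are meant to handle.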
arXiv Detail & Related papers (2024-11-05T17:17:45Z)
- Unraveling the Nuances of AI Accountability: A Synthesis of Dimensions Across Disciplines [0.0]
We review current research across multiple disciplines and identify key dimensions of accountability in the context of AI.
We reveal six themes with 13 corresponding dimensions and additional accountability facilitators.
arXiv Detail & Related papers (2024-10-05T18:08:39Z)
- Attributing Responsibility in AI-Induced Incidents: A Computational Reflective Equilibrium Framework for Accountability [13.343937277604892]
The pervasive integration of Artificial Intelligence (AI) has introduced complex challenges for attributing responsibility and accountability when incidents involve AI-enabled systems.
This work proposes a coherent and ethically acceptable responsibility attribution framework for all stakeholders.
arXiv Detail & Related papers (2024-04-25T18:11:03Z)
- Disentangling the Causes of Plasticity Loss in Neural Networks [55.23250269007988]
We show that loss of plasticity can be decomposed into multiple independent mechanisms.
We show that a combination of layer normalization and weight decay is highly effective at maintaining plasticity in a variety of synthetic nonstationary learning tasks.
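The recipe mentioned in this summary (layer normalization inside the network plus weight decay in the optimizer) can be sketched in a few lines of PyTorch. The architecture and hyperparameters below are illustrative placeholders, not the configuration used in the cited paper.

```python
# Minimal PyTorch sketch of the general recipe: layer normalization in the
# network plus weight decay in the optimizer. Sizes and learning rates are
# placeholders, not the cited paper's setup.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(32, 128),
    nn.LayerNorm(128),   # layer normalization after each hidden layer
    nn.ReLU(),
    nn.Linear(128, 128),
    nn.LayerNorm(128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# weight_decay applies L2 regularization, which keeps weight norms from
# drifting upward as the data distribution shifts over time.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

# Standard training step; under nonstationarity the same loop is simply run
# on the new task or distribution without reinitializing the network.
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
loss = nn.functional.cross_entropy(model(x), y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```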
arXiv Detail & Related papers (2024-02-29T00:02:33Z)
- Quantum-Inspired Analysis of Neural Network Vulnerabilities: The Role of Conjugate Variables in System Attacks [54.565579874913816]
Neural networks are inherently vulnerable to small, non-random perturbations that emerge as adversarial attacks.
A mathematical congruence between this mechanism and the uncertainty principle of quantum physics sheds light on a previously unanticipated interdisciplinary connection.
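For a concrete sense of the "small, non-random perturbations" this summary refers to, the sketch below uses the classic fast gradient sign method (FGSM); this is the standard construction, not the cited paper's quantum-inspired analysis, and the model and epsilon are throwaway placeholders.

```python
# Standard FGSM-style adversarial perturbation: shift the input by eps in the
# direction of the sign of the loss gradient with respect to the input.
# Model and epsilon are illustrative placeholders.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """Return an adversarial example bounded by eps in the max norm."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

# Example with a throwaway linear classifier on flattened 28x28 inputs.
model = nn.Linear(784, 10)
x, y = torch.rand(1, 784), torch.tensor([3])
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x).abs().max())  # perturbation magnitude stays within eps
```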
arXiv Detail & Related papers (2024-02-16T02:11:27Z)
- Causal Reinforcement Learning: A Survey [57.368108154871]
Reinforcement learning is an essential paradigm for solving sequential decision problems under uncertainty.
One of the main obstacles is that reinforcement learning agents lack a fundamental understanding of the world.
Causality offers a notable advantage as it can formalize knowledge in a systematic manner.
arXiv Detail & Related papers (2023-07-04T03:00:43Z)
- Interpreting Neural Policies with Disentangled Tree Representations [58.769048492254555]
We study interpretability of compact neural policies through the lens of disentangled representation.
We leverage decision trees to obtain factors of variation for disentanglement in robot learning.
We introduce interpretability metrics that measure disentanglement of learned neural dynamics.
arXiv Detail & Related papers (2022-10-13T01:10:41Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
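The "observed disparity" side of the linkage described in this summary can be illustrated with a tiny Python snippet that computes the gap in favorable decision rates between two groups; how such a gap is decomposed into underlying causal mechanisms is the subject of the cited paper and is not reproduced here, and the data below are made-up placeholders.

```python
# Toy computation of an observed disparity: the difference in favorable
# decision rates between two groups. Data are made-up placeholders.
decisions = [  # (group, favorable_decision)
    ("a", 1), ("a", 1), ("a", 0), ("a", 1),
    ("b", 0), ("b", 1), ("b", 0), ("b", 0),
]

def positive_rate(group: str) -> float:
    outcomes = [y for g, y in decisions if g == group]
    return sum(outcomes) / len(outcomes)

disparity = positive_rate("a") - positive_rate("b")  # total-variation-style gap
print(f"observed disparity: {disparity:+.2f}")       # +0.50 for these toy data
```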
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Making the Unaccountable Internet: The Changing Meaning of Accounting in the Early ARPANET [2.6397379133308214]
This paper offers a critique of technologically essentialist notions of accountability and the characterization of the "unaccountable Internet" as an unintended consequence.
It explores the changing meaning of accounting and its relationship to accountability in a selected corpus of requests for comments concerning the early Internet's design from the 1970s and 80s.
arXiv Detail & Related papers (2022-01-28T01:42:58Z)
- Accountability in AI: From Principles to Industry-specific Accreditation [4.033641609534416]
Recent AI-related scandals have put a spotlight on accountability in AI.
This paper draws on literature from public policy and governance to make two contributions.
arXiv Detail & Related papers (2021-10-08T16:37:11Z)
- A machine-learning software-systems approach to capture social, regulatory, governance, and climate problems [0.0]
It discusses the role of an artificially intelligent computer system, acting as a critique-based, implicitly organizational, and inherently necessary device deployed in synchrony with parallel governmental policy, as a genuine means of capturing the nation's population in quantitative form, public contentment across societal-cooperative economic groups, regulatory propositions, and governance effectiveness.
arXiv Detail & Related papers (2020-02-23T13:00:52Z)