Making the Unaccountable Internet: The Changing Meaning of Accounting in
the Early ARPANET
- URL: http://arxiv.org/abs/2201.11884v3
- Date: Wed, 11 May 2022 17:34:05 GMT
- Title: Making the Unaccountable Internet: The Changing Meaning of Accounting in
the Early ARPANET
- Authors: A. Feder Cooper and Gili Vidan
- Abstract summary: This paper offers a critique of technologically essentialist notions of accountability and the characterization of the "unaccountable Internet" as an unintended consequence.
It explores the changing meaning of accounting and its relationship to accountability in a selected corpus of requests for comments concerning the early Internet's design from the 1970s and 80s.
- Score: 2.6397379133308214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contemporary concerns over the governance of technological systems often run
up against narratives about the technical infeasibility of designing mechanisms
for accountability. While in recent AI ethics literature these concerns have
been deliberated predominantly in relation to ML, other instances in computing
history also presented circumstances in which computer scientists needed to
un-muddle what it means to design accountable systems. One such compelling
narrative can be found in canonical histories of the Internet that highlight
how its original designers' commitment to the "End-to-End" architectural
principle precluded other features from being implemented, resulting in the
fast-growing, generative, but ultimately unaccountable network we have today.
This paper offers a critique of such technologically essentialist notions of
accountability and the characterization of the "unaccountable Internet" as an
unintended consequence. It explores the changing meaning of accounting and its
relationship to accountability in a selected corpus of requests for comments
(RFCs) concerning the early Internet's design from the 1970s and 80s. We
characterize four ways of conceptualizing accounting: as billing, as
measurement, as management, and as policy, and demonstrate how an understanding
of accountability was constituted through these shifting meanings. We link
together the administrative and technical mechanisms of accounting for shared
resources in a distributed system and an emerging notion of accountability as a
social, political, and technical category, arguing that the former is
constitutive of the latter. Recovering this history is not only important for
understanding the processes that shaped the Internet, but also serves as a
starting point for unpacking the complicated political choices that are
involved in designing accountability mechanisms for other technological systems
today.
Related papers
- CyberNFTs: Conceptualizing a decentralized and reward-driven intrusion detection system with ML [0.0]
The study employs an analytical and comparative methodology, examining the synergy between cutting-edge Web3 technologies and information security.
The proposed model incorporates blockchain concepts, cyber non-fungible token (cyberNFT) rewards, machine learning algorithms, and publish/subscribe architectures.
arXiv Detail & Related papers (2024-08-31T21:15:26Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Neuro-Symbolic Artificial Intelligence (AI) for Intent based Semantic Communication [85.06664206117088]
6G networks must consider the semantics and the effectiveness (at the end user) of data transmission.
NeSy AI is proposed as a pillar for learning the causal structure behind the observed data.
GFlowNet is leveraged for the first time in a wireless system to learn the probabilistic structure which generates the data.
arXiv Detail & Related papers (2022-05-22T07:11:57Z)
- Introduction to the Artificial Intelligence that can be applied to the Network Automation Journey [68.8204255655161]
The "Intent-Based Networking - Concepts and Definitions" document describes the different parts of the ecosystem that could be involved in NetDevOps.
The recognize, generate intent, translate, and refine features require new ways of implementing algorithms.
arXiv Detail & Related papers (2022-04-02T08:12:08Z)
- Accountability in an Algorithmic Society: Relationality, Responsibility, and Robustness in Machine Learning [4.958893997693021]
In 1996, Nissenbaum issued a clarion call concerning the erosion of accountability in society due to the ubiquitous delegation of consequential functions to computerized systems.
We revisit Nissenbaum's original paper in relation to the ascendance of data-driven algorithmic systems.
We discuss how the barriers present difficulties for instantiating a unified moral, relational framework in practice for data-driven algorithmic systems.
arXiv Detail & Related papers (2022-02-10T21:39:02Z)
- Accountability in AI: From Principles to Industry-specific Accreditation [4.033641609534416]
Recent AI-related scandals have put a spotlight on accountability in AI.
This paper draws on literature from public policy and governance to make two contributions.
arXiv Detail & Related papers (2021-10-08T16:37:11Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper provides a comprehensive analysis of existing concepts of intelligence drawn from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Towards an accountable Internet of Things: A call for reviewability [5.607676459156789]
This chapter outlines aspects of accountability as they relate to the Internet of Things.
Specifically, we argue for the urgent need for mechanisms that facilitate the review of IoT systems.
arXiv Detail & Related papers (2021-02-16T13:09:07Z)
- Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems [1.0152838128195467]
Traceability requires establishing not only how a system worked but how it was created and for what purpose.
Traceability connects records of how the system was constructed and what the system did mechanically to the broader goals of governance.
This map reframes existing discussions around accountability and transparency, using the principle of traceability to show how, when, and why transparency can be deployed to serve accountability goals.
arXiv Detail & Related papers (2021-01-23T00:13:20Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system different from the one for which it was originally purposed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)