Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems
- URL: http://arxiv.org/abs/2101.09385v1
- Date: Sat, 23 Jan 2021 00:13:20 GMT
- Title: Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems
- Authors: Joshua A. Kroll
- Abstract summary: Traceability requires establishing not only how a system worked but how it was created and for what purpose.
Traceability connects records of how the system was constructed and what the system did mechanically to the broader goals of governance.
This map reframes existing discussions around accountability and transparency, using the principle of traceability to show how, when, and why transparency can be deployed to serve accountability goals.
- Score: 1.0152838128195467
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accountability is widely understood as a goal for well governed computer
systems, and is a sought-after value in many governance contexts. But how can
it be achieved? Recent work on standards for governable artificial intelligence
systems offers a related principle: traceability. Traceability requires
establishing not only how a system worked but how it was created and for what
purpose, in a way that explains why a system has particular dynamics or
behaviors. It connects records of how the system was constructed and what the
system did mechanically to the broader goals of governance, in a way that
highlights human understanding of that mechanical operation and the decision
processes underlying it. We examine the various ways in which the principle of
traceability has been articulated in AI principles and other policy documents
from around the world, distill from these a set of requirements on software
systems driven by the principle, and systematize the technologies available to
meet those requirements. From our map of requirements to supporting tools,
techniques, and procedures, we identify gaps and needs separating what
traceability requires from the toolbox available for practitioners. This map
reframes existing discussions around accountability and transparency, using the
principle of traceability to show how, when, and why transparency can be
deployed to serve accountability goals and thereby improve the normative
fidelity of systems and their development processes.
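As an illustration only (not a structure proposed in the paper), the hedged Python sketch below shows one way a traceability record might tie what a system did mechanically to records of how and why it was built; every class and field name here is an assumption made for the example.

```python
# Hypothetical sketch of a traceability record (illustrative only; the paper
# does not prescribe this structure). It links a system's run-time behavior
# back to records of how and why the system was constructed.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List


@dataclass
class ConstructionRecord:
    """How the system was created and for what purpose."""
    stated_purpose: str          # the governance goal the system serves
    design_decisions: List[str]  # rationale behind key design choices
    training_data_version: str   # provenance of the data used to build it
    code_commit: str             # exact version of the implementation


@dataclass
class OperationRecord:
    """What the system did mechanically at run time."""
    timestamp: datetime
    inputs_digest: str   # hash of the inputs, not the raw data
    output: str
    model_version: str   # ties the decision back to a ConstructionRecord


@dataclass
class TraceabilityRecord:
    """Connects operation to construction, so a reviewer can ask not only
    'what happened?' but 'why was the system built to behave this way?'"""
    construction: ConstructionRecord
    operations: List[OperationRecord] = field(default_factory=list)

    def log(self, inputs_digest: str, output: str) -> None:
        self.operations.append(OperationRecord(
            timestamp=datetime.now(timezone.utc),
            inputs_digest=inputs_digest,
            output=output,
            model_version=self.construction.code_commit,
        ))
```

The design point worth noting is that each logged operation carries the construction version, so a reviewer can walk back from an individual output to the design decisions and stated purpose recorded when the system was built.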
Related papers
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability [0.0]
As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial.
We propose a user-centered, compliant-by-design approach to transparency in AI systems.
By providing a comprehensive understanding of the challenges associated with transparency in AI systems, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values.
arXiv Detail & Related papers (2023-10-13T04:25:30Z)
- Data-Centric Governance [6.85316573653194]
Current AI governance approaches consist mainly of manual review and documentation processes.
Modern AI systems are data-centric: they act on data, produce data, and are built through data engineering.
This work explores the systematization of governance requirements via datasets and algorithmic evaluations; a minimal illustrative sketch of this idea appears after this list.
arXiv Detail & Related papers (2023-02-14T07:22:32Z)
- FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems [69.24490096929709]
We developed an open source Python package called FAT Forensics.
It can inspect important fairness, accountability and transparency aspects of predictive algorithms.
Our toolbox can evaluate all elements of a predictive pipeline.
arXiv Detail & Related papers (2022-09-08T13:25:02Z)
- Introduction to the Artificial Intelligence that can be applied to the Network Automation Journey [68.8204255655161]
The "Intent-Based Networking - Concepts and Definitions" document describes the different parts of the ecosystem that could be involved in NetDevOps.
The recognize, generate-intent, translate, and refine features require a new way of implementing algorithms.
arXiv Detail & Related papers (2022-04-02T08:12:08Z)
- Making the Unaccountable Internet: The Changing Meaning of Accounting in the Early ARPANET [2.6397379133308214]
This paper offers a critique of technologically essentialist notions of accountability and the characterization of the "unaccountable Internet" as an unintended consequence.
It explores the changing meaning of accounting and its relationship to accountability in a selected corpus of requests for comments concerning the early Internet's design from the 1970s and 80s.
arXiv Detail & Related papers (2022-01-28T01:42:58Z)
- Accountability in AI: From Principles to Industry-specific Accreditation [4.033641609534416]
Recent AI-related scandals have shone a spotlight on accountability in AI.
This paper draws on literature from public policy and governance to make two contributions.
arXiv Detail & Related papers (2021-10-08T16:37:11Z)
- Automated Machine Learning, Bounded Rationality, and Rational Metareasoning [62.997667081978825]
We will look at automated machine learning (AutoML) and related problems from the perspective of bounded rationality.
Taking actions under bounded resources requires an agent to reflect on how to use these resources in an optimal way.
arXiv Detail & Related papers (2021-09-10T09:10:20Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and means-to-an-end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information necessary to assess a component's portability to a system different from the one for which it was originally purposed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
- A Structured Approach to Trustworthy Autonomous/Cognitive Systems [4.56877715768796]
There is no generally accepted approach to ensure trustworthiness.
This paper presents a framework to fill exactly this gap.
It proposes a reference lifecycle as a structured approach that is based on current safety standards.
arXiv Detail & Related papers (2020-02-19T14:36:27Z)
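Relating to the Data-Centric Governance entry above, the following hypothetical Python sketch shows how a governance requirement might be expressed as an automated evaluation over data rather than a manual review step; the metric, grouping, and thresholds are illustrative assumptions, not values taken from that paper.

```python
# Hypothetical sketch: a governance requirement expressed as an automated
# evaluation over data, in the spirit of the Data-Centric Governance entry
# above. The requirement, metric, and thresholds are illustrative assumptions.
from typing import Callable, Sequence


def accuracy(predict: Callable[[Sequence[float]], int],
             examples: Sequence[Sequence[float]],
             labels: Sequence[int]) -> float:
    """Fraction of examples the model labels correctly."""
    correct = sum(1 for x, y in zip(examples, labels) if predict(x) == y)
    return correct / len(labels)


def check_requirement(predict, examples, labels, groups,
                      min_accuracy: float = 0.9,
                      max_group_gap: float = 0.05) -> bool:
    """Requirement (assumed for illustration): overall accuracy meets a floor,
    and accuracy does not differ across groups by more than a stated gap."""
    overall = accuracy(predict, examples, labels)
    per_group = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        per_group[g] = accuracy(predict,
                                [examples[i] for i in idx],
                                [labels[i] for i in idx])
    gap = max(per_group.values()) - min(per_group.values())
    return overall >= min_accuracy and gap <= max_group_gap
```

A check of this kind could run in continuous integration, turning a documented governance requirement into a repeatable, data-centric evaluation rather than a one-off manual review.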