Dynamic Documentation for AI Systems
- URL: http://arxiv.org/abs/2303.10854v1
- Date: Mon, 20 Mar 2023 04:23:07 GMT
- Title: Dynamic Documentation for AI Systems
- Authors: Soham Mehta, Anderson Rogers and Thomas Krendl Gilbert
- Abstract summary: We show the limits of present documentation protocols for AI systems.
We argue for dynamic documentation as a new paradigm for understanding and evaluating AI systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI documentation is a rapidly-growing channel for coordinating the design of
AI technologies with policies for transparency and accessibility. Calls to
standardize and enact documentation of algorithmic harms and impacts are now
commonplace. However, documentation standards for AI remain inchoate, and fail
to match the capabilities and social effects of increasingly impactful
architectures such as Large Language Models (LLMs). In this paper, we show the
limits of present documentation protocols, and argue for dynamic documentation
as a new paradigm for understanding and evaluating AI systems. We first review
canonical approaches to system documentation outside the context of AI,
focusing on the complex history of Environmental Impact Statements (EISs). We
next compare critical elements of the EIS framework to present challenges with
algorithmic documentation, which have inherited the limitations of EISs without
incorporating their strengths. These challenges are specifically illustrated
through the growing popularity of Model Cards and two case studies of
algorithmic impact assessment in China and Canada. Finally, we evaluate more
recent proposals, including Reward Reports, as potential components of fully
dynamic AI documentation protocols.
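The paper itself does not prescribe an implementation, but the core idea of dynamic documentation can be made concrete with a small data-structure sketch: documentation modeled as an append-only, time-stamped log attached to a deployed system, so that evaluations, incidents, and objective changes accumulate over the system's lifecycle rather than being frozen at release time. This is a minimal illustration only; the class and field names below (`DynamicDoc`, `DocEntry`, the example categories) are assumptions for the sketch, not artifacts defined by the paper or by Reward Reports.

```python
# Minimal sketch of "dynamic" documentation as an append-only, time-stamped log.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class DocEntry:
    """One documentation update (e.g., an evaluation result, an observed
    impact, or a change to the system's objective specification)."""
    timestamp: datetime
    author: str
    category: str          # e.g., "evaluation", "incident", "objective-change"
    summary: str


@dataclass
class DynamicDoc:
    """Documentation for an AI system that is updated over its lifecycle."""
    system_name: str
    entries: list[DocEntry] = field(default_factory=list)

    def add_entry(self, author: str, category: str, summary: str) -> None:
        # Entries are appended, never rewritten, so the documentation keeps
        # a history of how the system and its impacts changed over time.
        self.entries.append(
            DocEntry(datetime.now(timezone.utc), author, category, summary)
        )

    def history(self, category: str | None = None) -> list[DocEntry]:
        """Return entries in chronological order, optionally filtered by category."""
        selected = [e for e in self.entries if category is None or e.category == category]
        return sorted(selected, key=lambda e: e.timestamp)


# Example usage:
doc = DynamicDoc("example-recommender")
doc.add_entry("eval-team", "evaluation", "Quarterly fairness audit results attached.")
doc.add_entry("ops", "incident", "Observed feedback loop in click-through optimization.")
for entry in doc.history():
    print(entry.timestamp.date(), entry.category, "-", entry.summary)
```

A fully dynamic protocol in the paper's sense would also tie such entries to governance processes (for example, who must review an "incident" entry and when a re-assessment is triggered), which a plain data structure can only hint at.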
Related papers
- SONAR: A Synthetic AI-Audio Detection Framework and Benchmark [59.09338266364506]
SONAR is a synthetic AI-audio detection framework and benchmark that aims to provide a comprehensive evaluation for distinguishing cutting-edge AI-synthesized auditory content.
It is the first framework to uniformly benchmark AI-audio detection across both traditional and foundation model-based deepfake detection systems.
arXiv Detail & Related papers (2024-10-06T01:03:42Z)
- AI Cards: Towards an Applied Framework for Machine-Readable AI and Risk Documentation Inspired by the EU AI Act [2.1897070577406734]
Despite the importance of such documentation, there is a lack of standards and guidelines to assist with drawing up AI and risk documentation aligned with the EU AI Act.
We propose AI Cards as a novel holistic framework for representing a given intended use of an AI system.
arXiv Detail & Related papers (2024-06-26T09:51:49Z)
- Coordinated Flaw Disclosure for AI: Beyond Security Vulnerabilities [1.3225694028747144]
We propose a Coordinated Flaw Disclosure (CFD) framework tailored to the complexities of machine learning (ML) issues.
Our framework introduces innovations such as extended model cards, dynamic scope expansion, an independent adjudication panel, and an automated verification process.
We argue that CFD could significantly enhance public trust in AI systems.
arXiv Detail & Related papers (2024-02-10T20:39:04Z)
- Levels of AGI for Operationalizing Progress on the Path to AGI [64.59151650272477]
We propose a framework for classifying the capabilities and behavior of Artificial General Intelligence (AGI) models and their precursors.
This framework introduces levels of AGI performance, generality, and autonomy, providing a common language to compare models, assess risks, and measure progress along the path to AGI.
arXiv Detail & Related papers (2023-11-04T17:44:58Z)
- Use case cards: a use case reporting framework inspired by the European AI Act [0.0]
We propose a new framework for the documentation of use cases that we call "use case cards".
Unlike other documentation methodologies, ours focuses on the purpose and operational use of an AI system.
The proposed framework is the result of a co-design process involving a relevant team of EU policy experts and scientists.
arXiv Detail & Related papers (2023-06-23T15:47:19Z)
- Guiding AI-Generated Digital Content with Wireless Perception [69.51950037942518]
We introduce an integration of wireless perception with AI-generated content (AIGC) to improve the quality of digital content production.
The framework employs a novel multi-scale perception technology to read the user's posture, which is difficult to describe accurately in words, and transmits it to the AIGC model as skeleton images.
Because the production process imposes the user's posture as a constraint on the AIGC model, the generated content is more closely aligned with the user's requirements.
arXiv Detail & Related papers (2023-03-26T04:39:03Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- On the Importance of Domain-specific Explanations in AI-based Cybersecurity Systems (Technical Report) [7.316266670238795]
A lack of understanding of the decisions made by AI-based systems can be a major drawback in critical domains such as cybersecurity.
In this paper we make three contributions: (i) proposal and discussion of desiderata for the explanation of outputs generated by AI-based cybersecurity systems; (ii) a comparative analysis of approaches in the literature on Explainable Artificial Intelligence (XAI) under the lens of both our desiderata and further dimensions that are typically used for examining XAI approaches; and (iii) a general architecture that can serve as a roadmap for guiding research efforts towards the development of explainable AI-based cybersecurity systems.
arXiv Detail & Related papers (2021-08-02T22:55:13Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system different from the one for which it was originally purposed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
- A Methodology for Creating AI FactSheets [67.65802440158753]
This paper describes a methodology for creating the form of AI documentation we call FactSheets.
Within each step of the methodology, we describe the issues to consider and the questions to explore.
This methodology will accelerate the broader adoption of transparent AI documentation.
arXiv Detail & Related papers (2020-06-24T15:08:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.