A Blueprint for Auditing Generative AI
- URL: http://arxiv.org/abs/2407.05338v1
- Date: Sun, 7 Jul 2024 11:56:54 GMT
- Title: A Blueprint for Auditing Generative AI
- Authors: Jakob Mokander, Justin Curl, Mihir Kshirsagar,
- Abstract summary: Generative AI systems display emergent capabilities and are adaptable to a wide range of downstream tasks.
Existing auditing procedures fail to address the governance challenges posed by generative AI systems.
We propose a three-layered approach, whereby governance audits (of technology providers that design and disseminate generative AI systems), model audits (of generative AI systems after pre-training but prior to their release), and application audits (of applications built on top of generative AI systems) complement and inform each other.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread use of generative AI systems is coupled with significant ethical and social challenges. As a result, policymakers, academic researchers, and social advocacy groups have all called for such systems to be audited. However, existing auditing procedures fail to address the governance challenges posed by generative AI systems, which display emergent capabilities and are adaptable to a wide range of downstream tasks. In this chapter, we address that gap by outlining a novel blueprint for how to audit such systems. Specifically, we propose a three-layered approach, whereby governance audits (of technology providers that design and disseminate generative AI systems), model audits (of generative AI systems after pre-training but prior to their release), and application audits (of applications built on top of generative AI systems) complement and inform each other. We show how audits on these three levels, when conducted in a structured and coordinated manner, can be a feasible and effective mechanism for identifying and managing some of the ethical and social risks posed by generative AI systems. That said, it is important to remain realistic about what auditing can reasonably be expected to achieve. For this reason, the chapter also discusses the limitations not only of our three-layered approach but also of the prospect of auditing generative AI systems at all. Ultimately, this chapter seeks to expand the methodological toolkit available to technology providers and policymakers who wish to analyse and evaluate generative AI systems from technical, ethical, and legal perspectives.
Related papers
- Assessing the Auditability of AI-integrating Systems: A Framework and Learning Analytics Case Study
We argue that the efficacy of an audit depends on the auditability of the audited system.
We present a framework for assessing the auditability of AI-integrating systems.
arXiv Detail & Related papers (2024-10-29T13:43:21Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Open Problems in Technical AI Governance
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- Auditing of AI: Legal, Ethical and Technical Approaches
AI auditing is a rapidly growing field of research and practice.
Different approaches to AI auditing have different affordances and constraints.
The next step in the evolution of auditing as an AI governance mechanism should be the interlinking of these available approaches.
arXiv Detail & Related papers (2024-07-07T12:49:58Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Advancing AI Audits for Enhanced AI Governance
This policy recommendation summarizes the issues related to the auditing of AI services and systems.
It presents three recommendations for promoting AI auditing that contribute to sound AI governance.
arXiv Detail & Related papers (2023-11-26T16:18:17Z)
- Predictable Artificial Intelligence
This paper introduces the ideas and challenges of Predictable AI.
It explores the ways in which we can anticipate key validity indicators of present and future AI ecosystems.
We argue that achieving predictability is crucial for fostering trust, liability, control, alignment and safety of AI ecosystems.
arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- Auditing large language models: a three-layered approach
Large language models (LLMs) represent a major advance in artificial intelligence (AI) research.
LLMs are also coupled with significant ethical and social challenges.
Previous research has pointed towards auditing as a promising governance mechanism.
arXiv Detail & Related papers (2023-02-16T18:55:21Z)
- Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance
We discuss the challenges of third party oversight in the current AI landscape.
We show that the institutional design of such audits is far from monolithic.
We conclude that the turn toward audits alone is unlikely to achieve actual algorithmic accountability.
arXiv Detail & Related papers (2022-06-09T19:18:47Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020)
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.