AI auditing: The Broken Bus on the Road to AI Accountability
- URL: http://arxiv.org/abs/2401.14462v1
- Date: Thu, 25 Jan 2024 19:00:29 GMT
- Title: AI auditing: The Broken Bus on the Road to AI Accountability
- Authors: Abeba Birhane, Ryan Steed, Victor Ojewale, Briana Vecchione, Inioluwa
Deborah Raji
- Abstract summary: The "AI audit" ecosystem is muddled and imprecise, making it difficult to work through various concepts and map out the stakeholders involved in the practice.
First, we taxonomize current AI audit practices as completed by regulators, law firms, civil society, journalism, academia, and consulting agencies.
We find that only a subset of AI audit studies translate to desired accountability outcomes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the most concrete measures to take towards meaningful AI
accountability is to consequentially assess and report the systems' performance
and impact. However, the practical nature of the "AI audit" ecosystem is
muddled and imprecise, making it difficult to work through various concepts and
map out the stakeholders involved in the practice. First, we taxonomize current
AI audit practices as completed by regulators, law firms, civil society,
journalism, academia, and consulting agencies. Next, we assess the impact of audits
done by stakeholders within each domain. We find that only a subset of AI audit
studies translate to desired accountability outcomes. We thus assess and
isolate the practices necessary for effective AI audit results, articulating how
AI audit design, methodology, and institutional context shape an audit's
effectiveness as a meaningful mechanism for accountability.
Related papers
- From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing [1.196505602609637]
Audits can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing.
There are many operational challenges to AI auditing that complicate its implementation.
We argue that auditing can be cast as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and argue that this framing provides clear and interpretable guidance on audit implementation.
arXiv Detail & Related papers (2024-10-07T06:15:46Z) - Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z) - Auditing of AI: Legal, Ethical and Technical Approaches [0.0]
AI auditing is a rapidly growing field of research and practice.
Different approaches to AI auditing have different affordances and constraints.
The next step in the evolution of auditing as an AI governance mechanism should be the interlinking of these available approaches.
arXiv Detail & Related papers (2024-07-07T12:49:58Z) - Operationalising AI governance through ethics-based auditing: An industry case study [0.0]
Ethics-based auditing (EBA) is a structured process whereby an entity's past or present behaviour is assessed for consistency with moral principles or norms.
This article provides a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective.
arXiv Detail & Related papers (2024-07-07T12:22:38Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - A Framework for Assurance Audits of Algorithmic Systems [2.2342503377379725]
We propose the criterion audit as an operationalizable compliance and assurance external audit framework.
We argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values.
We conclude by offering a critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing.
arXiv Detail & Related papers (2024-01-26T14:38:54Z) - Responsible AI Considerations in Text Summarization Research: A Review
of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z) - An Uncommon Task: Participatory Design in Legal AI [64.54460979588075]
We examine a notable yet understudied AI design process in the legal domain that took place over a decade ago.
We show how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers.
arXiv Detail & Related papers (2022-03-08T15:46:52Z) - Accountability in AI: From Principles to Industry-specific Accreditation [4.033641609534416]
Recent AI-related scandals have shed a spotlight on accountability in AI.
This paper draws on literature from public policy and governance to make two contributions.
arXiv Detail & Related papers (2021-10-08T16:37:11Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.