Who Audits the Auditors? Recommendations from a field scan of the
algorithmic auditing ecosystem
- URL: http://arxiv.org/abs/2310.02521v1
- Date: Wed, 4 Oct 2023 01:40:03 GMT
- Title: Who Audits the Auditors? Recommendations from a field scan of the
algorithmic auditing ecosystem
- Authors: Sasha Costanza-Chock, Emma Harvey, Inioluwa Deborah Raji, Martha
Czernuszenko, Joy Buolamwini
- Abstract summary: We provide the first comprehensive field scan of the AI audit ecosystem.
We identify emerging best practices as well as methods and tools that are becoming commonplace.
We outline policy recommendations to improve the quality and impact of these audits.
- Score: 0.971392598996499
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI audits are an increasingly popular mechanism for algorithmic
accountability; however, they remain poorly defined. Without a clear
understanding of audit practices, let alone widely used standards or regulatory
guidance, claims that an AI product or system has been audited, whether by
first-, second-, or third-party auditors, are difficult to verify and may
exacerbate, rather than mitigate, bias and harm. To address this knowledge gap,
we provide the first comprehensive field scan of the AI audit ecosystem. We
share a catalog of individuals (N=438) and organizations (N=189) who engage in
algorithmic audits or whose work is directly relevant to algorithmic audits;
conduct an anonymous survey of the group (N=152); and interview industry
leaders (N=10). We identify emerging best practices as well as methods and
tools that are becoming commonplace, and enumerate common barriers to
leveraging algorithmic audits as effective accountability mechanisms. We
outline policy recommendations to improve the quality and impact of these
audits, and highlight proposals with wide support from algorithmic auditors as
well as areas of debate. Our recommendations have implications for lawmakers,
regulators, internal company policymakers, and standards-setting bodies, as
well as for auditors. They are: 1) require the owners and operators of AI
systems to engage in independent algorithmic audits against clearly defined
standards; 2) notify individuals when they are subject to algorithmic
decision-making systems; 3) mandate disclosure of key components of audit
findings for peer review; 4) consider real-world harm in the audit process,
including through standardized harm incident reporting and response mechanisms;
5) directly involve the stakeholders most likely to be harmed by AI systems in
the algorithmic audit process; and 6) formalize evaluation and, potentially,
accreditation of algorithmic auditors.
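To make recommendation 4 concrete, the sketch below shows what a standardized harm incident record might contain. The schema and field names are illustrative assumptions, not a format proposed by the paper.
```python
# Hypothetical sketch of a standardized harm incident record (recommendation 4).
# Field names are illustrative assumptions, not a schema proposed by the paper.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class HarmIncident:
    system_id: str            # identifier of the AI system involved
    reported_at: datetime     # when the incident was reported
    harm_type: str            # e.g. "discrimination", "privacy", "safety"
    affected_group: str       # stakeholder group that experienced the harm
    description: str          # free-text account of what happened
    severity: int             # e.g. 1 (minor) to 5 (severe)
    response_actions: list[str] = field(default_factory=list)  # remediation taken

incident = HarmIncident(
    system_id="resume-screener-v2",
    reported_at=datetime(2023, 10, 4),
    harm_type="discrimination",
    affected_group="female applicants",
    description="Qualified applicants were systematically ranked lower.",
    severity=4,
)
incident.response_actions.append("model rollback pending re-audit")
```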
Related papers
- From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing [1.196505602609637]
Audits can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing.
There are many operational challenges to AI auditing that complicate its implementation.
We argue that auditing can be cast as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and show that this framing provides clear and interpretable guidance on audit implementation.
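A minimal sketch of the hypothesis-test framing, assuming a simple two-group parity audit of a binary decision system (an illustration of the idea, not the authors' implementation):
```python
# Toy audit cast as a hypothesis test: H0 says the audited system's
# positive-decision rate is identical for groups A and B.
import math

def two_proportion_audit(pos_a: int, n_a: int, pos_b: int, n_b: int) -> float:
    """Two-sided p-value for H0: both groups share one positive rate."""
    p_pooled = (pos_a + pos_b) / (n_a + n_b)
    se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (pos_a / n_a - pos_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability

# Hypothetical audit sample: 480/1000 approvals for group A, 420/1000 for B.
p = two_proportion_audit(480, 1000, 420, 1000)
print(f"p-value = {p:.4f}")  # small p -> evidence against equal treatment
```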
arXiv Detail & Related papers (2024-10-07T06:15:46Z)
- A Game-Theoretic Analysis of Auditing Differentially Private Algorithms with Epistemically Disparate Herd [16.10098472773814]
This study examines the impact of herd audits on algorithm developers using the Stackelberg game approach.
By enhancing transparency and accountability, herd audits contribute to the responsible development of privacy-preserving algorithms.
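A stylized numerical sketch of such a Stackelberg interaction follows; the payoffs, audit costs, and detection model are invented for illustration and are simpler than the paper's actual game:
```python
# Stylized Stackelberg sketch of a herd audit. All payoffs and parameters
# are invented for illustration; the paper's model differs in detail.
# Leader: a developer picks a privacy-violation level v (higher v = more gain).
# Followers: auditors with disparate competence audit only if it pays off.

V_LEVELS = [0.0, 0.5, 1.0]                    # candidate violation levels
HERD = [(0.9, 0.2), (0.5, 0.2), (0.3, 0.2)]   # (competence, audit cost) pairs
FINE, GAIN, REWARD = 3.0, 1.0, 1.0

def p_detect(v: float, competence: float) -> float:
    return v * competence                     # no violation -> nothing to detect

def will_audit(v: float, competence: float, cost: float) -> bool:
    return REWARD * p_detect(v, competence) > cost

def leader_payoff(v: float) -> float:
    p_miss = 1.0                              # probability every auditor misses
    for competence, cost in HERD:
        if will_audit(v, competence, cost):
            p_miss *= 1 - p_detect(v, competence)
    return GAIN * v - FINE * (1 - p_miss)

# The leader anticipates the herd's best responses and optimizes against them.
best_v = max(V_LEVELS, key=leader_payoff)
print(f"Stackelberg-optimal violation level: {best_v}")  # 0.0: the herd deters
```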
arXiv Detail & Related papers (2024-04-24T20:34:27Z)
- Auditing Work: Exploring the New York City algorithmic bias audit regime [0.4580134784455941]
Local Law 144 (LL 144) mandates NYC-based employers using automated employment decision-making tools (AEDTs) in hiring to undergo annual bias audits conducted by an independent auditor.
This paper examines lessons from LL 144 for other national algorithm auditing attempts.
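LL 144 bias audits center on selection rates and impact ratios computed per demographic category. A minimal sketch with fabricated numbers (the four-fifths value is a common benchmark from employment-discrimination practice, not a pass/fail threshold set by the law):
```python
# Impact ratios in the style of an LL 144 bias audit: each category's
# selection rate divided by the highest category's rate. Numbers are made up.
selections = {  # category -> (selected, total applicants)
    "group_a": (120, 400),
    "group_b": (80, 400),
    "group_c": (45, 300),
}

rates = {g: sel / total for g, (sel, total) in selections.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / top_rate
    flag = "  <- below four-fifths benchmark" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{flag}")
```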
arXiv Detail & Related papers (2024-02-12T22:37:15Z)
- The Decisive Power of Indecision: Low-Variance Risk-Limiting Audits and Election Contestation via Marginal Mark Recording [51.82772358241505]
Risk-limiting audits (RLAs) are techniques for verifying the outcomes of large elections.
We define new families of audits that improve efficiency and offer advances in statistical power.
New audits are enabled by revisiting the standard notion of a cast-vote record so that it can declare multiple possible mark interpretations.
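For context, the classic BRAVO-style ballot-polling RLA can be sketched in a few lines; the paper's new audit families revise the cast-vote record itself, which this baseline does not capture:
```python
# Baseline ballot-polling RLA in the BRAVO style (Wald's SPRT). The paper's
# new audit families go beyond this; the sketch also samples without
# replacement for simplicity, whereas BRAVO is specified with replacement.
import math, random

def ballot_polling_rla(winner_share: float, ballots: list[str],
                       risk_limit: float = 0.05) -> int:
    """Sample ballots until H0 (reported winner actually lost) is rejected."""
    threshold = math.log(1 / risk_limit)
    log_ratio = 0.0
    for count, ballot in enumerate(ballots, start=1):
        if ballot == "winner":
            log_ratio += math.log(winner_share / 0.5)
        else:
            log_ratio += math.log((1 - winner_share) / 0.5)
        if log_ratio >= threshold:
            return count          # outcome confirmed at the risk limit
    return -1                     # audit exhausted: escalate to a full hand count

random.seed(0)
population = ["winner" if random.random() < 0.55 else "loser"
              for _ in range(100_000)]
print(ballot_polling_rla(0.55, population))  # ballots sampled before stopping
```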
arXiv Detail & Related papers (2024-02-09T16:23:54Z)
- A Framework for Assurance Audits of Algorithmic Systems [2.2342503377379725]
We propose the "criterion audit" as an operationalizable framework for external compliance and assurance audits.
We argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values.
We conclude by offering a critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing.
arXiv Detail & Related papers (2024-01-26T14:38:54Z)
- The right to audit and power asymmetries in algorithm auditing [68.8204255655161]
We elaborate on the challenges and asymmetries mentioned by Sandvig at IC2S2 2021.
We also contribute a discussion of the asymmetries that were not covered by Sandvig.
We discuss the implications these asymmetries have for algorithm auditing research.
arXiv Detail & Related papers (2023-02-16T13:57:41Z)
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are not tight: they only give tight estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
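A common recipe in this line of work converts a distinguishing attack's true- and false-positive rates on neighboring datasets into an empirical lower bound on epsilon. A minimal sketch with placeholder attack numbers:
```python
# Standard conversion from a distinguishing attack's error rates to an
# empirical lower bound on epsilon; the attack numbers here are placeholders.
import math

def empirical_epsilon(tpr: float, fpr: float, delta: float = 0.0) -> float:
    """Epsilon lower bound implied by an attack with the given TPR/FPR."""
    bounds = [0.0]
    if fpr > 0 and tpr - delta > 0:
        bounds.append(math.log((tpr - delta) / fpr))
    if 1 - tpr > 0 and 1 - fpr - delta > 0:
        bounds.append(math.log((1 - fpr - delta) / (1 - tpr)))
    return max(bounds)

# A real audit would wrap TPR/FPR in confidence intervals (e.g.
# Clopper-Pearson) before claiming a violation of a stated epsilon.
print(f"empirical epsilon >= {empirical_epsilon(tpr=0.65, fpr=0.10):.2f}")
```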
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these curation algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
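One way to make vertical equity concrete (an illustration, not the paper's method) is to compare audit rates across income brackets under candidate selection policies:
```python
# Illustrative vertical-equity check: compare audit rates across income
# brackets under two hypothetical selection policies. Data are fabricated.
taxpayers = [  # (income, audited under policy A, audited under policy B)
    (20_000, True, False), (25_000, True, False), (60_000, False, False),
    (75_000, False, True), (150_000, False, True), (500_000, False, True),
]

def audit_rate(rows, lo, hi, audited):
    bracket = [r for r in rows if lo <= r[0] < hi]
    return sum(audited(r) for r in bracket) / len(bracket) if bracket else 0.0

brackets = [("under $50k", 0, 50_000), ("$50k-$100k", 50_000, 100_000),
            ("$100k and up", 100_000, float("inf"))]
for label, lo, hi in brackets:
    rate_a = audit_rate(taxpayers, lo, hi, lambda r: r[1])
    rate_b = audit_rate(taxpayers, lo, hi, lambda r: r[2])
    print(f"{label}: policy A {rate_a:.0%}, policy B {rate_b:.0%}")
```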
arXiv Detail & Related papers (2022-06-20T16:27:06Z)
- Outsider Oversight: Designing a Third Party Audit Ecosystem for AI Governance [3.8997087223115634]
We discuss the challenges of third party oversight in the current AI landscape.
We show that the institutional design of such audits is far from monolithic.
We conclude that the turn toward audits alone is unlikely to achieve actual algorithmic accountability.
arXiv Detail & Related papers (2022-06-09T19:18:47Z)
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems [76.79703106646967]
We develop a value-based assessment framework that visualizes closeness and tensions between values.
We give guidelines on how to operationalize them, while opening up the evaluation and deliberation process to a wide range of stakeholders.
arXiv Detail & Related papers (2022-05-09T19:28:32Z)