Auditing Work: Exploring the New York City algorithmic bias audit regime
- URL: http://arxiv.org/abs/2402.08101v1
- Date: Mon, 12 Feb 2024 22:37:15 GMT
- Authors: Lara Groves, Jacob Metcalf, Alayna Kennedy, Briana Vecchione, and
Andrew Strait
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In July 2023, New York City (NYC) initiated the first algorithm auditing
system for commercial machine-learning systems. Local Law 144 (LL 144) mandates
NYC-based employers using automated employment decision-making tools (AEDTs) in
hiring to undergo annual bias audits conducted by an independent auditor. This
paper examines lessons from LL 144 for other national algorithm auditing
attempts. Through qualitative interviews with 16 experts and practitioners
within the regime, we find that LL 144 has not effectively established an
auditing regime. The law fails to clearly define key aspects, such as AEDTs and
independent auditors, leading auditors, AEDT vendors, and companies using AEDTs
to define the law's practical implementation in ways that fail to protect job
applicants. Contributing factors include the law's flawed transparency-driven
theory of change, industry lobbying narrowing the definition of AEDTs,
practical and cultural challenges faced by auditors in accessing data, and wide
disagreement over what constitutes a legitimate auditor, resulting in four
distinct 'auditor roles.' We conclude with four recommendations for
policymakers seeking to create similar bias auditing regimes, emphasizing
clearer definitions, metrics, and increased accountability. By exploring LL 144
through the lens of auditors, our paper advances the evidence base around audit
as an accountability mechanism, providing guidance for policymakers seeking to
create similar regimes.
Related papers
- Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability (arXiv, 2024-06-03)
  In July 2023, New York City became the first jurisdiction globally to mandate bias audits for commercial algorithmic systems. LL 144 requires AEDTs to be independently audited annually for race and gender bias. In this study, 155 student investigators recorded 391 employers' compliance with LL 144 and the user experience for prospective job applicants.
- Pragmatic auditing: a pilot-driven approach for auditing Machine Learning systems (arXiv, 2024-05-21)
  We present an audit procedure that extends the AI-HLEG guidelines published by the European Commission. The procedure is based on an ML lifecycle model that explicitly focuses on documentation, accountability, and quality assurance. We describe two pilots conducted on real-world use cases from two different organisations.
- A Game-Theoretic Analysis of Auditing Differentially Private Algorithms with Epistemically Disparate Herd (arXiv, 2024-04-24)
  This study examines the impact of herd audits on algorithm developers using a Stackelberg game approach. By enhancing transparency and accountability, herd audits contribute to the responsible development of privacy-preserving algorithms.
- The Decisive Power of Indecision: Low-Variance Risk-Limiting Audits and Election Contestation via Marginal Mark Recording (arXiv, 2024-02-09)
  Risk-limiting audits (RLAs) are techniques for verifying the outcomes of large elections. We define new families of audits that improve efficiency and offer advances in statistical power. The new audits are enabled by revisiting the standard notion of a cast-vote record so that it can declare multiple possible mark interpretations.
- A Framework for Assurance Audits of Algorithmic Systems (arXiv, 2024-01-26)
  We propose the criterion audit as an operationalizable framework for external compliance and assurance audits. We argue that AI audits should, like financial audits, provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values. We conclude with a critical discussion of the benefits, inherent limitations, and implementation challenges of applying practices from the more mature financial auditing industry to AI auditing.
- Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem (arXiv, 2023-10-04)
  We provide the first comprehensive field scan of the AI audit ecosystem. We identify emerging best practices as well as methods and tools that are becoming commonplace, and outline policy recommendations to improve the quality and impact of these audits.
- Multi-Scenario Empirical Assessment of Agile Governance Theory: A Technical Report (arXiv, 2023-07-03)
  Agile Governance Theory (AGT) has emerged as a potential model for organizational chains of responsibility across business units and teams. This study aims to assess how AGT is reflected in practice.
- The right to audit and power asymmetries in algorithm auditing (arXiv, 2023-02-16)
  We elaborate on the challenges and asymmetries raised by Sandvig at IC2S2 2021, contribute a discussion of asymmetries that Sandvig did not cover, and discuss the implications these asymmetries have for algorithm auditing research.
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest (arXiv, 2022-07-18)
  Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse. Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes. We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models (arXiv, 2022-06-20)
  This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS. We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity. Our results have implications for the design of algorithmic tools across the public sector.
- Towards a multi-stakeholder value-based assessment framework for algorithmic systems (arXiv, 2022-05-09)
  We develop a value-based assessment framework that visualizes closeness and tensions between values, and give guidelines on how to operationalize them while opening up the evaluation and deliberation process to a wide range of stakeholders.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.