Auditing Work: Exploring the New York City algorithmic bias audit regime
- URL: http://arxiv.org/abs/2402.08101v1
- Date: Mon, 12 Feb 2024 22:37:15 GMT
- Title: Auditing Work: Exploring the New York City algorithmic bias audit regime
- Authors: Lara Groves, Jacob Metcalf, Alayna Kennedy, Briana Vecchione, and
Andrew Strait
- Abstract summary: Local Law 144 (LL 144) mandates NYC-based employers using automated employment decision-making tools (AEDTs) in hiring to undergo annual bias audits conducted by an independent auditor.
This paper examines lessons from LL 144 for other national algorithm auditing attempts.
- Score: 0.4580134784455941
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In July 2023, New York City (NYC) initiated the first algorithm auditing
system for commercial machine-learning systems. Local Law 144 (LL 144) mandates
NYC-based employers using automated employment decision-making tools (AEDTs) in
hiring to undergo annual bias audits conducted by an independent auditor. This
paper examines lessons from LL 144 for other national algorithm auditing
attempts. Through qualitative interviews with 16 experts and practitioners
within the regime, we find that LL 144 has not effectively established an
auditing regime. The law fails to clearly define key aspects, such as AEDTs and
independent auditors, leading auditors, AEDT vendors, and companies using AEDTs
to define the law's practical implementation in ways that failed to protect job
applicants. Contributing factors include the law's flawed transparency-driven
theory of change, industry lobbying narrowing the definition of AEDTs,
practical and cultural challenges faced by auditors in accessing data, and wide
disagreement over what constitutes a legitimate auditor, resulting in four
distinct 'auditor roles.' We conclude with four recommendations for
policymakers seeking to create similar bias auditing regimes, emphasizing
clearer definitions, metrics, and increased accountability. By exploring LL 144
through the lens of auditors, our paper advances the evidence base around audit
as an accountability mechanism, providing guidance for policymakers seeking to
create similar regimes.
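For concreteness, the rules implementing LL 144 center bias audits on an "impact ratio": each demographic category's selection rate divided by the selection rate of the most-selected category. A minimal Python sketch of that calculation follows; the group names and counts are invented for illustration, and LL 144 requires reporting the ratio rather than applying a pass/fail threshold.

```python
# Minimal sketch of the "impact ratio" metric reported in LL 144 bias
# audits: each group's selection rate relative to the most-selected
# group's rate. All counts below are hypothetical.

def impact_ratios(selected_by_group: dict[str, int],
                  total_by_group: dict[str, int]) -> dict[str, float]:
    """Return each group's selection rate divided by the best-off group's rate."""
    rates = {g: selected_by_group[g] / total_by_group[g]
             for g in total_by_group}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

if __name__ == "__main__":
    selected = {"group_a": 120, "group_b": 75}   # hypothetical counts
    totals = {"group_a": 400, "group_b": 350}
    for group, ratio in impact_ratios(selected, totals).items():
        # EEOC's four-fifths rule of thumb flags ratios below 0.8;
        # LL 144 itself mandates disclosure of the ratio, not a verdict.
        print(f"{group}: impact ratio = {ratio:.2f}")
```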
Related papers
- Usefulness of LLMs as an Author Checklist Assistant for Scientific Papers: NeurIPS'24 Experiment [59.09144776166979]
Large language models (LLMs) represent a promising, but controversial, tool in aiding scientific peer review.
This study evaluates the usefulness of LLMs in a conference setting as a tool for vetting paper submissions against submission standards.
arXiv Detail & Related papers (2024-11-05T18:58:00Z)
- From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing [1.196505602609637]
Audits can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing.
There are many operational challenges to AI auditing that complicate its implementation.
We argue that auditing can be cast as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and argue that this framing provides clear and interpretable guidance on audit implementation.
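To make the hypothesis-testing framing concrete, a toy sketch: the null hypothesis is that two applicant groups are selected at the same rate, and the auditor rejects it only at a pre-committed significance level. The counts are invented, and Fisher's exact test is one convenient choice here, not a procedure prescribed by the paper.

```python
# Toy audit-as-hypothesis-test: H0 says both groups share one selection
# rate; the audit rejects H0 only below a pre-chosen alpha.
from scipy.stats import fisher_exact

selected = {"group_a": 48, "group_b": 25}      # hypothetical outcomes
rejected = {"group_a": 152, "group_b": 175}

table = [[selected["group_a"], rejected["group_a"]],
         [selected["group_b"], rejected["group_b"]]]
_, p_value = fisher_exact(table, alternative="two-sided")

alpha = 0.05  # the "burden of proof" the auditor commits to in advance
if p_value < alpha:
    verdict = "reject H0 (evidence of disparate selection)"
else:
    verdict = "fail to reject H0"
print(f"p = {p_value:.4f} -> {verdict}")
```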
arXiv Detail & Related papers (2024-10-07T06:15:46Z)
- RegNLP in Action: Facilitating Compliance Through Automated Information Retrieval and Answer Generation [51.998738311700095]
Regulatory documents, characterized by their length, complexity and frequent updates, are challenging to interpret.
RegNLP is a multidisciplinary subfield aimed at simplifying access to and interpretation of regulatory rules and obligations.
The ObliQA dataset contains 27,869 questions derived from the Abu Dhabi Global Markets (ADGM) financial regulation document collection.
arXiv Detail & Related papers (2024-09-09T14:44:19Z)
- Analysis of the ICML 2023 Ranking Data: Can Authors' Opinions of Their Own Papers Assist Peer Review in Machine Learning? [52.00419656272129]
We conducted an experiment during the 2023 International Conference on Machine Learning (ICML).
We received 1,342 rankings, each from a distinct author, pertaining to 2,592 submissions.
We focus on the Isotonic Mechanism, which calibrates raw review scores using author-provided rankings.
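The Isotonic Mechanism adjusts an author's raw review scores so they are monotone in the author's self-reported ranking while moving the scores as little as possible in squared error. A minimal sketch of that idea, with invented scores and using scikit-learn's isotonic regression:

```python
# Sketch of the Isotonic Mechanism idea: project raw review scores onto
# the set of score vectors consistent with the author's ranking (least
# squares). The scores and ranking below are invented.
import numpy as np
from sklearn.isotonic import IsotonicRegression

raw_scores = np.array([5.5, 6.8, 4.9, 6.1])   # hypothetical review scores
author_rank = np.array([1, 0, 3, 2])          # 0 = author's top choice

order = np.argsort(author_rank)                # papers from best to worst
iso = IsotonicRegression(increasing=False)     # calibrated scores must not rise
calibrated = np.empty_like(raw_scores)
calibrated[order] = iso.fit_transform(np.arange(len(order)),
                                      raw_scores[order])

print(calibrated)  # scores nudged to respect the author's ranking
```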
arXiv Detail & Related papers (2024-06-03T15:01:20Z)
- Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability [0.7684035229968342]
In July 2023, New York City became the first jurisdiction globally to mandate bias audits for commercial algorithmic systems.
LL 144 requires AEDTs to be independently audited annually for race and gender bias.
In this study, 155 student investigators recorded 391 employers' compliance with LL 144 and the user experience for prospective job applicants.
arXiv Detail & Related papers (2024-04-24T20:34:27Z)
- A Game-Theoretic Analysis of Auditing Differentially Private Algorithms with Epistemically Disparate Herd [16.10098472773814]
This study examines the impact of herd audits on algorithm developers using the Stackelberg game approach.
By enhancing transparency and accountability, herd audits contribute to the responsible development of privacy-preserving algorithms.
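As a rough illustration of the Stackelberg setup (leader moves first, follower best-responds), the toy below solves a two-action game by backward induction. The payoffs are invented and stand in for the paper's richer model of developer effort versus herd auditing:

```python
# Toy Stackelberg game (invented payoffs, not the paper's model):
# a developer (leader) picks a compliance effort; a herd of auditors
# (follower) observes it and best-responds. Solve by backward induction.

# payoffs[leader_action][follower_action] = (leader_utility, follower_utility)
payoffs = {
    "low_effort":  {"audit": (-4, 2), "trust": (5, -3)},
    "high_effort": {"audit": (2, 1),  "trust": (3, 1)},
}

def solve_stackelberg(payoffs):
    best = None
    for leader_action, row in payoffs.items():
        # follower best-responds to the observed leader action
        follower_action = max(row, key=lambda a: row[a][1])
        leader_utility = row[follower_action][0]
        if best is None or leader_utility > best[2]:
            best = (leader_action, follower_action, leader_utility)
    return best

leader, follower, utility = solve_stackelberg(payoffs)
print(f"leader: {leader}, follower: {follower}, leader utility: {utility}")
```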
arXiv Detail & Related papers (2024-02-09T16:23:54Z)
- The Decisive Power of Indecision: Low-Variance Risk-Limiting Audits and Election Contestation via Marginal Mark Recording [51.82772358241505]
Risk-limiting audits (RLAs) are techniques for verifying the outcomes of large elections.
We define new families of audits that improve efficiency and offer advances in statistical power.
New audits are enabled by revisiting the standard notion of a cast-vote record so that it can declare multiple possible mark interpretations.
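For orientation, a compact sketch of a standard ballot-polling RLA in the BRAVO style is shown below; the paper's new audit families extend beyond this baseline. The ballot sample here is simulated rather than drawn from real cast-vote records.

```python
# Ballot-polling RLA sketch in the BRAVO style: update a likelihood
# ratio per sampled ballot and stop once the risk limit is satisfied.
import random

def bravo_audit(sample, reported_winner_share, risk_limit=0.05):
    """Sequentially test the reported outcome; stop at the risk limit."""
    ratio = 1.0
    for i, ballot_for_winner in enumerate(sample, start=1):
        if ballot_for_winner:
            ratio *= reported_winner_share / 0.5
        else:
            ratio *= (1 - reported_winner_share) / 0.5
        if ratio >= 1 / risk_limit:
            return f"outcome confirmed after {i} ballots"
    return "risk limit not met: escalate toward a full hand count"

random.seed(0)
true_winner_share = 0.58   # hypothetical electorate
sample = [random.random() < true_winner_share for _ in range(2000)]
print(bravo_audit(sample, reported_winner_share=0.58))
```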
arXiv Detail & Related papers (2024-02-09T16:23:54Z)
- A Framework for Assurance Audits of Algorithmic Systems [2.2342503377379725]
We propose the "criterion audit" as an operationalizable external audit framework for compliance and assurance.
We argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values.
We conclude by offering a critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing.
arXiv Detail & Related papers (2024-01-26T14:38:54Z)
- Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem [0.971392598996499]
We provide the first comprehensive field scan of the AI audit ecosystem.
We identify emerging best practices as well as methods and tools that are becoming commonplace.
We outline policy recommendations to improve the quality and impact of these audits.
arXiv Detail & Related papers (2023-10-04T01:40:03Z)
- Multi-Scenario Empirical Assessment of Agile Governance Theory: A Technical Report [55.2480439325792]
Agile Governance Theory (AGT) has emerged as a potential model for organizational chains of responsibility across business units and teams.
This study aims to assess how AGT is reflected in practice.
arXiv Detail & Related papers (2023-07-03T18:50:36Z)
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.