Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability
- URL: http://arxiv.org/abs/2406.01399v1
- Date: Mon, 3 Jun 2024 15:01:20 GMT
- Authors: Lucas Wright, Roxana Mike Muenster, Briana Vecchione, Tianyao Qu, Pika Cai, COMM/INFO 2450 Student Investigators, Jacob Metcalf, J. Nathan Matias
- Abstract summary: In July 2023, New York City became the first jurisdiction globally to mandate bias audits for commercial algorithmic systems.
LL 144 requires automated employment decision tools (AEDTs) to be independently audited annually for race and gender bias.
In this study, 155 student investigators recorded 391 employers' compliance with LL 144 and the user experience for prospective job applicants.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In July 2023, New York City became the first jurisdiction globally to mandate bias audits for commercial algorithmic systems, specifically for automated employment decision tools (AEDTs) used in hiring and promotion. Local Law 144 (LL 144) requires AEDTs to be independently audited annually for race and gender bias, and the audit report must be publicly posted. Additionally, employers are obligated to post a transparency notice with the job listing. In this study, 155 student investigators recorded 391 employers' compliance with LL 144 and the user experience for prospective job applicants. Among these employers, 18 posted audit reports and 13 posted transparency notices. These rates could potentially be explained by a significant limitation in the accountability mechanisms enacted by LL 144. Since the law grants employers substantial discretion over whether their system is in scope of the law, a null result cannot be said to indicate non-compliance, a condition we call "null compliance." Employer discretion may also explain our finding that nearly all audits reported an impact ratio over 0.8, the four-fifths rule of thumb often used in employment discrimination cases. We also find that the benefit of LL 144 to ordinary job seekers is limited due to shortcomings in accessibility and usability. Our findings offer important lessons for policy-makers as they consider regulating algorithmic systems, particularly the degree of discretion to grant to regulated parties and the limitations of relying on transparency and end-user accountability.
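The 0.8 figure refers to the four-fifths rule of thumb: each group's selection rate is divided by the selection rate of the most-selected group, and an impact ratio below 0.8 is treated as a signal of potential adverse impact. Below is a minimal Python sketch of that arithmetic; the group labels and selection rates are hypothetical illustrations, not data from the study.

```python
# Minimal sketch of the impact-ratio arithmetic behind the four-fifths
# rule of thumb. The groups and selection rates below are hypothetical.

def impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Divide each group's selection rate by the highest group's rate.

    Ratios below 0.8 are conventionally flagged as potential adverse
    impact under the four-fifths rule.
    """
    best = max(selection_rates.values())
    return {group: rate / best for group, rate in selection_rates.items()}

# Hypothetical selection rates (candidates selected / candidates assessed).
rates = {"group_a": 0.30, "group_b": 0.27, "group_c": 0.21}

for group, ratio in sorted(impact_ratios(rates).items()):
    flag = "below 0.8 threshold" if ratio < 0.8 else "above 0.8 threshold"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

With these illustrative numbers, group_c's ratio of 0.70 falls below the threshold; the study's finding is that the posted audit reports almost never contained such sub-0.8 ratios.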
Related papers
- Usefulness of LLMs as an Author Checklist Assistant for Scientific Papers: NeurIPS'24 Experiment
Large language models (LLMs) represent a promising, but controversial, tool in aiding scientific peer review.
This study evaluates the usefulness of LLMs in a conference setting as a tool for vetting paper submissions against submission standards.
arXiv Detail & Related papers (2024-11-05T18:58:00Z)
- From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing
Audits can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing.
There are many operational challenges to AI auditing that complicate its implementation.
We argue that auditing can be cast as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and argue that this framing provides clear and interpretable guidance on audit implementation (a minimal sketch of this framing appears after this list).
arXiv Detail & Related papers (2024-10-07T06:15:46Z)
- RegNLP in Action: Facilitating Compliance Through Automated Information Retrieval and Answer Generation
Regulatory documents, characterized by their length, complexity and frequent updates, are challenging to interpret.
RegNLP is a multidisciplinary subfield aimed at simplifying access to and interpretation of regulatory rules and obligations.
The ObliQA dataset contains 27,869 questions derived from the Abu Dhabi Global Markets (ADGM) financial regulation document collection.
arXiv Detail & Related papers (2024-09-09T14:44:19Z)
- Peer-induced Fairness: A Causal Approach for Algorithmic Fairness Auditing
The European Union's Artificial Intelligence Act takes effect on 1 August 2024.
High-risk AI applications must adhere to stringent transparency and fairness standards.
We propose a novel framework that combines the strengths of counterfactual fairness and a peer-comparison strategy.
arXiv Detail & Related papers (2024-08-05T15:35:34Z)
- Auditing Work: Exploring the New York City algorithmic bias audit regime
Local Law 144 (LL 144) mandates NYC-based employers using automated employment decision-making tools (AEDTs) in hiring to undergo annual bias audits conducted by an independent auditor.
This paper examines lessons from LL 144 for other national algorithm auditing attempts.
arXiv Detail & Related papers (2024-02-12T22:37:15Z)
- The Decisive Power of Indecision: Low-Variance Risk-Limiting Audits and Election Contestation via Marginal Mark Recording
Risk-limiting audits (RLAs) are techniques for verifying the outcomes of large elections.
We define new families of audits that improve efficiency and offer advances in statistical power.
New audits are enabled by revisiting the standard notion of a cast-vote record so that it can declare multiple possible mark interpretations.
arXiv Detail & Related papers (2024-02-09T16:23:54Z)
- A Framework for Assurance Audits of Algorithmic Systems
We propose the criterion audit as an operationalizable compliance and assurance external audit framework.
We argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values.
We conclude by offering a critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing.
arXiv Detail & Related papers (2024-01-26T14:38:54Z)
- Who Audits the Auditors? Recommendations from a field scan of the algorithmic auditing ecosystem
We provide the first comprehensive field scan of the AI audit ecosystem.
We identify emerging best practices as well as methods and tools that are becoming commonplace.
We outline policy recommendations to improve the quality and impact of these audits.
arXiv Detail & Related papers (2023-10-04T01:40:03Z)
- Incentive-Theoretic Bayesian Inference for Collaborative Science
We study hypothesis testing when there is an agent with a private prior about an unknown parameter.
We show how the principal can conduct statistical inference that leverages the information that is revealed by an agent's strategic behavior.
arXiv Detail & Related papers (2023-07-07T17:59:01Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z)
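One of the entries above (From Transparency to Accountability and Back) casts an audit as a hypothesis test. The sketch below illustrates that framing with a one-sided binomial test; the null hypothesis, the numbers, and the choice of test are assumptions made for illustration and may differ from the cited paper's actual formulation.

```python
# Illustrative "audit as hypothesis test": a one-sided binomial test.
# All numbers are hypothetical; the cited paper's formulation may differ.
from math import comb

def binomial_p_value(k: int, n: int, p0: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p0): the one-sided p-value for
    H0 "selection rate >= p0" against "selection rate < p0"."""
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k + 1))

# H0: the tool selects protected-group applicants at rate at least p0.
# The auditor observes k selections among n protected-group applicants.
p0, n, k = 0.25, 200, 35
p_value = binomial_p_value(k, n, p0)
print(f"observed {k}/{n}, expected {p0 * n:.0f}, p-value = {p_value:.4f}")
if p_value < 0.05:
    print("reject H0: evidence of under-selection at the 5% level")
```

In this framing, the audit's conclusion turns on whether the observed evidence is strong enough to reject a presumption of compliance, which is the parallel to legal procedure that the entry's summary draws.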
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.