What we learned while automating bias detection in AI hiring systems for compliance with NYC Local Law 144
- URL: http://arxiv.org/abs/2501.10371v1
- Date: Fri, 13 Dec 2024 14:14:26 GMT
- Title: What we learned while automating bias detection in AI hiring systems for compliance with NYC Local Law 144
- Authors: Gemma Galdon Clavell, Rubén González-Sendino
- Abstract summary: New York City's Local Law 144 requires employers to conduct independent bias audits for any automated employment decision tools (AEDTs) used in hiring processes. The law outlines a minimum set of bias tests that AI developers and implementers must perform to ensure compliance. We have collected and analyzed audits conducted under this law, identified best practices, and developed a software tool to streamline employer compliance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since July 5, 2023, New York City's Local Law 144 requires employers to conduct independent bias audits for any automated employment decision tools (AEDTs) used in hiring processes. The law outlines a minimum set of bias tests that AI developers and implementers must perform to ensure compliance. Over the past few months, we have collected and analyzed audits conducted under this law, identified best practices, and developed a software tool to streamline employer compliance. Our tool, ITACA_144, tailors our broader bias auditing framework to meet the specific requirements of Local Law 144. While automating these legal mandates, we identified several critical challenges that merit attention to ensure AI bias regulations and audit methodologies are both effective and practical. This document presents the insights gained from automating compliance with NYC Local Law 144. It aims to support other cities and states in crafting similar legislation while addressing the limitations of the NYC framework. The discussion focuses on key areas including data requirements, demographic inclusiveness, impact ratios, effective bias metrics, and data reliability.
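The impact ratios the abstract mentions are the core quantity a Local Law 144 audit reports: for a selection-based AEDT, each demographic category's selection rate is divided by the highest category selection rate. A minimal sketch of that calculation follows; the category names and counts are illustrative, not from the paper.

```python
# Minimal sketch of the LL 144 impact-ratio computation for a
# selection-based AEDT. Group labels and counts are hypothetical.

def impact_ratios(selected, assessed):
    """selected/assessed: dicts mapping demographic category -> counts.

    Returns each category's selection rate divided by the highest
    category selection rate (the LL 144 impact ratio).
    """
    rates = {g: selected[g] / assessed[g] for g in assessed}
    highest = max(rates.values())
    return {g: r / highest for g, r in rates.items()}

assessed = {"group_a": 200, "group_b": 150, "group_c": 50}
hired = {"group_a": 60, "group_b": 30, "group_c": 5}

ratios = impact_ratios(hired, assessed)
# Selection rates: 0.30, 0.20, 0.10 -> impact ratios 1.00, ~0.67, ~0.33
```

A low ratio (for instance, below the familiar four-fifths benchmark) flags a category for scrutiny, though LL 144 itself mandates disclosure of the ratios rather than a pass/fail threshold.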
Related papers
- Can LLMs Automate Fact-Checking Article Writing? [69.90165567819656]
We argue for the need to extend the typical automatic fact-checking pipeline with automatic generation of full fact-checking articles.
We develop QRAFT, an LLM-based agentic framework that mimics the writing workflow of human fact-checkers.
arXiv Detail & Related papers (2025-03-22T07:56:50Z) - Usefulness of LLMs as an Author Checklist Assistant for Scientific Papers: NeurIPS'24 Experiment [59.09144776166979]
Large language models (LLMs) represent a promising, but controversial, tool in aiding scientific peer review.
This study evaluates the usefulness of LLMs in a conference setting as a tool for vetting paper submissions against submission standards.
arXiv Detail & Related papers (2024-11-05T18:58:00Z) - RIRAG: Regulatory Information Retrieval and Answer Generation [51.998738311700095]
We introduce a task of generating question-passage pairs, where questions are automatically created and paired with relevant regulatory passages. We create the ObliQA dataset, containing 27,869 questions derived from the collection of Abu Dhabi Global Markets (ADGM) financial regulation documents. We design a baseline Regulatory Information Retrieval and Answer Generation (RIRAG) system and evaluate it with RePASs, a novel evaluation metric.
arXiv Detail & Related papers (2024-09-09T14:44:19Z) - Null Compliance: NYC Local Law 144 and the Challenges of Algorithm Accountability [0.7684035229968342]
In July 2023, New York City became the first jurisdiction globally to mandate bias audits for commercial algorithmic systems.
LL 144 requires AEDTs to be independently audited annually for race and gender bias.
In this study, 155 student investigators recorded 391 employers' compliance with LL 144 and the user experience for prospective job applicants.
arXiv Detail & Related papers (2024-06-03T15:01:20Z) - Rethinking Legal Compliance Automation: Opportunities with Large Language Models [2.9088208525097365]
We argue that the examination of (textual) legal artifacts should first employ a broader context than individual sentences.
We present a compliance analysis approach designed to address these limitations.
arXiv Detail & Related papers (2024-04-22T17:10:27Z) - Auditing Work: Exploring the New York City algorithmic bias audit regime [0.4580134784455941]
Local Law 144 (LL 144) mandates NYC-based employers using automated employment decision-making tools (AEDTs) in hiring to undergo annual bias audits conducted by an independent auditor.
This paper examines lessons from LL 144 for other national algorithm auditing attempts.
arXiv Detail & Related papers (2024-02-12T22:37:15Z) - A Framework for Assurance Audits of Algorithmic Systems [2.2342503377379725]
We propose the criterion audit as an operationalizable compliance and assurance external audit framework.
We argue that AI audits should similarly provide assurance to their stakeholders about AI organizations' ability to govern their algorithms in ways that mitigate harms and uphold human values.
We conclude by offering a critical discussion on the benefits, inherent limitations, and implementation challenges of applying practices of the more mature financial auditing industry to AI auditing.
arXiv Detail & Related papers (2024-01-26T14:38:54Z) - The Ethics of Automating Legal Actors [58.81546227716182]
We argue that automating the role of the judge raises difficult ethical challenges, in particular for common law legal systems.
Our argument follows from the social role of the judge in actively shaping the law, rather than merely applying it.
Even if the models could achieve human-level capabilities, ethical concerns inherent in the automation of the legal process would remain.
arXiv Detail & Related papers (2023-12-01T13:48:46Z) - Local Law 144: A Critical Analysis of Regression Metrics [0.0]
In November 2021, the New York City Council passed legislation mandating bias audits of automated employment decision tools.
Since April 15, 2023, companies that use automated tools for hiring or promoting employees have been required to have these systems audited.
We argue that both mandated metrics fail to capture distributional differences over the whole domain, and therefore cannot reliably detect bias.
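The distributional blind spot criticized here can be illustrated with LL 144's scoring-rate metric for scoring tools (the share of a category scoring above the pooled sample median): two groups can have identical scoring rates, and hence an impact ratio of 1.0, while their score distributions differ sharply. The scores below are hypothetical, chosen only to exhibit the effect.

```python
# Illustrative sketch (hypothetical scores): equal above-median scoring
# rates, and thus an LL 144 impact ratio of 1.0, despite very different
# score distributions.
import statistics

group_a = [49, 49, 51, 51]   # tightly clustered around the median
group_b = [10, 20, 80, 90]   # widely spread out

median = statistics.median(group_a + group_b)  # pooled median = 50

def scoring_rate(scores):
    """Fraction of scores strictly above the pooled median."""
    return sum(s > median for s in scores) / len(scores)

print(scoring_rate(group_a), scoring_rate(group_b))  # both 0.5
```

Because the metric collapses each distribution to a single above-median proportion, it is insensitive to differences in spread, skew, or tail behavior, which is the essence of the critique.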
arXiv Detail & Related papers (2023-02-08T15:21:14Z) - An Uncommon Task: Participatory Design in Legal AI [64.54460979588075]
We examine a notable yet understudied AI design process in the legal domain that took place over a decade ago.
We show how an interactive simulation methodology allowed computer scientists and lawyers to become co-designers.
arXiv Detail & Related papers (2022-03-08T15:46:52Z) - How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence [81.04070052740596]
Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.
This paper introduces the history, the current state, and the future directions of research in LegalAI.
arXiv Detail & Related papers (2020-04-25T14:45:15Z)