Why Fair Automated Hiring Systems Breach EU Non-Discrimination Law
- URL: http://arxiv.org/abs/2311.03900v1
- Date: Tue, 7 Nov 2023 11:31:00 GMT
- Title: Why Fair Automated Hiring Systems Breach EU Non-Discrimination Law
- Authors: Robert Lee Poe
- Abstract summary: Employment selection processes that use automated hiring systems based on machine learning are becoming increasingly commonplace.
Algorithmic fairness and algorithmic non-discrimination are not the same.
This article examines a conflict between the two: whether such hiring systems are compliant with EU non-discrimination law.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Employment selection processes that use automated hiring systems based on
machine learning are becoming increasingly commonplace. Meanwhile, concerns
about algorithmic direct and indirect discrimination that results from such
systems are front and center, and the technical solutions provided by the
research community often systematically deviate from the principle of equal
treatment to combat disparate or adverse impacts on groups based on protected
attributes. Those technical solutions are now being used in commercially
available automated hiring systems, potentially engaging in real-world
discrimination. Algorithmic fairness and algorithmic non-discrimination are not
the same. This article examines a conflict between the two: whether such hiring
systems are compliant with EU non-discrimination law.
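To make the conflict concrete, here is a minimal sketch (not from the paper; the applicant pool, the 0.7 cutoff, and all names are illustrative assumptions): a score-based screener that enforces demographic parity through group-specific cutoffs must, by construction, treat identically scored candidates differently on the basis of a protected attribute, which is the deviation from equal treatment at issue.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical applicant pool: a binary protected attribute and scores
    # whose distribution differs by group (the disparate-impact scenario
    # that fairness interventions target).
    group = rng.integers(0, 2, size=n)
    scores = np.where(group == 0,
                      rng.uniform(0.2, 1.0, size=n),
                      rng.uniform(0.0, 0.8, size=n))

    # Equal treatment: one cutoff applied identically to every candidate.
    hired_uniform = scores >= 0.7

    # Demographic-parity post-processing: per-group cutoffs chosen so that
    # both groups are selected at the same overall rate.
    target_rate = hired_uniform.mean()
    cutoffs = {g: np.quantile(scores[group == g], 1 - target_rate)
               for g in (0, 1)}

    # The legally salient consequence: candidates with the same score can
    # now receive different outcomes based only on the protected attribute.
    borderline = 0.7
    print({g: bool(borderline >= cutoffs[g]) for g in (0, 1)})
    # Typically prints {0: False, 1: True} for this setup: rejected in one
    # group, hired in the other, at the same score.

Whether such group-specific treatment can be justified is precisely the compliance question the article poses against the EU non-discrimination framework.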
Related papers
- Formalising Anti-Discrimination Law in Automated Decision Systems [1.560976479364936]
We study the legal challenges in automated decision-making by analysing conventional algorithmic fairness approaches.
By translating principles of anti-discrimination law into a decision-theoretic framework, we formalise discrimination.
We propose a new, legally informed approach to developing systems for automated decision-making.
arXiv Detail & Related papers (2024-06-29T10:59:21Z)
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
- Unlawful Proxy Discrimination: A Framework for Challenging Inherently Discriminatory Algorithms [4.1221687771754]
The EU legal concept of direct discrimination may apply to various algorithmic decision-making contexts.
Unlike indirect discrimination, there is generally no 'objective justification' stage in the direct discrimination framework.
We focus on the most likely candidate for direct discrimination in the algorithmic context.
arXiv Detail & Related papers (2024-04-22T10:06:17Z)
- Fairness and Bias in Algorithmic Hiring: a Multidisciplinary Survey [43.463169774689646]
This survey caters to practitioners and researchers with a balanced and integrated coverage of systems, biases, measures, mitigation strategies, datasets, and legal aspects of algorithmic hiring and fairness.
Our work supports a contextualized understanding and governance of this technology by highlighting current opportunities and limitations and by providing recommendations for future work to ensure shared benefits for all stakeholders.
arXiv Detail & Related papers (2023-09-25T08:04:18Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a general consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Tackling Algorithmic Disability Discrimination in the Hiring Process: An Ethical, Legal and Technical Analysis [2.294014185517203]
We discuss concerns and opportunities raised by AI-driven hiring in relation to disability discrimination.
We establish starting points and design a roadmap for ethicists, lawmakers, and advocates, as well as for AI practitioners.
arXiv Detail & Related papers (2022-06-13T13:32:37Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment [79.23531577235887]
This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data.
Additionally, the demo includes a new algorithm for discrimination-aware learning that eliminates sensitive information in our multimodal AI framework.
arXiv Detail & Related papers (2020-09-12T17:45:09Z)
- Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI [10.281644134255576]
The article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness.
We show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminates.
We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement (a short computation sketch follows this list).
arXiv Detail & Related papers (2020-05-12T16:30:12Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
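The conditional demographic disparity measure referenced above lends itself to a short computation. The sketch below is one plausible reading of the idea rather than the paper's exact definition (the record fields, the "experience" stratum, and the data are hypothetical): demographic disparity, here a group's share of rejections minus its share of acceptances, is computed within each stratum of a legitimate conditioning attribute and then averaged with weights proportional to stratum size.

    from collections import defaultdict

    def demographic_disparity(records, group):
        # A group's share among rejected candidates minus its share among
        # accepted candidates; positive values disadvantage the group.
        rejected = [r for r in records if not r["accepted"]]
        accepted = [r for r in records if r["accepted"]]
        if not rejected or not accepted:
            return 0.0
        share_rej = sum(r["group"] == group for r in rejected) / len(rejected)
        share_acc = sum(r["group"] == group for r in accepted) / len(accepted)
        return share_rej - share_acc

    def conditional_demographic_disparity(records, group, stratum_key):
        # Per-stratum disparity, averaged with weights proportional to
        # stratum size.
        strata = defaultdict(list)
        for r in records:
            strata[r[stratum_key]].append(r)
        n = len(records)
        return sum(len(rs) / n * demographic_disparity(rs, group)
                   for rs in strata.values())

    # Hypothetical hiring records with a legitimate conditioning attribute.
    records = [
        {"accepted": True,  "group": "A", "experience": "senior"},
        {"accepted": True,  "group": "A", "experience": "senior"},
        {"accepted": False, "group": "B", "experience": "senior"},
        {"accepted": True,  "group": "B", "experience": "junior"},
        {"accepted": False, "group": "A", "experience": "junior"},
        {"accepted": False, "group": "B", "experience": "junior"},
    ]
    print(conditional_demographic_disparity(records, "B", "experience"))

Conditioning on a legitimate factor is what distinguishes CDD from an unconditional parity check: a raw disparity may shrink, or reverse, once candidates are compared within like-for-like strata.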