Protected Grounds and the System of Non-Discrimination Law in the Context of Algorithmic Decision-Making and Artificial Intelligence
- URL: http://arxiv.org/abs/2509.08837v1
- Date: Mon, 01 Sep 2025 15:21:12 GMT
- Title: Protected Grounds and the System of Non-Discrimination Law in the Context of Algorithmic Decision-Making and Artificial Intelligence
- Authors: Janneke Gerards, Frederik Zuiderveen Borgesius
- Abstract summary: This paper explores which system of non-discrimination law can best be applied to algorithmic decision-making. The paper analyses the current loopholes in the protection offered by non-discrimination law and explores the best way for lawmakers to approach algorithmic differentiation.
- Score: 0.1915265522996079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic decision-making and similar types of artificial intelligence (AI) may lead to improvements in all sectors of society, but can also have discriminatory effects. While current non-discrimination law offers people some protection, algorithmic decision-making presents the law with several challenges. For instance, algorithms can generate new categories of people based on seemingly innocuous characteristics, such as web browser preference or apartment number, or more complicated categories combining many data points. Such new types of differentiation could evade non-discrimination law, as browser type and house number are not protected characteristics, but such differentiation could still be unfair, for instance if it reinforces social inequality. This paper explores which system of non-discrimination law can best be applied to algorithmic decision-making, considering that algorithms can differentiate on the basis of characteristics that do not correlate with protected grounds of discrimination such as ethnicity or gender. The paper analyses the current loopholes in the protection offered by non-discrimination law and explores the best way for lawmakers to approach algorithmic differentiation. While we focus on Europe, the conceptual and theoretical focus of the paper can make it useful for scholars and policymakers from other regions too, as they encounter similar problems with algorithmic decision-making.
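The abstract's central mechanism, differentiation on a non-protected feature that happens to correlate with a protected ground, can be illustrated with a small synthetic sketch. All names, numbers, and the browser/group correlation below are invented purely for illustration:

```python
import random

random.seed(0)

# Hypothetical synthetic population: 'browser' is a seemingly innocuous
# feature, but it is made to correlate with a protected group (an
# assumption for illustration only).
population = []
for _ in range(10_000):
    group = random.choice(["A", "B"])  # protected ground (e.g. ethnicity)
    # Browser preference correlates with group membership.
    browser = "X" if random.random() < (0.8 if group == "A" else 0.2) else "Y"
    population.append((group, browser))

# A decision rule that never looks at the protected ground,
# only at the browser: "approve only browser-X users".
def approve(browser):
    return browser == "X"

def approval_rate(group):
    members = [b for g, b in population if g == group]
    return sum(approve(b) for b in members) / len(members)

rate_a, rate_b = approval_rate("A"), approval_rate("B")
# Although the rule ignores the protected ground entirely, approval rates
# differ sharply between groups: differentiation on a non-protected
# feature can still produce a group-level disparity.
print(round(rate_a, 2), round(rate_b, 2))
```

Because browser type is not a protected characteristic, such a rule may fall outside direct-discrimination provisions even though its group-level effect resembles the disparities non-discrimination law targets.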
Related papers
- Strengthening legal protection against discrimination by algorithms and artificial intelligence [1.0406659081400351]
The paper evaluates current legal protection in Europe against discriminatory algorithmic decisions. The paper argues for sector-specific - rather than general - rules, and outlines an approach to regulate algorithmic decision-making.
arXiv Detail & Related papers (2025-10-03T09:54:03Z)
- Partial Identification Approach to Counterfactual Fairness Assessment [50.88100567472179]
We introduce a Bayesian approach to bound unknown counterfactual fairness measures with high confidence. Our results reveal a positive (spurious) effect on the COMPAS score when changing race to African-American (from all others) and a negative (direct causal) effect when transitioning from young to old age.
arXiv Detail & Related papers (2025-09-30T18:35:08Z)
- Price discrimination, algorithmic decision-making, and European non-discrimination law [0.2262632497140704]
Algorithmic decision-making can have discriminatory effects. Online price differentiation could lead to indirect discrimination. The paper shows, however, that non-discrimination law has flaws when applied to algorithmic decision-making.
arXiv Detail & Related papers (2025-09-28T12:41:48Z)
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
- Unlawful Proxy Discrimination: A Framework for Challenging Inherently Discriminatory Algorithms [4.1221687771754]
The EU legal concept of direct discrimination may apply to various algorithmic decision-making contexts.
Unlike indirect discrimination, there is generally no 'objective justification' stage in the direct discrimination framework.
We focus on the most likely candidate for direct discrimination in the algorithmic context.
arXiv Detail & Related papers (2024-04-22T10:06:17Z) - Fairness and Bias in Algorithmic Hiring: a Multidisciplinary Survey [43.463169774689646]
This survey caters to practitioners and researchers with a balanced and integrated coverage of systems, biases, measures, mitigation strategies, datasets, and legal aspects of algorithmic hiring and fairness.<n>Our work supports a contextualized understanding and governance of this technology by highlighting current opportunities and limitations, providing recommendations for future work to ensure shared benefits for all stakeholders.
arXiv Detail & Related papers (2023-09-25T08:04:18Z) - Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
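The benefit notion described above can be sketched with fully synthetic counterfactual outcomes (this is an illustration of the general idea, not the paper's method; all values are invented):

```python
# For an individual with known counterfactual outcomes, the "benefit" of
# a positive decision is y1 - y0: the outcome under decision d=1 minus
# the outcome under d=0.
individuals = [
    # (protected attribute, outcome if d=0, outcome if d=1) -- made-up numbers
    ("A", 0.2, 0.9),
    ("A", 0.5, 0.6),
    ("B", 0.1, 0.4),
    ("B", 0.3, 0.5),
]

def benefit(y0, y1):
    return y1 - y0

def mean_benefit(group):
    vals = [benefit(y0, y1) for g, y0, y1 in individuals if g == group]
    return sum(vals) / len(vals)

# If the mean benefit differs across values of the protected attribute,
# the benefit itself is influenced by that attribute -- the situation the
# causal tools mentioned above are designed to analyze.
gap = mean_benefit("A") - mean_benefit("B")
print(round(gap, 2))
```

In practice the counterfactual outcomes y0 and y1 are not both observed for any individual, which is exactly why causal identification machinery is needed.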
arXiv Detail & Related papers (2023-06-08T09:31:18Z) - Designing Equitable Algorithms [1.9006392177894293]
Predictive algorithms are now used to help distribute a large share of our society's resources and sanctions.
These algorithms can improve the efficiency and equity of decision-making.
But they could entrench and exacerbate disparities, particularly along racial, ethnic, and gender lines.
arXiv Detail & Related papers (2023-02-17T22:00:44Z) - Human-Centric Multimodal Machine Learning: Recent Advances and Testbed
on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z) - Multi-dimensional discrimination in Law and Machine Learning -- A
comparative overview [14.650860450187793]
Domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models.
In reality, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic.
Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain.
arXiv Detail & Related papers (2023-02-12T20:41:58Z) - Reusing the Task-specific Classifier as a Discriminator:
Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA)
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z) - Affirmative Algorithms: The Legal Grounds for Fairness as Awareness [0.0]
We discuss how such approaches will likely be deemed "algorithmic affirmative action"
We argue that the government-contracting cases offer an alternative grounding for algorithmic fairness.
We call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
arXiv Detail & Related papers (2020-12-18T22:53:20Z) - Towards a Flexible Framework for Algorithmic Fairness [0.8379286663107844]
In recent years, many different definitions for ensuring non-discrimination in algorithmic decision systems have been put forward.
We present an algorithm that harnesses optimal transport to provide a flexible framework to interpolate between different fairness definitions.
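A minimal 1-D sketch of such an interpolation, assuming equal-sized groups and using the quantile (Wasserstein) barycenter of the group score distributions as the fully repaired endpoint. This illustrates the general idea only, not the paper's algorithm; the data and names are invented:

```python
import statistics

# Invented group scores for two protected groups.
scores = {"A": [0.9, 0.7, 0.5, 0.3], "B": [0.6, 0.4, 0.2, 0.0]}

def barycenter(groups):
    # With equal group sizes, the 1-D Wasserstein barycenter of the
    # group score distributions is the element-wise mean of the sorted
    # (quantile) score lists.
    sorted_groups = [sorted(v) for v in groups.values()]
    return [statistics.mean(col) for col in zip(*sorted_groups)]

def repair(groups, t):
    # t = 0 keeps the original scores (no fairness constraint);
    # t = 1 maps every group onto the barycenter, equalising the score
    # distributions across groups; 0 < t < 1 interpolates between the two.
    target = barycenter(groups)
    out = {}
    for g, vals in groups.items():
        ranked = sorted(vals)
        out[g] = {v: (1 - t) * v + t * target[ranked.index(v)] for v in vals}
    return out

fully_repaired = repair(scores, 1.0)
# After full repair, both groups share the same score distribution.
print(sorted(fully_repaired["A"].values()) == sorted(fully_repaired["B"].values()))
```

Varying t trades off fidelity to the original scores against distributional parity between groups, which is the kind of flexibility the framework above is designed to provide.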
arXiv Detail & Related papers (2020-10-15T16:06:53Z) - Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.