Unlawful Proxy Discrimination: A Framework for Challenging Inherently Discriminatory Algorithms
- URL: http://arxiv.org/abs/2404.14050v1
- Date: Mon, 22 Apr 2024 10:06:17 GMT
- Title: Unlawful Proxy Discrimination: A Framework for Challenging Inherently Discriminatory Algorithms
- Authors: Hilde Weerts, Aislinn Kelly-Lyth, Reuben Binns, Jeremias Adams-Prassl
- Abstract summary: The EU legal concept of direct discrimination may apply to various algorithmic decision-making contexts.
Unlike indirect discrimination, there is generally no 'objective justification' stage in the direct discrimination framework.
We focus on the most likely candidate for direct discrimination in the algorithmic context: inherent direct discrimination, where a proxy is inextricably linked to a protected characteristic.
- Score: 4.1221687771754
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emerging scholarship suggests that the EU legal concept of direct discrimination - where a person is given different treatment on grounds of a protected characteristic - may apply to various algorithmic decision-making contexts. This has important implications: unlike indirect discrimination, there is generally no 'objective justification' stage in the direct discrimination framework, which means that the deployment of directly discriminatory algorithms will usually be unlawful per se. In this paper, we focus on the most likely candidate for direct discrimination in the algorithmic context, termed inherent direct discrimination, where a proxy is inextricably linked to a protected characteristic. We draw on computer science literature to suggest that, in the algorithmic context, 'treatment on the grounds of' needs to be understood in terms of two steps: proxy capacity and proxy use. Only where both elements can be made out can direct discrimination be said to be 'on grounds of' a protected characteristic. We analyse the legal conditions of our proposed proxy capacity and proxy use tests. Based on this analysis, we discuss technical approaches and metrics that could be developed or applied to identify inherent direct discrimination in algorithmic decision-making.
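The two-step structure described in the abstract lends itself to a simple operational reading. The sketch below illustrates how proxy capacity (can the feature stand in for the protected characteristic?) and proxy use (does the deployed model actually rely on that feature?) might be measured on tabular data; the metrics, thresholds, and function names are assumptions made for this example, not the authors' proposal.

```python
# Illustrative sketch only: one way the two-step test (proxy capacity,
# then proxy use) might be operationalised on tabular data. Metric
# choices and thresholds are assumptions, not the paper's proposal.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance
from sklearn.metrics import roc_auc_score


def proxy_capacity(proxy_values, protected, threshold=0.8):
    """Step 1: how well does the candidate proxy predict the protected characteristic?"""
    X = np.asarray(proxy_values).reshape(-1, 1)
    clf = LogisticRegression().fit(X, protected)
    auc = roc_auc_score(protected, clf.predict_proba(X)[:, 1])
    return auc, auc >= threshold


def proxy_use(model, X, y, proxy_column, threshold=0.01):
    """Step 2: does the deployed decision model actually rely on that proxy?"""
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    importance = result.importances_mean[proxy_column]
    return importance, importance >= threshold
```

Only where both checks come out positive would this combined test point towards treatment 'on grounds of' the protected characteristic; the choice of metric and threshold is itself part of the legal analysis.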
Related papers
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
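By way of illustration only, a third-party audit of ad delivery might compare delivery rates across groups with a standard proportions test; the figures below are invented and this is not the authors' methodology.

```python
# Hypothetical counts and a generic disparity test: a sketch of the kind
# of check a third-party audit might run on observed ad-delivery data.
from statsmodels.stats.proportion import proportions_ztest

# delivered education ads / eligible users, per (assumed) group
delivered = [620, 480]
eligible = [10_000, 10_000]

stat, p_value = proportions_ztest(count=delivered, nobs=eligible)
print(f"delivery rates: {delivered[0]/eligible[0]:.3f} vs {delivered[1]/eligible[1]:.3f}, p={p_value:.4f}")
```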
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
- DELTA: Pre-train a Discriminative Encoder for Legal Case Retrieval via Structural Word Alignment [55.91429725404988]
We introduce DELTA, a discriminative model designed for legal case retrieval.
We leverage shallow decoders to create information bottlenecks, aiming to enhance the representation ability.
Our approach can outperform existing state-of-the-art methods in legal case retrieval.
arXiv Detail & Related papers (2024-03-27T10:40:14Z)
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment (BACG).
BACG uses gradient signals and second-order probability estimation to better align domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- Compatibility of Fairness Metrics with EU Non-Discrimination Laws: Demographic Parity & Conditional Demographic Disparity [3.5607241839298878]
Empirical evidence suggests that algorithmic decisions driven by Machine Learning (ML) techniques threaten to discriminate against legally protected groups or create new sources of unfairness.
This work assesses the extent to which legal fairness can be assured through fairness metrics and fairness constraints.
Our experiments and analysis suggest that AI-assisted decision-making can be fair from a legal perspective depending on the case at hand and the legal justification.
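As a minimal sketch of the first metric named in the title, demographic parity can be read as the largest gap in positive-decision rates across groups; the column names and data below are made up for illustration.

```python
# Toy demographic parity calculation on invented data.
import pandas as pd

def demographic_parity_difference(df, decision="approved", group="protected"):
    """Largest gap in positive-decision rates across groups."""
    rates = df.groupby(group)[decision].mean()
    return rates.max() - rates.min()

df = pd.DataFrame({
    "approved":  [1, 0, 1, 1, 0, 1, 0, 0],
    "protected": [0, 0, 0, 0, 1, 1, 1, 1],
})
print(demographic_parity_difference(df))  # 0.5 in this toy example
```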
arXiv Detail & Related papers (2023-06-14T09:38:05Z)
- Multi-dimensional discrimination in Law and Machine Learning -- A comparative overview [14.650860450187793]
The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models.
In reality, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic.
Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain.
arXiv Detail & Related papers (2023-02-12T20:41:58Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
- Marrying Fairness and Explainability in Supervised Learning [0.0]
We formalize direct discrimination as a direct causal effect of the protected attributes on the decisions.
We find that state-of-the-art fair learning methods can induce discrimination via association or reverse discrimination.
We propose to nullify the influence of the protected attribute on the output of the system, while preserving the influence of remaining features.
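One common way to probe for a direct causal effect of the protected attribute, in the spirit of the formalisation above, is a counterfactual flip test: change only the protected attribute and check whether decisions change. The sketch below is a generic illustration on synthetic data, not the paper's estimator, and it assumes the model receives the protected attribute as an input feature.

```python
# Generic counterfactual-flip check for a *direct* effect of the
# protected attribute on decisions; synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
protected = rng.integers(0, 2, size=500)
# synthetic decisions that partly depend on the protected attribute
y = (X[:, 0] + 0.8 * protected + rng.normal(scale=0.5, size=500) > 0.5).astype(int)

features = np.column_stack([X, protected])
model = LogisticRegression().fit(features, y)

flipped = features.copy()
flipped[:, -1] = 1 - flipped[:, -1]  # intervene on the protected attribute only
changed = (model.predict(features) != model.predict(flipped)).mean()
print(f"decisions changed by flipping the protected attribute: {changed:.1%}")
```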
arXiv Detail & Related papers (2022-04-06T17:26:58Z)
- Affirmative Algorithms: The Legal Grounds for Fairness as Awareness [0.0]
We discuss how such fairness-aware approaches will likely be deemed "algorithmic affirmative action".
We argue that the government-contracting cases offer an alternative grounding for algorithmic fairness.
We call for more research at the intersection of algorithmic fairness and causal inference to ensure that bias mitigation is tailored to specific causes and mechanisms of bias.
arXiv Detail & Related papers (2020-12-18T22:53:20Z)
- Towards Robust Fine-grained Recognition by Maximal Separation of Discriminative Features [72.72840552588134]
We identify the proximity of the latent representations of different classes in fine-grained recognition networks as a key factor to the success of adversarial attacks.
We introduce an attention-based regularization mechanism that maximally separates the discriminative latent features of different classes.
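As a rough illustration of the separation idea (not the paper's attention-based mechanism), a regulariser can penalise small distances between per-class centroids of the latent features:

```python
# Sketch of a class-separation penalty on latent features: encourage the
# per-class feature centroids to stay far apart. Simplified stand-in for
# the paper's attention-based regulariser, written for illustration.
import numpy as np

def separation_penalty(latent, labels):
    """Negative of the minimum pairwise distance between class centroids."""
    classes = np.unique(labels)
    centroids = np.stack([latent[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    off_diag = dists[~np.eye(len(classes), dtype=bool)]
    return -off_diag.min()  # add to the task loss to push centroids apart
```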
arXiv Detail & Related papers (2020-06-10T18:34:45Z)
- Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI [10.281644134255576]
This article identifies a critical incompatibility between European notions of discrimination and existing statistical measures of fairness.
We show how the legal protection offered by non-discrimination law is challenged when AI, not humans, discriminate.
We propose "conditional demographic disparity" (CDD) as a standard baseline statistical measurement.
arXiv Detail & Related papers (2020-05-12T16:30:12Z)
- Discrimination of POVMs with rank-one effects [62.997667081978825]
This work provides insight into the problem of discriminating positive operator-valued measures (POVMs) with rank-one effects.
We compare two possible discrimination schemes: the parallel and adaptive ones.
We provide an explicit algorithm which allows us to find this adaptive scheme.
arXiv Detail & Related papers (2020-02-13T11:34:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.