Multi-dimensional discrimination in Law and Machine Learning -- A
comparative overview
- URL: http://arxiv.org/abs/2302.05995v1
- Date: Sun, 12 Feb 2023 20:41:58 GMT
- Title: Multi-dimensional discrimination in Law and Machine Learning -- A
comparative overview
- Authors: Arjun Roy, Jan Horstmann, Eirini Ntoutsi
- Abstract summary: The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models.
In reality, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic.
Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain.
- Score: 14.650860450187793
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI-driven decision-making can lead to discrimination against certain
individuals or social groups based on protected characteristics/attributes such
as race, gender, or age. The domain of fairness-aware machine learning focuses
on methods and algorithms for understanding, mitigating, and accounting for
bias in AI/ML models. Still, thus far, the vast majority of the proposed
methods assess fairness based on a single protected attribute, e.g. only gender
or race. In reality, though, human identities are multi-dimensional, and
discrimination can occur based on more than one protected characteristic,
leading to the so-called "multi-dimensional discrimination" or
"multi-dimensional fairness" problem. While well-elaborated in legal
literature, the multi-dimensionality of discrimination is less explored in the
machine learning community. Recent approaches in this direction mainly follow
the so-called intersectional fairness definition from the legal domain, whereas
other notions like additive and sequential discrimination are less studied or
not considered thus far. In this work, we overview the different definitions of
multi-dimensional discrimination/fairness in the legal domain as well as how
they have been transferred/operationalized (if at all) in the fairness-aware machine
learning domain. By juxtaposing these two domains, we draw the connections,
identify the limitations, and point out open research directions.
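To make the abstract's contrast concrete, the following is a minimal sketch (with made-up decisions and group labels, not data or code from the paper) of how a demographic-parity gap can vanish on each protected attribute in isolation yet be maximal on their intersection:

```python
# Illustrative toy example: single-attribute vs. intersectional fairness checks.
# The data, group labels, and parity metric are assumptions for this sketch.

def parity_gap(outcomes, groups):
    """Max difference in positive-outcome rate across the given groups."""
    rates = []
    for g in set(groups):
        members = [o for o, gr in zip(outcomes, groups) if gr == g]
        rates.append(sum(members) / len(members))
    return max(rates) - min(rates)

# Toy binary decisions with two protected attributes per individual.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
gender   = ["F", "F", "F", "F", "M", "M", "M", "M"]
race     = ["a", "b", "a", "b", "a", "b", "a", "b"]

# Single-attribute view: each protected attribute in isolation.
gap_gender = parity_gap(outcomes, gender)
gap_race   = parity_gap(outcomes, race)

# Intersectional view: every (gender, race) subgroup jointly.
intersection = list(zip(gender, race))
gap_intersectional = parity_gap(outcomes, intersection)
```

Here the race-only audit reports perfect parity (gap 0.0) while the intersectional audit exposes a subgroup that never receives a positive outcome, which is exactly the kind of case a single-attribute assessment misses.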
Related papers
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
- Unsupervised Domain Adaptation on Person Re-Identification via Dual-level Asymmetric Mutual Learning [108.86940401125649]
This paper proposes a Dual-level Asymmetric Mutual Learning method (DAML) to learn discriminative representations from a broader knowledge scope with diverse embedding spaces.
The knowledge transfer between two networks is based on an asymmetric mutual learning manner.
Experiments on the Market-1501, CUHK-SYSU, and MSMT17 public datasets verify the superiority of DAML over state-of-the-art methods.
arXiv Detail & Related papers (2023-01-29T12:36:17Z)
- Developing a Philosophical Framework for Fair Machine Learning: Lessons From The Case of Algorithmic Collusion [0.0]
As machine learning algorithms are applied in new contexts, the harms and injustices that result are qualitatively different.
The existing research paradigm in machine learning, which develops metrics and definitions of fairness, cannot account for these qualitatively different types of injustice.
I propose an ethical framework for researchers and practitioners in machine learning seeking to develop and apply fairness metrics.
arXiv Detail & Related papers (2022-07-05T16:21:56Z)
- Reusing the Task-specific Classifier as a Discriminator: Discriminator-free Adversarial Domain Adaptation [55.27563366506407]
We introduce a discriminator-free adversarial learning network (DALN) for unsupervised domain adaptation (UDA).
DALN achieves explicit domain alignment and category distinguishment through a unified objective.
DALN compares favorably against the existing state-of-the-art (SOTA) methods on a variety of public datasets.
arXiv Detail & Related papers (2022-04-08T04:40:18Z)
- The zoo of Fairness metrics in Machine Learning [62.997667081978825]
In recent years, the problem of addressing fairness in Machine Learning (ML) and automatic decision-making has attracted a lot of attention.
A plethora of different definitions of fairness in ML have been proposed that consider different notions of what a "fair decision" is in situations impacting individuals in the population.
In this work, we try to make some order out of this zoo of definitions.
arXiv Detail & Related papers (2021-06-01T13:19:30Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Distributive Justice and Fairness Metrics in Automated Decision-making: How Much Overlap Is There? [0.0]
We show that metrics implementing equality of opportunity only apply when resource allocations are based on deservingness, but fail when allocations should reflect concerns about egalitarianism, sufficiency, and priority.
We argue that by cleanly distinguishing between prediction tasks and decision tasks, research on fair machine learning could take better advantage of the rich literature on distributive justice.
arXiv Detail & Related papers (2021-05-04T12:09:26Z)
- Characterizing Intersectional Group Fairness with Worst-Case Comparisons [0.0]
We discuss why fairness metrics need to be looked at under the lens of intersectionality.
We suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
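A hedged sketch of the kind of worst-case comparison this summary describes, assuming a positive-rate metric and toy intersectional subgroups of my own construction (not the paper's actual method):

```python
# Illustrative worst-case intersectional comparison: report the ratio of the
# worst-off subgroup's positive rate to the best-off subgroup's rate.
# Data and metric choice are assumptions for this sketch.

def subgroup_rates(outcomes, subgroups):
    """Positive-outcome rate per subgroup."""
    rates = {}
    for g in set(subgroups):
        members = [o for o, s in zip(outcomes, subgroups) if s == g]
        rates[g] = sum(members) / len(members)
    return rates

def worst_case_ratio(outcomes, subgroups):
    """Ratio of worst-off to best-off subgroup rate (1.0 = perfect parity)."""
    rates = subgroup_rates(outcomes, subgroups)
    best = max(rates.values())
    return min(rates.values()) / best if best > 0 else 1.0

outcomes  = [1, 1, 0, 1, 1, 0, 1, 0]
subgroups = list(zip(["F", "F", "F", "F", "M", "M", "M", "M"],
                     ["a", "a", "b", "b", "a", "a", "b", "b"]))
ratio = worst_case_ratio(outcomes, subgroups)
```

Reporting the single worst-off subgroup, rather than averaging over groups, is what "expands" an existing group fairness metric to the intersectional setting in a worst-case sense.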
arXiv Detail & Related papers (2021-01-05T17:44:33Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
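The Lagrangian-duality idea can be sketched as a toy training loop: a primal gradient step on the prediction loss plus a multiplier-weighted fairness term, alternated with dual ascent on the multiplier. The model, data, constraint, and learning rates below are assumptions for illustration; the paper's actual method is a neural-network approach that also incorporates differential privacy, which is omitted here.

```python
# Toy Lagrangian-dual loop for a demographic-parity constraint on a
# one-feature logistic model. All specifics are illustrative assumptions.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: (feature x, label y, protected attribute a).
data = [(2.0, 1, 0), (1.5, 1, 0), (-1.0, 0, 1),
        (-0.5, 0, 1), (1.0, 1, 1), (-1.5, 0, 0)]

def parity_gap(w, b):
    """Absolute difference in mean predicted score between the two groups."""
    rates = {}
    for grp in (0, 1):
        preds = [sigmoid(w * x + b) for x, _, a in data if a == grp]
        rates[grp] = sum(preds) / len(preds)
    return abs(rates[0] - rates[1])

w, b = 0.0, 0.0          # primal model parameters
lam = 0.0                # dual variable (Lagrange multiplier), kept >= 0
eps = 0.05               # tolerated parity gap in the constraint
lr_primal, lr_dual, h = 0.1, 0.05, 1e-4

for _ in range(200):
    # Primal step: cross-entropy gradient plus lam-weighted constraint
    # gradient (constraint gradient approximated by finite differences).
    gw = gb = 0.0
    for x, y, _ in data:
        err = sigmoid(w * x + b) - y
        gw += err * x / len(data)
        gb += err / len(data)
    cg_w = (parity_gap(w + h, b) - parity_gap(w - h, b)) / (2 * h)
    cg_b = (parity_gap(w, b + h) - parity_gap(w, b - h)) / (2 * h)
    w -= lr_primal * (gw + lam * cg_w)
    b -= lr_primal * (gb + lam * cg_b)
    # Dual ascent: raise lam while the fairness constraint is violated,
    # projecting back onto lam >= 0.
    lam = max(0.0, lam + lr_dual * (parity_gap(w, b) - eps))
```

The appeal of the dual formulation is that the penalty weight is not hand-tuned: the multiplier grows automatically while the constraint is violated and relaxes once it is satisfied.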
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- Bias and Discrimination in AI: a cross-disciplinary perspective [5.190307793476366]
We show that finding solutions to bias and discrimination in AI requires robust cross-disciplinary collaborations.
We survey relevant literature about bias and discrimination in AI from an interdisciplinary perspective that embeds technical, legal, social and ethical dimensions.
arXiv Detail & Related papers (2020-08-11T10:02:04Z) - A Normative approach to Attest Digital Discrimination [6.372554934045607]
Examples include low-income neighbourhoods being targeted with high-interest loans or assigned low credit scores, and women being undervalued by 21% in online marketing.
We use norms as an abstraction to represent different situations that may lead to digital discrimination.
In particular, we formalise non-discrimination norms in the context of ML systems and propose an algorithm to check whether ML systems violate these norms.
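As an illustration only, a parity-style non-discrimination norm and a check for its violation might look like the following; the norm encoding, threshold, and loan-approval scenario are assumptions for this sketch, not the paper's formalism:

```python
# Hypothetical norm check: positive-decision rates across protected groups
# may differ by at most `threshold`. Encoding and data are illustrative.

def violates_parity_norm(decisions, protected, threshold=0.1):
    """Return True if the system's decisions violate the parity norm."""
    rates = []
    for g in set(protected):
        outs = [d for d, p in zip(decisions, protected) if p == g]
        rates.append(sum(outs) / len(outs))
    return (max(rates) - min(rates)) > threshold

# Outputs of a hypothetical loan-approval system.
decisions = [1, 1, 1, 0, 0, 0, 0, 0]
protected = ["low_income", "low_income", "high_income", "high_income",
             "low_income", "low_income", "high_income", "high_income"]
violated = violates_parity_norm(decisions, protected)
```

Framing the check as a norm rather than a metric makes the output a yes/no attestation of compliance, which is what an auditing or certification process needs.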
arXiv Detail & Related papers (2020-07-14T15:14:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.