Everything is Relative: Understanding Fairness with Optimal Transport
- URL: http://arxiv.org/abs/2102.10349v1
- Date: Sat, 20 Feb 2021 13:57:53 GMT
- Title: Everything is Relative: Understanding Fairness with Optimal Transport
- Authors: Kweku Kwegyir-Aggrey, Rebecca Santorella, Sarah M. Brown
- Abstract summary: We present an optimal transport-based approach to fairness that offers an interpretable and quantifiable exploration of bias and its structure.
Our framework is able to recover well-known examples of algorithmic discrimination, detect unfairness when other metrics fail, and explore recourse opportunities.
- Score: 1.160208922584163
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To study discrimination in automated decision-making systems, scholars have
proposed several definitions of fairness, each expressing a different fair
ideal. These definitions require practitioners to make complex decisions
regarding which notion to employ and are often difficult to use in practice
since they make a binary judgement that a system is fair or unfair instead of
explaining the structure of the detected unfairness. We present an optimal
transport-based approach to fairness that offers an interpretable and
quantifiable exploration of bias and its structure by comparing a pair of
outcomes to one another. In this work, we use the optimal transport map to
examine individual, subgroup, and group fairness. Our framework is able to
recover well-known examples of algorithmic discrimination, detect unfairness
when other metrics fail, and explore recourse opportunities.
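The abstract does not come with code, but the core idea can be sketched for one-dimensional outcomes: between two equal-sized samples of model scores, the optimal transport map is the monotone rearrangement that pairs sorted order statistics, and the resulting displacements expose both the size of the gap between groups and where in the score range it concentrates. The sketch below illustrates that standard construction under those assumptions; it is not the authors' implementation, and the names `ot_map_1d`, `scores_a`, and `scores_b` are invented for the example.

```python
import numpy as np


def ot_map_1d(scores_a: np.ndarray, scores_b: np.ndarray):
    """Empirical 1-D optimal transport map between two equal-sized samples.

    In one dimension the optimal coupling for a convex ground cost is the
    monotone rearrangement: the i-th smallest score in group A is matched to
    the i-th smallest score in group B.  Illustrative sketch only.
    """
    if len(scores_a) != len(scores_b):
        raise ValueError("this sketch assumes equal-sized samples")
    a_sorted = np.sort(scores_a)
    b_sorted = np.sort(scores_b)
    # The displacement says how far each A-score must move for the two
    # outcome distributions to coincide.
    return a_sorted, b_sorted - a_sorted


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical model scores for two demographic groups.
    scores_a = rng.beta(2, 5, size=1000)  # group A skewed toward low scores
    scores_b = rng.beta(5, 2, size=1000)  # group B skewed toward high scores
    a_sorted, disp = ot_map_1d(scores_a, scores_b)

    # Group-level view: average transport effort = empirical Wasserstein-1 gap.
    print("W1 gap:", np.abs(disp).mean())
    # Subgroup view: where in group A's score range the gap concentrates.
    half = len(disp) // 2
    print("mean shift, lower half of group A:", disp[:half].mean())
    print("mean shift, upper half of group A:", disp[half:].mean())
```

Averaging the absolute displacements gives a group-level gap (the empirical Wasserstein-1 distance), while slicing the displacements by score range or subgroup membership gives the finer-grained views the abstract refers to.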
Related papers
- Fairness Explainability using Optimal Transport with Applications in Image Classification [0.46040036610482665]
We propose a comprehensive approach to uncover the causes of discrimination in Machine Learning applications.
We leverage Wasserstein barycenters to achieve fair predictions and introduce an extension to pinpoint bias-associated regions.
This allows us to derive a cohesive system which uses the enforced fairness to measure each feature's influence on the bias. (A minimal sketch of the one-dimensional barycenter repair appears after this list.)
arXiv Detail & Related papers (2023-08-22T00:10:23Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- A Systematic Approach to Group Fairness in Automated Decision Making [0.0]
The goal of this paper is to provide data scientists with an accessible introduction to group fairness metrics.
We do this by considering in which sense socio-demographic groups are compared when making a statement about fairness.
arXiv Detail & Related papers (2021-09-09T12:47:15Z)
- Fairness Through Counterfactual Utilities [0.0]
Group fairness definitions such as Demographic Parity and Equal Opportunity make assumptions about the underlying decision-problem that restrict them to classification problems.
We provide a generalized set of group fairness definitions that unambiguously extend to all machine learning environments.
arXiv Detail & Related papers (2021-08-11T16:51:27Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Metric-Free Individual Fairness with Cooperative Contextual Bandits [17.985752744098267]
Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group.
Individual fairness remains understudied due to its reliance on problem-specific similarity metrics.
We propose a metric-free individual fairness and a cooperative contextual bandits algorithm.
arXiv Detail & Related papers (2020-11-13T03:10:35Z)
- Fairness in Semi-supervised Learning: Unlabeled Data Help to Reduce Discrimination [53.3082498402884]
A growing specter in the rise of machine learning is whether the decisions made by machine learning models are fair.
We present a framework of fair semi-supervised learning in the pre-processing phase, including pseudo labeling to predict labels for unlabeled data.
A theoretical decomposition analysis of bias, variance and noise highlights the different sources of discrimination and the impact they have on fairness in semi-supervised learning.
arXiv Detail & Related papers (2020-09-25T05:48:56Z)
- Machine learning fairness notions: Bridging the gap with real-world applications [4.157415305926584]
Fairness emerged as an important requirement to guarantee that Machine Learning predictive systems do not discriminate against specific individuals or entire sub-populations.
This paper is a survey that illustrates the subtleties between fairness notions through a large number of examples and scenarios.
arXiv Detail & Related papers (2020-06-30T13:01:06Z)
- Two Simple Ways to Learn Individual Fairness Metrics from Data [47.6390279192406]
Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
arXiv Detail & Related papers (2020-06-19T23:47:15Z)
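The Wasserstein-barycenter repair mentioned in the "Fairness Explainability using Optimal Transport" entry above also has a simple one-dimensional form: the Wasserstein-2 barycenter of the groups' score distributions is the distribution whose quantile function is the weighted average of the groups' quantile functions, and sending each score to the barycenter value at its own within-group quantile equalises the distributions. The sketch below illustrates that standard construction; it is not that paper's code, and `barycenter_repair_1d`, the quantile grid, and the group weights are assumptions made for the example.

```python
import numpy as np


def barycenter_repair_1d(scores_by_group: dict) -> dict:
    """Map each group's scores onto the 1-D Wasserstein-2 barycenter.

    In one dimension the W2 barycenter has a closed form: its quantile
    function is the weighted average of the groups' quantile functions.
    Sending every score to the barycenter value at its own within-group
    quantile equalises the score distributions across groups.
    Generic sketch of a standard construction, not the paper's code.
    """
    groups = list(scores_by_group)
    n_total = sum(len(s) for s in scores_by_group.values())
    weights = {g: len(scores_by_group[g]) / n_total for g in groups}

    # Evaluate every group's quantile function on a common grid.
    qs = np.linspace(0.0, 1.0, 201)
    barycenter_q = sum(weights[g] * np.quantile(scores_by_group[g], qs)
                       for g in groups)

    repaired = {}
    for g, s in scores_by_group.items():
        # Within-group quantile of each score, then look up the barycenter.
        ranks = np.searchsorted(np.sort(s), s, side="right") / len(s)
        repaired[g] = np.interp(ranks, qs, barycenter_q)
    return repaired


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scores = {"A": rng.beta(2, 5, 500), "B": rng.beta(5, 2, 500)}
    repaired = barycenter_repair_1d(scores)
    # After repair, the two groups' mean scores should be nearly identical.
    print({g: round(float(v.mean()), 3) for g, v in repaired.items()})
```

Because the mapping is a monotone quantile transform, it equalises the groups' score distributions while preserving each individual's rank within their own group.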