Minimax Group Fairness: Algorithms and Experiments
- URL: http://arxiv.org/abs/2011.03108v2
- Date: Mon, 8 Mar 2021 01:19:11 GMT
- Title: Minimax Group Fairness: Algorithms and Experiments
- Authors: Emily Diana, Wesley Gill, Michael Kearns, Krishnaram Kenthapadi, and
Aaron Roth
- Abstract summary: We provide provably convergent oracle-efficient learning algorithms for minimax group fairness.
Our algorithms apply to both regression and classification settings.
We show empirical cases in which minimax fairness is strictly and strongly preferable to equal outcome notions.
- Score: 18.561824632836405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a recently introduced framework in which fairness is measured by
worst-case outcomes across groups, rather than by the more standard differences
between group outcomes. In this framework we provide provably convergent
oracle-efficient learning algorithms (or equivalently, reductions to non-fair
learning) for minimax group fairness. Here the goal is that of minimizing the
maximum loss across all groups, rather than equalizing group losses. Our
algorithms apply to both regression and classification settings and support
both overall error and false positive or false negative rates as the fairness
measure of interest. They also support relaxations of the fairness constraints,
thus permitting study of the tradeoff between overall accuracy and minimax
fairness. We compare the experimental behavior and performance of our
algorithms across a variety of fairness-sensitive data sets and show empirical
cases in which minimax fairness is strictly and strongly preferable to equal
outcome notions.
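In symbols (notation mine, not the authors'), the objective is $\min_h \max_{g} \epsilon_g(h)$, where $\epsilon_g(h)$ is the loss of model $h$ on group $g$, in contrast to equal-outcome notions that ask for $\epsilon_g(h) \approx \epsilon_{g'}(h)$ across groups. The sketch below shows one common way such an oracle-efficient reduction can be instantiated for binary classification: an auditor keeps weights over groups and shifts them multiplicatively toward the groups with the highest error, while an off-the-shelf learner is refit each round on the reweighted examples; the final predictor is a majority vote over the per-round models. Everything here (function names, the use of scikit-learn's LogisticRegression as the oracle, the step size and number of rounds) is an illustrative assumption, not the authors' released implementation.

```python
# Illustrative sketch of an oracle-efficient reduction for minimax group fairness.
# An auditor reweights groups via multiplicative (exponentiated-gradient) updates on
# their current mixture errors; a standard "non-fair" learner is refit on the
# reweighted data each round. Not the authors' reference implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression  # stand-in for the learning oracle

def mixture_group_errors(models, X, y, groups, group_ids):
    """0/1 error of the majority vote over `models`, computed per group."""
    votes = np.mean([m.predict(X) for m in models], axis=0) >= 0.5
    return np.array([np.mean(votes[groups == g] != y[groups == g]) for g in group_ids])

def minimax_fair_fit(X, y, groups, rounds=200, eta=1.0):
    """y is assumed binary 0/1; `groups` holds one group id per example."""
    group_ids = np.unique(groups)
    group_index = {g: i for i, g in enumerate(group_ids)}
    lam = np.ones(len(group_ids)) / len(group_ids)   # auditor's weights over groups
    models = []
    for _ in range(rounds):
        # Learner best-responds to the auditor: weight each example by its group's weight.
        sample_weight = np.array([lam[group_index[g]] for g in groups])
        models.append(LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_weight))
        # Auditor update: push weight toward the currently worst-off groups.
        errs = mixture_group_errors(models, X, y, groups, group_ids)
        lam = lam * np.exp(eta * errs)
        lam = lam / lam.sum()
    return models  # predict with a majority vote over this list
```

The relaxations mentioned in the abstract can be explored in the same loop by, for example, updating the auditor only on the amount by which a group's error exceeds a target level, which traces out the tradeoff between overall accuracy and minimax fairness; the exact relaxation used in the paper may differ from this sketch.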
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Minimax Optimal Fair Classification with Bounded Demographic Disparity [28.936244976415484]
This paper explores the statistical foundations of fair binary classification with two protected groups.
We show that using a finite sample incurs additional costs due to the need to estimate group-specific acceptance thresholds.
We propose FairBayes-DDP+, a group-wise thresholding method with an offset that we show attains the minimax lower bound.
arXiv Detail & Related papers (2024-03-27T02:59:04Z)
- How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance [64.1656365676171]
Group imbalance is a known problem in empirical risk minimization.
This paper quantifies the impact of individual groups on the sample complexity, the convergence rate, and the average and group-level testing performance.
arXiv Detail & Related papers (2024-03-12T04:38:05Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose xOrder, a model-agnostic post-processing framework for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Fair Without Leveling Down: A New Intersectional Fairness Definition [1.0958014189747356]
We propose a new definition, $\alpha$-Intersectional Fairness, which combines absolute and relative performance across sensitive groups.
We benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline.
arXiv Detail & Related papers (2023-05-21T16:15:12Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that maps individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore how algorithmic fairness methods such as gradient reversal (GRAD) interact with BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Lexicographically Fair Learning: Algorithms and Generalization [13.023987750303856]
Lexifairness asks that, amongst all minimax-fair solutions, the error of the group with the second-highest error be minimized.
We derive oracle-efficient algorithms for finding approximately lexifair solutions in a very general setting.
arXiv Detail & Related papers (2021-02-16T21:15:42Z)
- Metric-Free Individual Fairness with Cooperative Contextual Bandits [17.985752744098267]
Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group.
Individual fairness remains understudied due to its reliance on problem-specific similarity metrics.
We propose a metric-free formulation of individual fairness and a cooperative contextual bandits algorithm.
arXiv Detail & Related papers (2020-11-13T03:10:35Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
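As a small aside on the last entry above: one simple ingredient in handling noisy protected group labels is to avoid trusting the hard labels when estimating per-group errors, e.g. by weighting each example by its probability of belonging to each group. The toy snippet below illustrates only that soft-weighting idea; the membership probabilities are made up, and it does not implement the robust-optimization training procedures or guarantees that the entry refers to.

```python
# Toy illustration: per-group error estimates when group labels are noisy.
# "Naive" trusts the possibly-wrong hard labels; "soft" weights each example by a
# (hypothetical) probability of true group membership. Illustrative only.
import numpy as np

def naive_group_errors(err, hard_group):
    """err: 0/1 per-example mistakes; hard_group: noisy hard group label per example."""
    return {int(g): float(err[hard_group == g].mean()) for g in np.unique(hard_group)}

def soft_group_errors(err, membership):
    """membership: (n_examples, n_groups) probabilities of true group membership."""
    weights = membership / membership.sum(axis=0)   # normalize within each group (column)
    return weights.T @ err                          # probability-weighted error per group

err = np.array([1, 0, 1, 0, 0])                     # which examples the model got wrong
hard_group = np.array([0, 0, 1, 1, 1])              # noisy hard group labels
membership = np.array([[0.9, 0.1], [0.6, 0.4],      # hypothetical P(true group | example)
                       [0.2, 0.8], [0.7, 0.3],
                       [0.5, 0.5]])
print(naive_group_errors(err, hard_group))          # {0: 0.5, 1: 0.333...}
print(soft_group_errors(err, membership))           # ~[0.379, 0.429]: a different picture
```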
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.