Lexicographically Fair Learning: Algorithms and Generalization
- URL: http://arxiv.org/abs/2102.08454v1
- Date: Tue, 16 Feb 2021 21:15:42 GMT
- Title: Lexicographically Fair Learning: Algorithms and Generalization
- Authors: Emily Diana, Wesley Gill, Ira Globus-Harris, Michael Kearns, Aaron
Roth and Saeed Sharifi-Malvajerdi
- Abstract summary: Lexifairness asks that amongst all minimax fair solutions, the error of the group with the second highest error should be minimized.
We derive oracle-efficient algorithms for finding approximately lexifair solutions in a very general setting.
- Score: 13.023987750303856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We extend the notion of minimax fairness in supervised learning problems to
its natural conclusion: lexicographic minimax fairness (or lexifairness for
short). Informally, given a collection of demographic groups of interest,
minimax fairness asks that the error of the group with the highest error be
minimized. Lexifairness goes further and asks that amongst all minimax fair
solutions, the error of the group with the second highest error should be
minimized, and amongst all of those solutions, the error of the group with the
third highest error should be minimized, and so on. Despite its naturalness,
correctly defining lexifairness is considerably more subtle than minimax
fairness, because of inherent sensitivity to approximation error. We give a
notion of approximate lexifairness that avoids this issue, and then derive
oracle-efficient algorithms for finding approximately lexifair solutions in a
very general setting. When the underlying empirical risk minimization problem
absent fairness constraints is convex (as it is, for example, with linear and
logistic regression), our algorithms are provably efficient even in the worst
case. Finally, we show generalization bounds -- approximate lexifairness on the
training sample implies approximate lexifairness on the true distribution with
high probability. Our ability to prove generalization bounds depends on our
choosing definitions that avoid the instability of naive definitions.
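To make the lexicographic ordering concrete, here is a minimal, brute-force sketch over a small finite set of candidate models. It is not the paper's oracle-efficient algorithm; the names `group_errors`, `approx_lexifair_select`, and the slack `eps` are illustrative assumptions. Note also that this kind of fixed-slack, layer-by-layer filtering is close to the naive relaxation whose sensitivity the abstract warns about; the paper's approximate definition is chosen precisely to avoid that instability.

```python
import numpy as np

def group_errors(model, X, y, groups):
    """Per-group error rates of a fitted classifier (hypothetical helper).

    X, y, groups are NumPy arrays; `model` is any object with .predict().
    """
    preds = model.predict(X)
    return np.array([np.mean(preds[groups == g] != y[groups == g])
                     for g in np.unique(groups)])

def approx_lexifair_select(models, X, y, groups, eps=0.01):
    """Brute-force layered selection over a *finite* candidate set.

    At level k, keep only the candidates whose k-th highest group error
    is within eps of the best achievable k-th highest error, then move on
    to level k + 1.
    """
    # Sorted (descending) group-error vectors, one per candidate model.
    errs = [np.sort(group_errors(m, X, y, groups))[::-1] for m in models]
    candidates = list(range(len(models)))
    for k in range(len(errs[0])):
        best_k = min(errs[i][k] for i in candidates)   # minimax value at level k
        candidates = [i for i in candidates if errs[i][k] <= best_k + eps]
    return models[candidates[0]]
```

With `eps = 0`, the layered filtering reduces to exact lexicographic comparison of the candidates' sorted group-error vectors.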
Related papers
- A General Framework for Constraint-based Causal Learning [3.031375888004876]
This work provides a general framework for obtaining correctness conditions for causal learning.
We show that the sparsest Markov representation condition is the weakest correctness condition resulting from existing notions of minimality for maximal ancestral graphs and directed acyclic graphs.
arXiv Detail & Related papers (2024-08-14T14:16:02Z)
- Minimax Optimal Fair Classification with Bounded Demographic Disparity [28.936244976415484]
This paper explores the statistical foundations of fair binary classification with two protected groups.
We show that using a finite sample incurs additional costs due to the need to estimate group-specific acceptance thresholds.
We propose FairBayes-DDP+, a group-wise thresholding method with an offset that we show attains the minimax lower bound.
arXiv Detail & Related papers (2024-03-27T02:59:04Z)
- Optimal Multi-Distribution Learning [88.3008613028333]
Multi-distribution learning seeks to learn a shared model that minimizes the worst-case risk across $k$ distinct data distributions.
We propose a novel algorithm that yields an $\varepsilon$-optimal randomized hypothesis with sample complexity on the order of $(d+k)/\varepsilon^2$.
arXiv Detail & Related papers (2023-12-08T16:06:29Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- Loss Balancing for Fair Supervised Learning [20.13250413610897]
Supervised learning models have been used in various domains such as lending, college admission, face recognition, natural language processing, etc.
Various fairness notions have been proposed to address unfairness in the learned predictor; this paper focuses on the Equalized Loss (EL) notion.
arXiv Detail & Related papers (2023-11-07T04:36:13Z)
- Correcting Underrepresentation and Intersectional Bias for Classification [49.1574468325115]
We consider the problem of learning from data corrupted by underrepresentation bias.
We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates.
We show that our algorithm permits efficient learning for model classes of finite VC dimension.
arXiv Detail & Related papers (2023-06-19T18:25:44Z)
- On the Variance, Admissibility, and Stability of Empirical Risk Minimization [80.26309576810844]
Empirical Risk Minimization (ERM) with squared loss may attain minimax suboptimal error rates.
We show that under mild assumptions, the suboptimality of ERM must be due to large bias rather than variance.
We also show that our estimates imply stability of ERM, complementing the main result of Caponnetto and Rakhlin (2006) for non-Donsker classes.
arXiv Detail & Related papers (2023-05-29T15:25:48Z)
- Maxmin-Fair Ranking: Individual Fairness under Group-Fairness Constraints [11.3077234652777]
We study a novel problem of fairness in ranking aimed at minimizing the amount of individual unfairness introduced when enforcing group-fairness constraints.
Our proposal is rooted in distributional maxmin fairness theory, which uses randomization to maximize the expected satisfaction of the worst-off individuals.
arXiv Detail & Related papers (2021-06-16T09:27:12Z)
- Minimax Group Fairness: Algorithms and Experiments [18.561824632836405]
We provide provably convergent oracle-efficient learning algorithms for minimax group fairness (a schematic sketch of this style of algorithm appears after this list).
Our algorithms apply to both regression and classification settings.
We show empirical cases in which minimax fairness is strictly and strongly preferable to equal outcome notions.
arXiv Detail & Related papers (2020-11-05T21:42:56Z)
- Learning Minimax Estimators via Online Learning [55.92459567732491]
We consider the problem of designing minimax estimators for estimating parameters of a probability distribution.
We construct an algorithm for finding a mixed-strategy Nash equilibrium.
arXiv Detail & Related papers (2020-06-19T22:49:42Z)
- Bias no more: high-probability data-dependent regret bounds for adversarial bandits and MDPs [48.44657553192801]
We develop a new approach to obtaining high probability regret bounds for online learning with bandit feedback against an adaptive adversary.
Our approach relies on a simple increasing learning rate schedule, together with the help of logarithmically homogeneous self-concordant barriers and a strengthened Freedman's inequality.
arXiv Detail & Related papers (2020-06-14T22:09:27Z)
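Referring back to the Minimax Group Fairness entry above (the building block that lexifairness refines level by level): a common schematic for computing approximately minimax-fair models is a two-player no-regret dynamic in which a regulator reweights groups by exponentiated gradient and a learner best-responds with weighted risk minimization. The sketch below illustrates that idea only; it is not the exact procedure from either paper, and it assumes NumPy arrays plus a scikit-learn estimator (here LogisticRegression) standing in for the learner's cost-sensitive oracle. The name `minimax_fair_ensemble` is hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def minimax_fair_ensemble(X, y, groups, rounds=50, eta=1.0):
    """Schematic no-regret dynamic for (approximate) minimax group fairness.

    A regulator performs exponentiated-gradient updates on group weights;
    the learner best-responds with weighted ERM.  Returns the per-round
    models, to be used as a uniform mixture (a randomized classifier).
    """
    group_ids = np.unique(groups)
    lam = np.ones(len(group_ids)) / len(group_ids)   # distribution over groups
    models = []
    for _ in range(rounds):
        # Learner: weighted empirical risk minimization under current weights.
        w_of = dict(zip(group_ids, lam))
        sample_w = np.array([w_of[g] for g in groups])
        model = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_w)
        models.append(model)
        # Regulator: shift weight toward the groups with the highest error.
        errs = np.array([np.mean(model.predict(X[groups == g]) != y[groups == g])
                         for g in group_ids])
        lam = lam * np.exp(eta * errs)
        lam = lam / lam.sum()
    return models
```

At prediction time the returned models would be used as a uniform mixture, i.e. a randomized classifier, which matches the randomized-hypothesis form of solution discussed in these papers.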