Leveling Down in Computer Vision: Pareto Inefficiencies in Fair Deep
Classifiers
- URL: http://arxiv.org/abs/2203.04913v1
- Date: Wed, 9 Mar 2022 17:48:33 GMT
- Title: Leveling Down in Computer Vision: Pareto Inefficiencies in Fair Deep
Classifiers
- Authors: Dominik Zietlow, Michael Lohaus, Guha Balakrishnan, Matthäus
Kleindessner, Francesco Locatello, Bernhard Schölkopf, Chris Russell
- Abstract summary: We find that applying existing fairness approaches to computer vision improves fairness by degrading the performance of classifiers across all groups.
We propose an adaptive augmentation strategy that, uniquely among the methods tested, improves performance for the disadvantaged groups.
- Score: 38.33762469290228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithmic fairness is frequently motivated in terms of a trade-off in which
overall performance is decreased so as to improve performance on disadvantaged
groups where the algorithm would otherwise be less accurate. Contrary to this,
we find that applying existing fairness approaches to computer vision improves
fairness by degrading the performance of classifiers across all groups (with
increased degradation on the best performing groups).
Extending the bias-variance decomposition for classification to fairness, we
theoretically explain why the majority of fairness classifiers designed for
low-capacity models should not be used in settings involving high-capacity models,
a scenario common to computer vision. We corroborate this analysis with
extensive experimental support that shows that many of the fairness heuristics
used in computer vision also degrade performance on the most disadvantaged
groups. Building on these insights, we propose an adaptive augmentation
strategy that, uniquely among the methods tested, improves performance for
the disadvantaged groups.
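As a minimal sketch of the general idea behind such a group-adaptive augmentation schedule (not the authors' exact strategy): groups with worse held-out accuracy receive stronger augmentation, so extra training signal flows to where the classifier is weakest. The function name, the linear schedule, and the rate bounds below are illustrative assumptions.

```python
# Hypothetical sketch: map per-group validation accuracy to an augmentation
# probability, augmenting worse-performing groups more aggressively.
import numpy as np

def group_augmentation_rates(val_acc_per_group, min_rate=0.1, max_rate=0.9):
    acc = np.asarray(val_acc_per_group, dtype=float)
    # Normalize accuracies to [0, 1] relative to the best and worst group.
    spread = acc.max() - acc.min()
    rel = (acc - acc.min()) / spread if spread > 0 else np.zeros_like(acc)
    # Low relative accuracy -> rate close to max_rate.
    return min_rate + (1.0 - rel) * (max_rate - min_rate)

# Example: group 1 lags behind, so it is augmented most often.
print(group_augmentation_rates([0.95, 0.70, 0.85]))  # [0.1, 0.9, 0.42]
```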
Related papers
- How does promoting the minority fraction affect generalization? A theoretical study of the one-hidden-layer neural network on group imbalance [64.1656365676171]
Group imbalance has been a known problem in empirical risk minimization.
This paper quantifies the impact of individual groups on the sample complexity, the convergence rate, and the average and group-level testing performance.
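As a toy illustration of why group-level testing matters (an illustrative construction, not code from the paper): under group imbalance, overall accuracy can look healthy while the minority group fails entirely.

```python
import numpy as np

def group_accuracies(y_true, y_pred, groups):
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    correct = y_true == y_pred
    return {int(g): float(correct[groups == g].mean())
            for g in np.unique(groups)}

# Imbalanced toy data: the majority group (0) dominates the average.
y_true = np.array([1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1])
groups = np.array([0, 0, 0, 0, 1, 1])
print((y_true == y_pred).mean())                 # overall: 0.667
print(group_accuracies(y_true, y_pred, groups))  # {0: 1.0, 1: 0.0}
```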
arXiv Detail & Related papers (2024-03-12T04:38:05Z) - Learning Fair Ranking Policies via Differentiable Optimization of
Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
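To see why OWA terms are trainable at all, note that sorting propagates gradients through the sorted values, so a sorted weighted sum of group losses is differentiable almost everywhere. The PyTorch toy below shows only that basic mechanism; the paper's actual contribution, backpropagating through constrained optimizations of OWA objectives, is not reproduced here.

```python
import torch

def owa_loss(group_losses, weights):
    # OWA aggregation: the largest losses receive the largest weights.
    sorted_losses, _ = torch.sort(group_losses, descending=True)
    return torch.dot(weights, sorted_losses)

group_losses = torch.tensor([0.2, 0.9, 0.5], requires_grad=True)
weights = torch.tensor([0.5, 0.3, 0.2])  # non-increasing -> fairness-oriented
loss = owa_loss(group_losses, weights)
loss.backward()
print(loss.item())        # 0.5*0.9 + 0.3*0.5 + 0.2*0.2 = 0.64
print(group_losses.grad)  # each loss gets the weight of its sorted position
```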
arXiv Detail & Related papers (2024-02-07T20:53:53Z) - Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization [61.39201891894024]
Group distributionally robust optimization (group DRO) can minimize the worst-case loss over pre-defined groups.
We reformulate the group DRO framework by proposing Q-Diversity.
Characterized by an interactive training mode, Q-Diversity relaxes the group identification from annotation into direct parameterization.
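For context, a minimal sketch of the group DRO mechanism that Q-Diversity builds on: an exponentiated-gradient update (in the style of Sagawa et al.) that shifts weight onto the worst-loss groups. Q-Diversity's own contribution, replacing annotated groups with parameterized group assignments, is not shown.

```python
import torch

def group_dro_weights(q, group_losses, eta=0.1):
    # One multiplicative-weights step toward the worst-case group.
    q = q * torch.exp(eta * group_losses)
    return q / q.sum()

q = torch.ones(3) / 3  # start from uniform group weights
group_losses = torch.tensor([0.2, 0.9, 0.5])  # recomputed each step in practice
for _ in range(50):
    q = group_dro_weights(q, group_losses)
print(q)  # mass concentrates on group 1, the worst-performing group
robust_loss = torch.dot(q, group_losses)  # the reweighted training loss
```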
arXiv Detail & Related papers (2023-05-20T07:02:27Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
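A speculative sketch of what jointly penalizing the two criteria can look like at the representation level; the squared-distance terms and weights below are illustrative assumptions, not DualFair's contrastive formulation.

```python
import torch

def dual_fairness_penalty(z, z_cf, groups, lam_group=1.0, lam_cf=1.0):
    # Group term: pull the mean embeddings of the two groups together.
    mu0 = z[groups == 0].mean(dim=0)
    mu1 = z[groups == 1].mean(dim=0)
    group_term = (mu0 - mu1).pow(2).sum()
    # Counterfactual term: each sample should embed near its counterfactual
    # (the same sample with the sensitive attribute flipped).
    cf_term = (z - z_cf).pow(2).sum(dim=1).mean()
    return lam_group * group_term + lam_cf * cf_term

z = torch.randn(8, 16)               # learned representations
z_cf = z + 0.1 * torch.randn(8, 16)  # counterfactual embeddings
groups = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(dual_fairness_penalty(z, z_cf, groups))
```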
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Re-weighting Based Group Fairness Regularization via Classwise Robust
Optimization [30.089819400033985]
We propose a principled method, dubbed FairDRO, which unifies the two learning schemes by incorporating a well-justified group fairness metric into the training objective.
We develop an iterative optimization algorithm that minimizes the resulting objective by automatically producing the correct re-weights for each group.
Our experiments show that FairDRO is scalable and easily adaptable to diverse applications.
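A rough sketch of the re-weighting side of such a scheme: per-group weights (however they are produced; FairDRO derives them from its group fairness metric) become per-sample weights inside a standard cross-entropy objective.

```python
import torch
import torch.nn.functional as F

def reweighted_loss(logits, targets, groups, group_weights):
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    weights = group_weights[groups]  # broadcast group weight to each sample
    return (weights * per_sample).sum() / weights.sum()

logits = torch.randn(6, 2)
targets = torch.tensor([0, 1, 0, 1, 0, 1])
groups = torch.tensor([0, 0, 0, 0, 1, 1])  # group 1 is the minority
group_weights = torch.tensor([0.5, 2.0])   # upweight the minority group
print(reweighted_loss(logits, targets, groups, group_weights))
```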
arXiv Detail & Related papers (2023-03-01T12:00:37Z) - Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem.
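In one dimension the barycenter picture is easy to sketch: averaging the groups' quantile functions gives the barycenter, and mapping each group's scores onto it equalizes acceptance rates at every threshold. The numpy toy below illustrates only this geometry; the paper's characterization covers randomized, attribute-aware classifiers in full generality.

```python
import numpy as np

def barycenter_calibrate(scores_by_group):
    # Barycenter quantile function = pointwise mean of group quantiles.
    qs = np.linspace(0, 1, 101)
    bary_q = np.mean([np.quantile(s, qs) for s in scores_by_group], axis=0)
    out = []
    for s in scores_by_group:
        ranks = np.argsort(np.argsort(s)) / max(len(s) - 1, 1)
        out.append(np.interp(ranks, qs, bary_q))  # map scores to barycenter
    return out

rng = np.random.default_rng(0)
g0 = rng.normal(0.6, 0.1, 500)  # advantaged group scores higher on average
g1 = rng.normal(0.4, 0.1, 500)
c0, c1 = barycenter_calibrate([g0, g1])
print((c0 > 0.5).mean(), (c1 > 0.5).mean())  # near-equal acceptance rates
```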
arXiv Detail & Related papers (2022-11-03T00:04:04Z) - Improved Approximation for Fair Correlation Clustering [4.629694186457133]
Correlation clustering is a ubiquitous paradigm in unsupervised machine learning where addressing unfairness is a major challenge.
Motivated by this, we study Fair Correlation Clustering where the data points may belong to different protected groups.
Our paper significantly generalizes and improves on the quality guarantees of previous work of Ahmadi et al. and Ahmadian et al.
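As a quick illustration of the constraint fair correlation clustering enforces (an illustrative helper, not an algorithm from the paper): each cluster's protected-group proportions should match the dataset-level proportions.

```python
from collections import Counter

def cluster_balance(labels, groups):
    # Per-cluster fraction of each protected group.
    clusters = {}
    for lab, g in zip(labels, groups):
        clusters.setdefault(lab, Counter())[g] += 1
    return {lab: {g: n / sum(c.values()) for g, n in c.items()}
            for lab, c in clusters.items()}

labels = [0, 0, 0, 0, 1, 1, 1, 1]
groups = ["a", "a", "b", "b", "a", "a", "b", "b"]
print(cluster_balance(labels, groups))  # both clusters: 50% a, 50% b
```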
arXiv Detail & Related papers (2022-06-09T03:07:57Z) - Improving the Fairness of Chest X-ray Classifiers [19.908277166053185]
We ask whether striving to achieve zero disparities in predictive performance (i.e. group fairness) is the appropriate fairness definition in the clinical setting.
We find, consistent with prior work on non-clinical data, that methods which strive to achieve better worst-group performance do not outperform simple data balancing.
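A minimal sketch of the data-balancing baseline referenced above: sample each (group, label) cell with probability inversely proportional to its size, so every cell contributes equally in expectation. The weighting rule is the standard balancing heuristic, assumed here rather than taken from the paper.

```python
import numpy as np

def balanced_sample_weights(groups, labels):
    cells = list(zip(groups, labels))
    counts = {c: cells.count(c) for c in set(cells)}
    w = np.array([1.0 / counts[c] for c in cells])
    return w / w.sum()  # sampling distribution over the training set

groups = [0, 0, 0, 0, 1, 1]
labels = [1, 1, 0, 0, 1, 0]
print(balanced_sample_weights(groups, labels))
# Minority-group cells get larger weights: every cell totals 0.25 here.
```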
arXiv Detail & Related papers (2022-03-23T17:56:58Z) - Repairing Regressors for Fair Binary Classification at Any Decision
Threshold [8.322348511450366]
We show that fair performance can be increased across all decision thresholds at once.
We introduce a formal measure of Distributional Parity, which captures the degree of similarity in the distributions of classifications for different protected groups.
Our main result is to put forward a novel post-processing algorithm based on optimal transport, which provably maximizes Distributional Parity.
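A sketch of measuring (not maximizing) such a parity notion: the 1-D Wasserstein distance between the groups' score distributions is zero exactly when thresholding at any value yields equal positive rates. The paper's optimal-transport post-processing itself is not reproduced here.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
scores_g0 = rng.normal(0.6, 0.1, 1000)  # regressor scores, group 0
scores_g1 = rng.normal(0.4, 0.1, 1000)  # regressor scores, group 1
print(wasserstein_distance(scores_g0, scores_g1))  # ~0.2 parity gap

# The disparity this induces in binary decisions at one fixed threshold:
print((scores_g0 > 0.5).mean() - (scores_g1 > 0.5).mean())
```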
arXiv Detail & Related papers (2022-03-14T20:53:35Z) - Unsupervised Learning of Debiased Representations with Pseudo-Attributes [85.5691102676175]
We propose a simple but effective unsupervised debiasing technique.
We perform clustering in the feature embedding space and identify pseudo-attributes by taking advantage of the clustering results.
We then employ a novel cluster-based reweighting scheme for learning debiased representations.
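A hedged sketch of that pipeline: cluster the embeddings to obtain pseudo-attributes, then weight samples inversely to cluster size so that small, likely bias-conflicting clusters count more. The cluster count and the inverse-size rule are illustrative choices, not the paper's exact scheme.

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_attribute_weights(embeddings, n_clusters=4):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    pseudo = km.fit_predict(embeddings)      # pseudo-attribute per sample
    counts = np.bincount(pseudo, minlength=n_clusters)
    weights = 1.0 / counts[pseudo]           # rare clusters -> large weights
    return pseudo, weights / weights.mean()  # normalize around 1.0

embeddings = np.random.randn(200, 32)        # stand-in feature embeddings
pseudo, weights = pseudo_attribute_weights(embeddings)
print(weights.min(), weights.max())
```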
arXiv Detail & Related papers (2021-08-06T05:20:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.