Fair learning with Wasserstein barycenters for non-decomposable performance measures
- URL: http://arxiv.org/abs/2209.00427v1
- Date: Thu, 1 Sep 2022 13:06:43 GMT
- Title: Fair learning with Wasserstein barycenters for non-decomposable performance measures
- Authors: Solenne Gaucher and Nicolas Schreuder and Evgenii Chzhen
- Abstract summary: We show that maximizing accuracy under the demographic parity constraint is equivalent to solving a corresponding regression problem followed by thresholding at level $1/2$.
We extend this result to linear-fractional classification measures (e.g., ${\rm F}$-score, AM measure, balanced accuracy, etc.)
- Score: 8.508198765617198
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This work provides several fundamental characterizations of the optimal
classification function under the demographic parity constraint. In the
awareness framework, akin to the classical unconstrained classification case,
we show that maximizing accuracy under this fairness constraint is equivalent
to solving a corresponding regression problem followed by thresholding at level
$1/2$. We extend this result to linear-fractional classification measures
(e.g., ${\rm F}$-score, AM measure, balanced accuracy, etc.), highlighting the
fundamental role played by the regression problem in this framework. Our
results leverage the recently developed connection between the demographic parity
constraint and the multi-marginal optimal transport formulation. Informally,
our result shows that the transition from the unconstrained problem to the
fair one is achieved by replacing the conditional expectation of the label with
the solution of the fair regression problem. Finally, leveraging our analysis,
we demonstrate an equivalence between the awareness and the unawareness setups
in the case of two sensitive groups.
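This characterization suggests a simple plug-in procedure: estimate the group-wise regression function, map its scores onto their one-dimensional Wasserstein-2 barycenter via quantile matching, and threshold the result at $1/2$. Below is a minimal numpy sketch under that reading; the function names, the empirical-quantile details, and the choice of barycenter weights are our assumptions, not the paper's implementation.

```python
import numpy as np

def fair_regression_barycenter(eta, s, group_weights):
    """Quantile-matching form of the 1-D Wasserstein-2 barycenter:
        g(x, s) = sum_{s'} w_{s'} * Q_{s'}(F_s(eta(x, s))),
    where F_s is the empirical CDF of the scores in group s and Q_{s'}
    the empirical quantile function of group s'.
    eta: (n,) regression scores; s: (n,) group labels; group_weights: dict."""
    g = np.zeros_like(eta, dtype=float)
    sorted_scores = {k: np.sort(eta[s == k]) for k in np.unique(s)}
    for k, scores_k in sorted_scores.items():
        mask = s == k
        # Empirical CDF value of each score within its own group, in (0, 1].
        ranks = np.searchsorted(scores_k, eta[mask], side="right") / mask.sum()
        for kp, scores_kp in sorted_scores.items():
            # Empirical quantile of group k' at the same rank.
            idx = np.clip(np.ceil(ranks * len(scores_kp)).astype(int) - 1,
                          0, len(scores_kp) - 1)
            g[mask] += group_weights[kp] * scores_kp[idx]
    return g

def fair_classifier(eta, s, group_weights):
    # Per the paper's characterization: threshold the fair regression at 1/2.
    return (fair_regression_barycenter(eta, s, group_weights) >= 0.5).astype(int)
```

A natural choice for the weights is the empirical group frequencies, e.g. `group_weights = {k: np.mean(s == k) for k in np.unique(s)}`, matching the usual barycenter formulation in the fair-regression literature.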
Related papers
- Demographic parity in regression and classification within the unawareness framework [8.057006406834466]
We characterize the optimal fair regression function when minimizing the quadratic loss.
We also study the connection between optimal fair cost-sensitive classification and optimal fair regression.
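For reference, the demographic parity constraint studied in both papers requires the prediction to be statistically independent of the sensitive attribute; in standard notation (ours, not quoted from either abstract):

```latex
% Classification: equal positive-prediction rates across groups.
\mathbb{P}\big(f(X) = 1 \mid S = s\big) = \mathbb{P}\big(f(X) = 1 \mid S = s'\big)
  \quad \text{for all } s, s'.
% Regression: the entire law of the predictions must match across groups.
\operatorname{Law}\big(f(X) \mid S = s\big) = \operatorname{Law}\big(f(X) \mid S = s'\big)
  \quad \text{for all } s, s'.
```

In the unawareness framework $f$ depends on $X$ only; under awareness one writes $f(X, S)$ instead.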
arXiv Detail & Related papers (2024-09-04T06:43:17Z)
- Covariance-corrected Whitening Alleviates Network Degeneration on Imbalanced Classification [6.197116272789107]
Class imbalance is a critical issue in image classification that significantly affects the performance of deep recognition models.
We propose a novel framework called Whitening-Net to mitigate the resulting degenerate solutions.
In scenarios with extreme class imbalance, the batch covariance statistic exhibits significant fluctuations, impeding the convergence of the whitening operation.
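The whitening operation referred to above decorrelates features using the batch covariance; here is a minimal ZCA-style sketch in numpy (the actual Whitening-Net design may differ):

```python
import numpy as np

def zca_whiten(feats, eps=1e-5):
    """ZCA-whiten a batch of features so their covariance is the identity.
    feats: (N, D) array. eps stabilizes small eigenvalues, which matters
    when extreme class imbalance makes the batch covariance noisy."""
    centered = feats - feats.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / feats.shape[0]
    eigvals, eigvecs = np.linalg.eigh(cov)
    whitener = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return centered @ whitener
```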
arXiv Detail & Related papers (2024-08-30T10:49:33Z)
- Fairness-Aware Meta-Learning via Nash Bargaining [63.44846095241147]
We introduce a two-stage meta-learning framework to address issues of group-level fairness in machine learning.
The first stage involves the use of a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model.
We show empirical effects across various fairness objectives in six key fairness datasets and two image classification tasks.
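Concretely, given per-group hypergradients $g_1, \dots, g_K$ that conflict, a Nash Bargaining Solution selects a common update direction maximizing the product of per-group gains; one standard formalization (our paraphrase, not necessarily the paper's exact program) is:

```latex
d^{\star} \in \operatorname*{arg\,max}_{\|d\| \le 1} \sum_{i=1}^{K} \log\big(g_i^{\top} d\big)
  \quad \text{subject to } g_i^{\top} d > 0 \ \text{for all } i,
```

where maximizing the sum of logarithms is equivalent to maximizing the product $\prod_i g_i^{\top} d$, the defining objective of Nash bargaining.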
arXiv Detail & Related papers (2024-06-11T07:34:15Z)
- Fairness-Accuracy Trade-Offs: A Causal Perspective [58.06306331390586]
We analyze the tension between fairness and accuracy through a causal lens for the first time.
We show that enforcing a causal constraint often reduces the disparity between demographic groups.
We introduce a new neural approach for causally-constrained fair learning.
arXiv Detail & Related papers (2024-05-24T11:19:52Z)
- Balanced Classification: A Unified Framework for Long-Tailed Object Detection [74.94216414011326]
Conventional detectors suffer from performance degradation when dealing with long-tailed data due to a classification bias towards the majority head categories.
We introduce a unified framework called BAlanced CLassification (BACL), which enables adaptive rectification of inequalities caused by disparities in category distribution.
BACL consistently achieves performance improvements across various datasets with different backbones and architectures.
arXiv Detail & Related papers (2023-08-04T09:11:07Z)
- Mean Parity Fair Regression in RKHS [43.98593032593897]
We study the fair regression problem under the notion of Mean Parity (MP) fairness.
We address this problem by leveraging reproducing kernel Hilbert spaces (RKHS).
We derive a corresponding regression function that can be implemented efficiently and provides interpretable tradeoffs.
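Mean Parity relaxes demographic parity from equality of whole prediction distributions to equality of conditional means; in standard notation:

```latex
\mathbb{E}\big[f(X) \mid S = s\big] = \mathbb{E}\big[f(X) \mid S = s'\big]
  \quad \text{for all groups } s, s'.
```

Via kernel mean embeddings this constraint is linear in $f$ over an RKHS, which is what makes an efficient, interpretable constrained regression tractable.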
arXiv Detail & Related papers (2023-02-21T02:44:50Z)
- Calibrating Segmentation Networks with Margin-based Label Smoothing [19.669173092632]
We provide a unifying constrained-optimization perspective of current state-of-the-art calibration losses.
These losses could be viewed as approximations of a linear penalty imposing equality constraints on logit distances.
We propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances.
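A minimal sketch of such a margin-based penalty on logit distances, in numpy; the margin value and the exact penalty form are assumptions for illustration:

```python
import numpy as np

def margin_logit_penalty(logits, margin=10.0):
    """Exact penalty for the inequality constraints d_j <= margin, where
    d_j = max_k z_k - z_j are the logit distances. logits: (N, C)."""
    dist = logits.max(axis=1, keepdims=True) - logits  # d_j >= 0
    return np.maximum(0.0, dist - margin).sum(axis=1).mean()

# The training loss would combine this with the usual cross-entropy:
#   loss = cross_entropy(logits, y) + lam * margin_logit_penalty(logits, margin)
```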
arXiv Detail & Related papers (2022-09-09T20:21:03Z)
- An Intermediate-level Attack Framework on The Basis of Linear Regression [89.85593878754571]
This paper substantially extends our work published at ECCV, in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples.
We advocate establishing a direct linear mapping from the intermediate-level discrepancies (between adversarial features and benign features) to the classification prediction loss of the adversarial example.
We show that 1) a variety of linear regression models can all be considered for establishing the mapping, 2) the magnitude of the finally obtained intermediate-level discrepancy is linearly correlated with adversarial transferability, and 3) a further performance boost can be achieved by performing multiple runs of the baseline attack with …
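A toy sketch of the core idea: over the iterations of a baseline attack, collect intermediate-level feature discrepancies and the corresponding prediction losses, fit a linear map between them, and use the fitted direction to score candidate perturbations. The interface below is hypothetical:

```python
import numpy as np

def fit_discrepancy_map(discrepancies, losses):
    """Least-squares fit of w with losses ≈ discrepancies @ w.
    discrepancies: (T, D) feature differences (adversarial - benign)
    from T baseline-attack iterations; losses: (T,) prediction losses."""
    w, *_ = np.linalg.lstsq(discrepancies, losses, rcond=None)
    return w

def transferability_score(discrepancy, w):
    """Project a candidate's intermediate-level discrepancy onto the
    fitted direction; larger projections should transfer better."""
    return discrepancy @ w
```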
arXiv Detail & Related papers (2022-03-21T03:54:53Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels.
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
- GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation of stationary values can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions.
The resulting algorithm, GenDICE, is straightforward and effective.
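In DICE-style notation (ours, not quoted from the abstract), the correction ratio and the resulting estimator of the stationary value are:

```latex
\tau^{\star}(s,a) = \frac{d_{\pi}(s,a)}{d_{\mathcal{D}}(s,a)},
\qquad
\rho(\pi) = \mathbb{E}_{(s,a,r) \sim d_{\mathcal{D}}}\big[\tau^{\star}(s,a)\, r\big],
```

where $d_{\pi}$ is the stationary distribution induced by the target policy and $d_{\mathcal{D}}$ is the distribution of the offline data; GenDICE estimates $\tau^{\star}$ directly from samples.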
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.