Fair Mixup: Fairness via Interpolation
- URL: http://arxiv.org/abs/2103.06503v1
- Date: Thu, 11 Mar 2021 06:57:26 GMT
- Title: Fair Mixup: Fairness via Interpolation
- Authors: Ching-Yao Chuang, Youssef Mroueh
- Abstract summary: We propose fair mixup, a new data augmentation strategy for imposing the fairness constraint.
We show that fairness can be achieved by regularizing the models on paths of interpolated samples between the groups.
We empirically show that it ensures better generalization for both accuracy and fairness measurements across benchmarks.
- Score: 28.508444261249423
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training classifiers under fairness constraints such as group fairness
regularizes the disparities of predictions between the groups. Nevertheless,
even though the constraints are satisfied during training, they might not
generalize at evaluation time. To improve the generalizability of fair
classifiers, we propose fair mixup, a new data augmentation strategy for
imposing the fairness constraint. In particular, we show that fairness can be
achieved by regularizing the models on paths of interpolated samples between
the groups. We use mixup, a powerful data augmentation strategy, to generate
these interpolates. We analyze fair mixup and empirically show that it ensures
better generalization for both accuracy and fairness measurements on tabular,
vision, and language benchmarks.
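As a rough illustration of the idea described above, the sketch below penalizes variation of the mean prediction along the mixup path between batches from two demographic groups (a demographic parity flavor). The function name, the uniform grid over t, and the finite-difference approximation are illustrative assumptions, not the authors' reference implementation.

```python
import torch

def fair_mixup_penalty(model, x_group0, x_group1, n_steps=10):
    """Penalize variation of the mean prediction along the mixup path
    between equal-sized batches drawn from two demographic groups."""
    ts = torch.linspace(0.0, 1.0, n_steps)
    path_means = []
    for t in ts:
        x_t = t * x_group0 + (1.0 - t) * x_group1  # interpolated samples
        path_means.append(model(x_t).mean())
    path_means = torch.stack(path_means)
    # Finite-difference surrogate for the smoothness of t -> E[f(x_t)]
    return (path_means[1:] - path_means[:-1]).abs().sum()

# Hypothetical usage inside a training step:
# loss = task_loss + lam * fair_mixup_penalty(model, x0, x1)
```

In practice the penalty would be added to the task loss with a weighting coefficient that trades off accuracy against the fairness regularizer.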
Related papers
- Towards Cohesion-Fairness Harmony: Contrastive Regularization in
Individual Fair Graph Clustering [5.255750357176021]
iFairNMTF is an individual-fairness Nonnegative Matrix Tri-Factorization model with contrastive fairness regularization.
Our model allows for customizable accuracy-fairness trade-offs, thereby enhancing user autonomy.
arXiv Detail & Related papers (2024-02-16T15:25:56Z)
- Data Augmentation via Subgroup Mixup for Improving Fairness [31.296907816698987]
We propose data augmentation via pairwise mixup across subgroups to improve group fairness.
Inspired by the successes of mixup for improving classification performance, we develop a pairwise mixup scheme to augment training data.
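As a rough illustration of such a pairwise scheme (the helper name and the Beta-distributed mixing weight below are assumptions, not this paper's exact recipe):

```python
import torch

def subgroup_mixup(x_a, y_a, x_b, y_b, alpha=0.2):
    """Mix paired examples drawn from two different subgroups."""
    lam = torch.distributions.Beta(alpha, alpha).sample()  # mixing weight
    x_mix = lam * x_a + (1.0 - lam) * x_b
    y_mix = lam * y_a + (1.0 - lam) * y_b  # soft labels
    return x_mix, y_mix
```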
arXiv Detail & Related papers (2023-09-13T17:32:21Z)
- DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Learning Informative Representation for Fairness-aware Multivariate
Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Repairing Regressors for Fair Binary Classification at Any Decision
Threshold [8.322348511450366]
We show that fairness can be improved across all decision thresholds at once.
We introduce a formal measure of Distributional Parity, which captures the degree of similarity in the distributions of classifications for different protected groups.
Our main result is to put forward a novel post-processing algorithm based on optimal transport, which provably maximizes Distributional Parity.
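For intuition only, the 1-Wasserstein distance gives one simple way to quantify how similar two groups' score distributions are; the sketch below is an illustrative proxy, not the paper's Distributional Parity measure or its optimal-transport post-processing algorithm.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def distribution_gap(scores_group0, scores_group1):
    """Smaller values mean the groups' score distributions are more similar."""
    return wasserstein_distance(scores_group0, scores_group1)

rng = np.random.default_rng(0)
print(distribution_gap(rng.normal(0.4, 0.1, 1000), rng.normal(0.6, 0.1, 1000)))
# Prints roughly 0.2, reflecting the mean shift between the two distributions.
```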
arXiv Detail & Related papers (2022-03-14T20:53:35Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- FACT: A Diagnostic for Group Fairness Trade-offs [23.358566041117083]
Group fairness is a class of fairness notions that measure how differently groups of individuals are treated according to their protected attributes.
We propose a general diagnostic that enables systematic characterization of these trade-offs in group fairness.
arXiv Detail & Related papers (2020-04-07T14:15:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.