FIFA: Making Fairness More Generalizable in Classifiers Trained on
Imbalanced Data
- URL: http://arxiv.org/abs/2206.02792v1
- Date: Mon, 6 Jun 2022 04:39:25 GMT
- Title: FIFA: Making Fairness More Generalizable in Classifiers Trained on
Imbalanced Data
- Authors: Zhun Deng, Jiayao Zhang, Linjun Zhang, Ting Ye, Yates Coley, Weijie J.
Su, James Zou
- Abstract summary: We propose a theoretically-principled, yet Flexible approach that is Imbalance-Fairness-Aware (FIFA).
FIFA encourages both classification and fairness generalization and can be flexibly combined with many existing fair learning methods with logits-based losses.
We demonstrate the power of FIFA by combining it with a popular fair classification algorithm, and the resulting algorithm achieves significantly better fairness generalization on several real-world datasets.
- Score: 34.70704786008873
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic fairness plays an important role in machine learning and imposing
fairness constraints during learning is a common approach. However, many
datasets are imbalanced in certain label classes (e.g. "healthy") and sensitive
subgroups (e.g. "older patients"). Empirically, this imbalance leads to a lack
of generalizability not only of classification, but also of fairness
properties, especially in over-parameterized models. For example,
fairness-aware training may ensure equalized odds (EO) on the training data,
but EO is far from being satisfied on new users. In this paper, we propose a
theoretically-principled, yet Flexible approach that is
Imbalance-Fairness-Aware (FIFA). Specifically, FIFA encourages both
classification and fairness generalization and can be flexibly combined with
many existing fair learning methods with logits-based losses. While our main
focus is on EO, FIFA can be directly applied to achieve equalized opportunity
(EqOpt); and under certain conditions, it can also be applied to other fairness
notions. We demonstrate the power of FIFA by combining it with a popular fair
classification algorithm, and the resulting algorithm achieves significantly
better fairness generalization on several real-world datasets.
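For concreteness: equalized odds requires that true-positive and false-positive rates match across sensitive groups, and FIFA works by adding imbalance-aware margins to logits. The sketch below is illustrative only; the EO-gap computation follows the standard definition, while the margin rule (margins shrinking with subgroup sample count, LDAM-style with an assumed exponent) is a stand-in, not the paper's exact formula.

```python
import numpy as np

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group gap in TPR (y=1) and FPR (y=0); standard EO measure.
    Assumes every (label, group) cell is non-empty."""
    gaps = []
    for y in (0, 1):
        mask = y_true == y
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

def subgroup_margins(labels, groups, c=1.0, power=0.25):
    """Hypothetical FIFA-flavored margins: rarer (label, group) subgroups get
    larger margins, here ~ c * n^{-power}; FIFA's actual scaling may differ."""
    return {(y, g): c / max(int(((labels == y) & (groups == g)).sum()), 1) ** power
            for y in np.unique(labels) for g in np.unique(groups)}
```

Evaluating the EO gap on held-out data rather than the training set is exactly the generalization issue the paper targets.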
Related papers
- Practical Approaches for Fair Learning with Multitype and Multivariate
Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
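FairCOCCO's exact operator is not spelled out above; as a rough stand-in from the same family of RKHS dependence criteria, the classic HSIC statistic below measures dependence between predictions and a sensitive attribute (near 0 when they are independent, in the large-sample limit).

```python
import numpy as np

def rbf_kernel(x, sigma=1.0):
    """Gaussian kernel matrix for a 1-D sample."""
    d = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d / (2 * sigma ** 2))

def hsic(pred, sens, sigma=1.0):
    """Biased HSIC estimate: trace(K H L H) / (n-1)^2 with centering matrix H."""
    n = len(pred)
    K = rbf_kernel(np.asarray(pred, float), sigma)
    L = rbf_kernel(np.asarray(sens, float), sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```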
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning [8.602734307457387]
A key element in achieving algorithmic fairness with respect to protected groups is the simultaneous reduction of class and protected group imbalance in the underlying training data.
We propose a novel oversampling algorithm, Fair Oversampling, that addresses both skewed class distributions and protected features.
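The summary above only names the balancing goal; the minimal sketch below keeps the core idea, naive random duplication until every (class, protected-group) cell matches the largest one. The function name and the duplication strategy are illustrative assumptions; the actual Fair Oversampling algorithm synthesizes new points.

```python
import numpy as np

def oversample_to_balance(X, y, group, seed=0):
    """Duplicate samples at random so each (class, group) cell reaches the size
    of the largest cell; a crude stand-in for synthetic oversampling."""
    rng = np.random.default_rng(seed)
    cells = [np.where((y == c) & (group == g))[0]
             for c in np.unique(y) for g in np.unique(group)]
    cells = [idx for idx in cells if len(idx)]
    target = max(len(idx) for idx in cells)
    sel = np.concatenate([rng.choice(idx, size=target, replace=True) for idx in cells])
    return X[sel], y[sel], group[sel]
```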
arXiv Detail & Related papers (2022-07-13T09:48:52Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
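Curvature matching is only named above; as a loose illustration (not CUMA's actual objective), one can proxy each group's loss-surface sharpness by the average loss increase under small random weight perturbations and penalize the between-group gap.

```python
import numpy as np

def logistic_loss(w, X, y):
    """Mean logistic loss for labels y in {-1, +1}."""
    return np.mean(np.log1p(np.exp(-y * (X @ w))))

def sharpness_gap(w, X, y, group, eps=1e-2, trials=20, seed=0):
    """Hypothetical flatness-fairness penalty: difference between the sharpest
    and flattest per-group loss landscapes around the current weights w."""
    rng = np.random.default_rng(seed)
    def sharpness(mask):
        base = logistic_loss(w, X[mask], y[mask])
        bumps = [logistic_loss(w + eps * rng.standard_normal(w.shape),
                               X[mask], y[mask]) for _ in range(trials)]
        return np.mean(bumps) - base
    vals = [sharpness(group == g) for g in np.unique(group)]
    return max(vals) - min(vals)
```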
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Parity-based Cumulative Fairness-aware Boosting [7.824964622317634]
Data-driven AI systems can lead to discrimination on the basis of protected attributes like gender or race.
We propose AdaFair, a fairness-aware boosting ensemble that changes the data distribution at each round.
Our experiments show that our approach can achieve fairness in terms of statistical parity, equal opportunity, and disparate mistreatment.
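AdaFair's cumulative update is not given above; the single schematic boosting-round step below conveys only the flavor, upweighting misclassified samples with an extra boost for errors in whichever group currently has the lower true-positive rate. The exact form is an assumption, not the paper's algorithm.

```python
import numpy as np

def fairness_aware_reweight(weights, y_true, y_pred, group, alpha=0.5):
    """One schematic round: errors get exp(1) times more weight, and errors in
    the currently disadvantaged group get exp(1 + alpha); then renormalize."""
    err = y_pred != y_true
    tpr = {g: y_pred[(group == g) & (y_true == 1)].mean() for g in np.unique(group)}
    worst = min(tpr, key=tpr.get)
    w = weights * np.exp(err * (1.0 + alpha * (group == worst)))
    return w / w.sum()
```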
arXiv Detail & Related papers (2022-01-04T14:16:36Z)
- Improving Fairness via Federated Learning [14.231231094281362]
We propose a new theoretical framework, with which we analyze the value of federated learning in improving fairness.
We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data.
To resolve this, we propose FedFB, a private fair learning algorithm on decentralized data with a modified FedAvg protocol.
arXiv Detail & Related papers (2021-10-29T05:25:44Z)
- FairFed: Enabling Group Fairness in Federated Learning [22.913999279079878]
Federated learning has been viewed as a promising solution for training machine learning models among multiple parties.
We propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method.
Our proposed method outperforms the state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive attribute distributions.
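The aggregation rule itself is only named above; a minimal sketch, assuming the server starts from FedAvg's size-proportional weights and discounts clients whose locally measured fairness gap deviates most from the average, might look like this (all names and the exponential discount are illustrative).

```python
import numpy as np

def fairness_aware_aggregate(client_models, client_sizes, client_gaps, beta=1.0):
    """Weighted model average: FedAvg's size-based weights, rescaled so clients
    whose local fairness gap is far from the mean contribute less."""
    sizes = np.asarray(client_sizes, float)
    gaps = np.asarray(client_gaps, float)
    w = sizes / sizes.sum()
    w *= np.exp(-beta * np.abs(gaps - gaps.mean()))
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, client_models))  # models as arrays
```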
arXiv Detail & Related papers (2021-10-02T17:55:20Z)
- FairBalance: How to Achieve Equalized Odds With Data Pre-processing [15.392349679172707]
This research seeks to benefit the software engineering community by providing a simple yet effective pre-processing approach to achieve equalized odds fairness in machine learning software.
We propose FairBalance, a pre-processing algorithm which balances the class distribution in each demographic group by assigning calculated weights to the training data.
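The weights follow directly from the stated goal, balancing the class distribution within each demographic group; a straightforward rendering (weight inversely proportional to the (group, class) cell size, so each class carries equal total weight inside its group) is sketched below. The function name is illustrative.

```python
import numpy as np

def fairbalance_style_weights(y, group):
    """Per-sample weights such that, within every demographic group, all classes
    contribute equal total weight; a direct reading of the stated goal."""
    w = np.zeros(len(y), float)
    for g in np.unique(group):
        in_g = group == g
        classes = np.unique(y[in_g])
        for c in classes:
            cell = in_g & (y == c)
            w[cell] = in_g.sum() / (len(classes) * cell.sum())
    return w
```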
arXiv Detail & Related papers (2021-07-17T20:40:45Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
arXiv Detail & Related papers (2020-09-14T04:25:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.