FairNN - Conjoint Learning of Fair Representations for Fair Decisions
- URL: http://arxiv.org/abs/2004.02173v2
- Date: Sat, 11 Apr 2020 20:00:16 GMT
- Title: FairNN - Conjoint Learning of Fair Representations for Fair Decisions
- Authors: Tongxin Hu, Vasileios Iosifidis, Wentong Liao, Hang Zhang, Michael
Ying Yang, Eirini Ntoutsi, and Bodo Rosenhahn
- Abstract summary: We propose FairNN, a neural network that performs joint feature representation learning and classification for fairness-aware learning.
Our experiments on a variety of datasets demonstrate that such a joint approach is superior to separate treatment of unfairness in representation learning or supervised learning.
- Score: 40.05268461544044
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose FairNN, a neural network that performs
joint feature representation learning and classification for fairness-aware
learning. Our approach optimizes a multi-objective loss function that (a)
learns a fair representation by suppressing protected attributes, (b)
maintains the information content by minimizing a reconstruction loss, and
(c) solves the classification task in a fair manner by minimizing the
classification error while respecting an equalized-odds-based fairness
regularizer. Our experiments on a variety of datasets demonstrate that such a
joint approach is superior to the separate treatment of unfairness in
representation learning or supervised learning. Additionally, our
regularizers can be adaptively weighted to balance the different components
of the loss function, allowing for a very general framework for conjoint fair
representation learning and decision making.
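The three components of the joint objective can be sketched as a weighted sum of a reconstruction loss, a classification loss, and an equalized-odds gap. This is a minimal NumPy sketch, not the paper's implementation: the network architecture, the exact form of the fairness regularizer, the component weights, and the 0.5 decision threshold are all illustrative assumptions.

```python
import numpy as np

def reconstruction_loss(x, x_hat):
    """Mean squared error between inputs and their autoencoder reconstructions."""
    return float(np.mean((x - x_hat) ** 2))

def classification_loss(y, p):
    """Binary cross-entropy between labels y and predicted probabilities p."""
    eps = 1e-12
    return float(-np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps)))

def equalized_odds_gap(y, p, s, threshold=0.5):
    """Sum of absolute TPR and FPR differences between protected groups s=0 and s=1.

    Equalized odds asks that true- and false-positive rates match across groups;
    this gap is one common relaxation used as a differentiable-in-spirit penalty.
    """
    y_hat = (p >= threshold).astype(float)
    gap = 0.0
    for outcome in (0, 1):  # outcome 0 gives the FPR term, outcome 1 the TPR term
        rates = []
        for group in (0, 1):
            mask = (s == group) & (y == outcome)
            rates.append(y_hat[mask].mean())
        gap += abs(rates[0] - rates[1])
    return float(gap)

def fairnn_objective(x, x_hat, y, p, s, w_rec=1.0, w_clf=1.0, w_fair=1.0):
    """Weighted sum of the three loss components of the joint objective."""
    return (w_rec * reconstruction_loss(x, x_hat)
            + w_clf * classification_loss(y, p)
            + w_fair * equalized_odds_gap(y, p, s))
```

Per the abstract, the weights `w_rec`, `w_clf`, and `w_fair` can be adapted during training to balance the components rather than being fixed up front.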
Related papers
- Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by huge margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
- Conditional Supervised Contrastive Learning for Fair Text Classification [59.813422435604025]
We study learning fair representations that satisfy a notion of fairness known as equalized odds for text classification via contrastive learning.
Specifically, we first theoretically analyze the connections between learning representations with a fairness constraint and conditional supervised contrastive objectives.
arXiv Detail & Related papers (2022-05-23T17:38:30Z)
- Fair Contrastive Learning for Facial Attribute Classification [25.436462696033846]
We propose a new Fair Supervised Contrastive Loss (FSCL) for fair visual representation learning.
In this paper, we for the first time analyze unfairness caused by supervised contrastive learning.
Our method is robust to the intensity of data bias and effectively works in incomplete supervised settings.
arXiv Detail & Related papers (2022-03-30T11:16:18Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
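The core idea summarized above, encouraging instances that share a class label to have similar representations, is a standard supervised contrastive objective. The sketch below is a minimal NumPy illustration of that generic objective, not the cited paper's exact loss; the temperature value and normalization choices are assumptions.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """For each anchor, pull same-label representations together relative to
    all other instances (an illustrative supervised contrastive objective)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize embeddings
    sim = z @ z.T / temperature                       # temperature-scaled cosine sims
    n = len(labels)
    losses = []
    for i in range(n):
        positives = labels == labels[i]
        positives[i] = False                          # exclude the anchor itself
        if not positives.any():
            continue                                  # anchor has no positives
        others = np.arange(n) != i
        log_denom = np.log(np.exp(sim[i, others]).sum())
        # average -log p(positive | anchor) over this anchor's positives
        losses.append(np.mean(log_denom - sim[i, positives]))
    return float(np.mean(losses))
```

The loss is small when same-class embeddings cluster tightly and large when classes are mixed, which is exactly the pressure toward shared-class similarity the summary describes.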
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Spatial Contrastive Learning for Few-Shot Classification [9.66840768820136]
We propose a novel attention-based spatial contrastive objective to learn locally discriminative and class-agnostic features.
With extensive experiments, we show that the proposed method outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2020-12-26T23:39:41Z)
- Adversarial Training Reduces Information and Improves Transferability [81.59364510580738]
Recent results show that features of adversarially trained networks for classification, in addition to being robust, enable desirable properties such as invertibility.
We show that adversarial training can improve linear transferability to new tasks, giving rise to a new trade-off between the transferability of representations and accuracy on the source task.
arXiv Detail & Related papers (2020-07-22T08:30:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.