FedFair: Training Fair Models In Cross-Silo Federated Learning
- URL: http://arxiv.org/abs/2109.05662v1
- Date: Mon, 13 Sep 2021 01:30:04 GMT
- Title: FedFair: Training Fair Models In Cross-Silo Federated Learning
- Authors: Lingyang Chu, Lanjun Wang, Yanjie Dong, Jian Pei, Zirui Zhou, Yong Zhang
- Abstract summary: We develop FedFair, a well-designed federated learning framework, which can successfully train a fair model with high performance without any data privacy infringement.
Our experiments on three real-world data sets demonstrate the excellent fair model training performance of our method.
- Score: 47.63052284529811
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building fair machine learning models is becoming increasingly important. As
many powerful models are built by collaboration among multiple parties, each
holding some sensitive data, it is natural to explore the feasibility of
training fair models in cross-silo federated learning so that fairness, privacy
and collaboration can be fully respected simultaneously. However, it is a very
challenging task, since it is far from trivial to accurately estimate the
fairness of a model without knowing the private data of the participating
parties. In this paper, we first propose a federated estimation method to
accurately estimate the fairness of a model without infringing the data privacy
of any party. Then, we use the fairness estimation to formulate a novel problem
of training fair models in cross-silo federated learning. We develop FedFair, a
well-designed federated learning framework, which can successfully train a fair
model with high performance without any data privacy infringement. Our
extensive experiments on three real-world data sets demonstrate the excellent
fair model training performance of our method.
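The core idea of federated fairness estimation can be sketched as follows: each party reports only aggregate, group-wise prediction counts, and the server combines them into a global fairness metric such as the demographic parity gap. This is a minimal illustration, not the paper's actual estimator or protocol (which the abstract does not detail); the function names and the choice of demographic parity as the metric are assumptions.

```python
# Minimal sketch of federated fairness estimation: each party shares only
# aggregate counts per sensitive group, never raw records. The demographic
# parity gap is one common fairness metric; FedFair's exact estimator is not
# specified in the abstract, so this is illustrative only.

def local_group_counts(sensitive, predictions):
    """Run on each party's private data; returns only aggregates."""
    counts = {}  # group -> (num_positive_predictions, group_size)
    for a, y_hat in zip(sensitive, predictions):
        pos, n = counts.get(a, (0, 0))
        counts[a] = (pos + y_hat, n + 1)
    return counts

def federated_parity_gap(all_counts):
    """Server-side: sum the parties' counts, then compute the global gap."""
    total = {}
    for counts in all_counts:
        for a, (pos, n) in counts.items():
            tpos, tn = total.get(a, (0, 0))
            total[a] = (tpos + pos, tn + n)
    rates = [pos / n for pos, n in total.values()]
    return max(rates) - min(rates)

# Two parties compute local aggregates on their own silos; the server
# sees only the counts, not the underlying samples.
c1 = local_group_counts([0, 0, 1, 1], [1, 1, 1, 0])
c2 = local_group_counts([0, 1, 1], [0, 1, 1])
gap = federated_parity_gap([c1, c2])  # |2/3 - 3/4| = 1/12
```

In a deployment, the aggregation step would typically run under secure aggregation so the server never sees any single party's counts in the clear.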
Related papers
- Enhancing Fairness in Neural Networks Using FairVIC [0.0]
Mitigating bias in automated decision-making systems, specifically deep learning models, is a critical challenge in achieving fairness.
We introduce FairVIC, an innovative approach designed to enhance fairness in neural networks by addressing inherent biases at the training stage.
We observe a significant improvement in fairness across all metrics tested, without significantly compromising the model's accuracy.
arXiv Detail & Related papers (2024-04-28T10:10:21Z) - Fantastic Gains and Where to Find Them: On the Existence and Prospect of
General Knowledge Transfer between Any Pretrained Model [74.62272538148245]
We show that for arbitrary pairings of pretrained models, one model extracts significant data context unavailable in the other.
We investigate if it is possible to transfer such "complementary" knowledge from one model to another without performance degradation.
arXiv Detail & Related papers (2023-10-26T17:59:46Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Accelerating Fair Federated Learning: Adaptive Federated Adam [0.0]
When datasets are not independent and identically distributed (non-IID), models trained by naive federated algorithms may be biased towards certain participants.
This is known as the fairness problem in federated learning.
We propose Adaptive Federated Adam (AdaFedAdam) to accelerate fair federated learning while alleviating bias.
arXiv Detail & Related papers (2023-01-23T10:56:12Z) - FairIF: Boosting Fairness in Deep Learning via Influence Functions with
Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed via influence functions using the sensitive attributes of a validation set.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z) - Model-Contrastive Federated Learning [92.9075661456444]
Federated learning enables multiple parties to collaboratively train a machine learning model without communicating their local data.
We propose MOON: model-contrastive federated learning.
Our experiments show that MOON significantly outperforms the other state-of-the-art federated learning algorithms on various image classification tasks.
arXiv Detail & Related papers (2021-03-30T11:16:57Z) - Fairness-aware Agnostic Federated Learning [47.26747955026486]
We develop a fairness-aware agnostic federated learning framework (AgnosticFair) to deal with the challenge of unknown testing distribution.
We use kernel reweighing functions to assign a reweighing value on each training sample in both loss function and fairness constraint.
The trained model can be directly applied to local sites, as it guarantees fairness on local data distributions.
arXiv Detail & Related papers (2020-10-10T17:58:20Z) - Collaborative Fairness in Federated Learning [24.7378023761443]
We propose a novel Collaborative Fair Federated Learning (CFFL) framework for deep learning.
CFFL lets participants converge to different models, achieving fairness without compromising predictive performance.
Experiments on benchmark datasets demonstrate that CFFL achieves high fairness and delivers comparable accuracy to the Distributed framework.
arXiv Detail & Related papers (2020-08-27T14:39:09Z) - FR-Train: A Mutual Information-Based Approach to Fair and Robust
Training [33.385118640843416]
We propose FR-Train, which holistically performs fair and robust model training.
In our experiments, FR-Train shows almost no decrease in fairness and accuracy in the presence of data poisoning.
arXiv Detail & Related papers (2020-02-24T13:37:29Z)
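Of the methods listed above, MOON's mechanism is concrete enough to sketch: each client adds a contrastive term that pulls its local model's representation toward the global model's (positive pair) and pushes it away from its previous local model's (negative pair). The sketch below is a hedged reconstruction of that idea; the temperature value and the toy vectors are assumptions, not values from the paper.

```python
import math

def cosine(u, v):
    """Cosine similarity between two representation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def moon_contrastive_loss(z_local, z_global, z_prev, tau=0.5):
    """Model-contrastive term in the spirit of MOON: treat (z_local,
    z_global) as the positive pair and (z_local, z_prev) as the negative
    pair, with temperature tau (value assumed here)."""
    pos = math.exp(cosine(z_local, z_global) / tau)
    neg = math.exp(cosine(z_local, z_prev) / tau)
    return -math.log(pos / (pos + neg))

# A local representation aligned with the global model incurs a lower loss
# than one that has drifted back toward the previous local model.
aligned = moon_contrastive_loss([1.0, 0.0], [1.0, 0.0], [0.0, 1.0])
drifted = moon_contrastive_loss([0.0, 1.0], [1.0, 0.0], [0.0, 1.0])
```

In the actual algorithm this term is added to the usual supervised loss during each client's local update, discouraging client drift under non-IID data.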
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.