Accelerating Fair Federated Learning: Adaptive Federated Adam
- URL: http://arxiv.org/abs/2301.09357v1
- Date: Mon, 23 Jan 2023 10:56:12 GMT
- Title: Accelerating Fair Federated Learning: Adaptive Federated Adam
- Authors: Li Ju, Tianru Zhang, Salman Toor and Andreas Hellander
- Abstract summary: When datasets are not independent and identically distributed (non-IID), models trained by naive federated algorithms may be biased towards certain participants.
This is known as the fairness problem in federated learning.
We propose Adaptive Federated Adam (AdaFedAdam) to accelerate fair federated learning with alleviated bias.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a distributed and privacy-preserving approach to train
a statistical model collaboratively from decentralized data of different
parties. However, when datasets of participants are not independent and
identically distributed (non-IID), models trained by naive federated algorithms
may be biased towards certain participants, and model performance across
participants is non-uniform. This is known as the fairness problem in federated
learning. In this paper, we formulate fairness-controlled federated learning as
a dynamic multi-objective optimization problem to ensure fair performance
across all participants. To solve the problem efficiently, we study the
convergence and bias of Adam as the server optimizer in federated learning, and
propose Adaptive Federated Adam (AdaFedAdam) to accelerate fair federated
learning with alleviated bias. We validate the effectiveness, Pareto
optimality and robustness of AdaFedAdam in numerical experiments and show that
AdaFedAdam outperforms existing algorithms, offering better convergence and
fairness properties for the federated scheme.
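The abstract does not spell out the update rule, but the idea of server-side Adam driven by a fairness-weighted pseudo-gradient can be sketched as below. The loss-based weighting and the exponent alpha are illustrative assumptions, not the exact AdaFedAdam formulation.

```python
import numpy as np

def server_adam_step(w, client_deltas, client_losses, state,
                     lr=1e-2, b1=0.9, b2=0.999, eps=1e-8, alpha=1.0):
    losses = np.asarray(client_losses, dtype=float)
    # Hypothetical fairness weighting: up-weight clients with higher loss
    # to push the global model toward uniform performance across clients.
    weights = losses ** alpha
    weights /= weights.sum()
    # The weighted average of client deltas acts as a pseudo-gradient
    # (negated, since deltas already point in the descent direction).
    g = -sum(wi * di for wi, di in zip(weights, client_deltas))
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g * g
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps)

# Toy round: 3 clients, 10 parameters.
d = 10
state = {"t": 0, "m": np.zeros(d), "v": np.zeros(d)}
w = np.zeros(d)
deltas = [np.random.randn(d) * 0.01 for _ in range(3)]
w = server_adam_step(w, deltas, client_losses=[0.9, 0.5, 1.4], state=state)
```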
Related papers
- Federated Learning with Classifier Shift for Class Imbalance [6.097542448692326]
Federated learning aims to learn a global model collaboratively while the training data belongs to different clients and is not allowed to be exchanged.
This paper proposes a simple and effective approach named FedShift which adds a shift to the classifier output during the local training phase to alleviate the negative impact of class imbalance.
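As a rough illustration of the idea, the sketch below applies a training-time logit adjustment based on each client's local class priors; the exact shift FedShift uses may differ, so treat the formula as an assumption.

```python
import numpy as np

def adjusted_logits(logits, local_class_counts, tau=1.0):
    """Shift classifier outputs by local class priors during local training."""
    prior = local_class_counts / local_class_counts.sum()
    # Adding log-priors to the logits inside the training loss demands larger
    # margins on locally rare classes; raw logits are used at inference time.
    return logits + tau * np.log(prior + 1e-12)

counts = np.array([500.0, 30.0, 5.0])    # a heavily imbalanced client
logits = np.array([[2.0, 1.5, 0.1]])
print(adjusted_logits(logits, counts))
```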
arXiv Detail & Related papers (2023-04-11T04:38:39Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training with federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmark and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z) - CoFED: Cross-silo Heterogeneous Federated Multi-task Learning via
Co-training [11.198612582299813]
Federated Learning (FL) is a machine learning technique that enables participants to train high-quality models collaboratively without exchanging their private data.
We propose a communication-efficient FL scheme, CoFED, based on pseudo-labeling unlabeled data in the style of co-training.
Experimental results show that CoFED achieves better performance with a lower communication cost.
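A minimal sketch of the co-training-style pseudo-labeling step, assuming clients share predicted probabilities on a common unlabeled pool; the confidence threshold and majority rule are illustrative choices.

```python
import numpy as np

def pseudo_label(client_probs, conf=0.9, agree=0.5):
    # client_probs: (clients, samples, classes) probabilities on the shared pool.
    preds = client_probs.argmax(axis=2)            # each client's label vote
    confident = client_probs.max(axis=2) >= conf   # per-client confidence mask
    n_clients, n_samples = preds.shape
    labels = np.full(n_samples, -1)                # -1 marks "left unlabeled"
    for j in range(n_samples):
        votes = preds[confident[:, j], j]
        if votes.size == 0:
            continue
        winner = np.bincount(votes).argmax()
        # Keep a pseudo-label only when enough confident clients agree on it.
        if (votes == winner).mean() >= agree:
            labels[j] = winner
    return labels

probs = np.random.dirichlet(np.ones(3) * 0.3, size=(4, 6))  # 4 clients, 6 samples
print(pseudo_label(probs))
```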
arXiv Detail & Related papers (2022-02-17T11:34:20Z) - Fair and efficient contribution valuation for vertical federated
learning [49.50442779626123]
Federated learning is a popular technology for training machine learning models on distributed data sources without sharing data.
The Shapley value (SV) is a provably fair contribution valuation metric that originates from cooperative game theory.
We propose a contribution valuation metric called vertical federated Shapley value (VerFedSV) based on SV.
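To make the SV idea concrete, here is a generic exact Shapley computation over parties; the `utility` function stands in for whatever coalition-performance measure a vertical FL system would evaluate, and the toy numbers are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley(parties, utility):
    n = len(parties)
    values = {p: 0.0 for p in parties}
    for p in parties:
        rest = [q for q in parties if q != p]
        for k in range(n):
            for S in combinations(rest, k):
                # Shapley weight of a coalition of size k among n players.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                values[p] += w * (utility(set(S) | {p}) - utility(set(S)))
    return values

# Toy utility: performance grows with the (hypothetical) value of joined features.
feature_value = {"bank": 0.5, "telecom": 0.3, "retailer": 0.2}
u = lambda S: sum(feature_value[p] for p in S) ** 0.5
print(shapley(list(feature_value), u))
```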
arXiv Detail & Related papers (2022-01-07T19:57:15Z)
- Improving Fairness via Federated Learning [14.231231094281362]
We propose a new theoretical framework, with which we analyze the value of federated learning in improving fairness.
We then theoretically and empirically show that the performance tradeoff of FedAvg-based fair learning algorithms is strictly worse than that of a fair classifier trained on centralized data.
To resolve this, we propose FedFB, a private fair learning algorithm on decentralized data with a modified FedAvg protocol.
arXiv Detail & Related papers (2021-10-29T05:25:44Z)
- FairFed: Enabling Group Fairness in Federated Learning [22.913999279079878]
Federated learning has been viewed as a promising solution for learning machine learning models among multiple parties.
We propose FairFed, a novel algorithm to enhance group fairness via a fairness-aware aggregation method.
Our proposed method outperforms state-of-the-art fair federated learning frameworks under highly heterogeneous sensitive attribute distributions.
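A hedged sketch of fairness-aware aggregation: data-size weights are modulated by how far each client's local fairness gap sits from the global one. The exponential modulation and beta are assumptions, not FairFed's published update rule.

```python
import numpy as np

def fairfed_weights(data_sizes, local_gaps, global_gap, beta=1.0):
    base = np.asarray(data_sizes, dtype=float)
    base /= base.sum()
    # Penalize clients whose local fairness signal disagrees with the global one.
    penalty = np.abs(np.asarray(local_gaps) - global_gap)
    w = base * np.exp(-beta * penalty)
    return w / w.sum()

def aggregate(models, weights):
    return sum(w * m for w, m in zip(weights, models))

sizes = [1000, 300, 700]
local_gaps = [0.02, 0.25, 0.08]   # e.g. demographic-parity gaps per client
print(fairfed_weights(sizes, local_gaps, global_gap=0.05))
```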
arXiv Detail & Related papers (2021-10-02T17:55:20Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- FedH2L: Federated Learning with Model and Statistical Heterogeneity [75.61234545520611]
Federated learning (FL) enables distributed participants to collectively learn a strong global model without sacrificing their individual data privacy.
We introduce FedH2L, which is agnostic to model architectures and robust to different data distributions across participants.
In contrast to approaches sharing parameters or gradients, FedH2L relies on mutual distillation, exchanging only posteriors on a shared seed set between participants in a decentralized manner.
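The exchange described above can be illustrated with a small distillation loss on a shared seed set; averaging peers' posteriors into a single target is an assumption made for brevity.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return np.sum(p * np.log(p / q), axis=-1)

def distillation_loss(own_posteriors, peer_posteriors_list):
    # own_posteriors: (num_seed, num_classes) softmax outputs on the seed set.
    target = np.mean(peer_posteriors_list, axis=0)  # consensus of the peers
    # Pull this participant's posteriors toward the peer consensus.
    return kl(target, own_posteriors).mean()

seed_n, classes = 8, 5
own = np.random.dirichlet(np.ones(classes), size=seed_n)
peers = [np.random.dirichlet(np.ones(classes), size=seed_n) for _ in range(3)]
print(distillation_loss(own, peers))
```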
arXiv Detail & Related papers (2021-01-27T10:10:18Z)
- Fairness and Accuracy in Federated Learning [17.218814060589956]
This paper proposes FedFa, an algorithm to achieve better fairness and accuracy in federated learning.
It introduces an optimization scheme that employs a double momentum gradient, thereby accelerating the convergence rate of the model.
It also proposes a weight selection algorithm that combines information about training accuracy and training frequency to measure the aggregation weights.
arXiv Detail & Related papers (2020-12-18T06:28:37Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
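One way to read this mechanism is a k-means-style alternation between matching clients to centers and re-averaging each center; the sketch below follows that reading and is not the paper's exact objective.

```python
import numpy as np

def multi_center_round(client_models, centers):
    client_models = np.asarray(client_models)   # (num_clients, dim)
    centers = np.asarray(centers)               # (num_centers, dim)
    # Match each client to the closest center in parameter space.
    dists = np.linalg.norm(client_models[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    new_centers = centers.copy()
    for c in range(len(centers)):
        members = client_models[assign == c]
        if len(members) > 0:
            # Re-average each center over its assigned clients (FedAvg per cluster).
            new_centers[c] = members.mean(axis=0)
    return new_centers, assign

rng = np.random.default_rng(0)
clients = rng.normal(size=(6, 4))   # 6 client models, 4 parameters each
centers = rng.normal(size=(2, 4))   # 2 global centers
centers, assign = multi_center_round(clients, centers)
print(assign)
```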
arXiv Detail & Related papers (2020-05-03T09:14:31Z)