Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated
Learning Framework
- URL: http://arxiv.org/abs/2212.01519v1
- Date: Sat, 3 Dec 2022 03:27:51 GMT
- Title: Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated
Learning Framework
- Authors: Shuai Wang, Yanqing Xu, Zhiguo Wang, Tsung-Hui Chang, Tony Q. S. Quek,
and Defeng Sun
- Abstract summary: We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments on (semi-)supervised image classification tasks demonstrate the superiority of FedVRA over existing schemes.
- Score: 82.36466358313025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a novel distributed learning paradigm, federated learning (FL) faces
serious challenges in dealing with massive clients with heterogeneous data
distributions and varying computation and communication resources. Various
client-variance-reduction schemes and client sampling strategies have been
introduced to improve the robustness of FL. Among these, primal-dual algorithms
such as the alternating direction method of multipliers (ADMM) have been found
to be resilient to heterogeneous data distributions and to outperform most
primal-only FL algorithms. However, the reason behind this has remained a
mystery. In this paper, we first reveal that federated ADMM is
essentially a client-variance-reduced algorithm. While this explains the
inherent robustness of federated ADMM, its vanilla version lacks the ability to
adapt to the degree of client heterogeneity. Moreover, under client sampling the
global model at the server is biased, which slows down practical convergence.
To go beyond ADMM, we propose a novel primal-dual FL
algorithm, termed FedVRA, that allows one to adaptively control the
variance-reduction level and bias of the global model. In addition, FedVRA
unifies several representative FL algorithms in the sense that they are either
special instances of FedVRA or are close to it. Extensions of FedVRA to
semi/un-supervised learning are also presented. Experiments based on
(semi-)supervised image classification tasks demonstrate the superiority of FedVRA
over the existing schemes in learning scenarios with massive heterogeneous
clients and client sampling.
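To make the variance-reduction reading of federated ADMM concrete, below is a minimal NumPy sketch of a generic federated-ADMM-style round, in which each client's dual variable accumulates its drift from the global model and thereby acts as a control variate. The quadratic local losses, the damping knob `alpha`, and the sampling scheme are illustrative assumptions, not the paper's actual FedVRA update.

```python
# Hypothetical sketch: generic federated ADMM with a damping knob `alpha`.
# With alpha = 1 this is plain federated ADMM; alpha < 1 weakens the
# control-variate (variance-reduction) term, mimicking an adjustable
# variance-reduction level.
import numpy as np

rng = np.random.default_rng(0)
d, n_clients, rho, alpha = 5, 10, 1.0, 1.0

# Stand-in local losses f_i(x) = 0.5 * ||x - a_i||^2 with heterogeneous a_i.
a = rng.normal(size=(n_clients, d)) * 3.0

z = np.zeros(d)                     # global model at the server
x = np.tile(z, (n_clients, 1))      # local models
lam = np.zeros((n_clients, d))      # per-client duals (control variates)

for _ in range(200):
    sampled = rng.choice(n_clients, size=4, replace=False)  # client sampling
    for i in sampled:
        # Closed-form local step for the quadratic f_i:
        # argmin_x f_i(x) + <lam_i, x - z> + (rho/2)||x - z||^2
        x[i] = (a[i] + rho * z - lam[i]) / (1.0 + rho)
        # Dual ascent: lam_i tracks client i's drift from the global model.
        lam[i] += alpha * rho * (x[i] - z)
    # Aggregation over the sampled clients only; under sampling this global
    # model is biased, which is the issue the abstract points at.
    z = np.mean(x[sampled] + lam[sampled] / rho, axis=0)

print(np.round(z, 2))               # hovers near the global minimizer ...
print(np.round(a.mean(axis=0), 2))  # ... which is the mean of the a_i
```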
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the Power of Heterogeneous Clients [50.13097183691517]
In real-world federated scenarios, there often exists a multitude of heterogeneous clients with varying computation and communication resources.
We propose a novel federated tuning algorithm, FedRA.
In each communication round, FedRA randomly generates an allocation matrix.
It reorganizes a small number of layers from the original model based on the allocation matrix and fine-tunes using adapters.
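As a rough illustration of the allocation step, a binary matrix with one row per client can encode which layers each client receives in a round. The capacity model and the per-layer aggregation below are assumptions for the sketch, not FedRA's exact procedure.

```python
# Hypothetical sketch of a per-round random allocation matrix.
import numpy as np

rng = np.random.default_rng(42)
n_layers = 12                    # layers in the original model (e.g., a ViT)
capacities = [3, 6, 9, 12]       # max layers each heterogeneous client can hold

def allocation_matrix(capacities, n_layers, rng):
    """Row i marks the random subset of layers client i fine-tunes this round."""
    A = np.zeros((len(capacities), n_layers), dtype=int)
    for i, c in enumerate(capacities):
        A[i, rng.choice(n_layers, size=c, replace=False)] = 1
    return A

A = allocation_matrix(capacities, n_layers, rng)
# Clients would fine-tune adapters only on their allocated layers; the server
# can then aggregate adapter updates per layer over the clients holding it.
print(A)
print(A.sum(axis=0))             # how many clients update each layer this round
```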
arXiv Detail & Related papers (2023-11-19T04:43:16Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Confidence-aware Personalized Federated Learning via Variational Expectation Maximization [34.354154518009956]
We present a novel framework for personalized Federated Learning (PFL), a distributed learning scheme that trains a shared model across clients.
The framework is based on hierarchical modeling and variational inference.
arXiv Detail & Related papers (2023-05-21T20:12:27Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose FedGMM, a novel approach to Personalized Federated Learning (PFL) that utilizes Gaussian mixture models (GMMs) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
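A minimal sketch of the density-modeling idea on a single client, using scikit-learn's GaussianMixture: samples with unusually low likelihood under the fitted mixture are flagged as novel. The data, component count, and threshold are hypothetical; FedGMM's actual federated EM procedure and its coupling with classification are not reproduced here.

```python
# Hypothetical per-client sketch of GMM-based novel sample detection.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
client_data = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # in-distribution inputs

gmm = GaussianMixture(n_components=3, random_state=1).fit(client_data)

new_samples = np.vstack([rng.normal(0, 1, (5, 2)),   # familiar samples
                         rng.normal(8, 1, (5, 2))])  # novel samples
scores = gmm.score_samples(new_samples)              # per-sample log-likelihood
threshold = np.quantile(gmm.score_samples(client_data), 0.05)
print(scores < threshold)                            # True flags likely-novel samples
```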
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Gradient Masked Averaging for Federated Learning [24.687254139644736]
Federated learning allows a large number of clients with heterogeneous data to coordinate learning of a unified global model.
Standard FL algorithms involve averaging of model parameters or gradient updates to approximate the global model at the server.
We propose a gradient masked averaging approach for FL as an alternative to the standard averaging of client updates.
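One plausible instantiation of such masking keeps only the parameter coordinates on which most clients' updates agree in sign; the agreement criterion and the hard threshold `tau` below are assumptions for the sketch, not necessarily the paper's exact rule.

```python
# Hypothetical sign-agreement masking of averaged client updates.
import numpy as np

def masked_average(client_updates, tau=0.6):
    """client_updates: (n_clients, n_params) array of local model deltas."""
    U = np.asarray(client_updates, dtype=float)
    agreement = np.abs(np.mean(np.sign(U), axis=0))  # 1 = all agree, 0 = even split
    mask = (agreement >= tau).astype(U.dtype)        # keep consistent coordinates
    return mask * U.mean(axis=0)

updates = np.array([[ 0.5,  0.2, -0.1],
                    [ 0.4, -0.3, -0.2],
                    [ 0.6,  0.1, -0.3]])
print(masked_average(updates))  # coords 0 and 2 agree in sign; coord 1 is zeroed
```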
arXiv Detail & Related papers (2022-01-28T08:42:43Z)
- FedMM: Saddle Point Optimization for Federated Adversarial Domain Adaptation [6.3434032890855345]
Federated domain adaptation is a unique minimax training task due to the prevalence of label imbalance among clients.
We propose a distributed minimax optimizer, referred to as FedMM, designed specifically for the federated adversarial domain adaptation problem.
We prove that FedMM ensures convergence to a stationary point with domain-shifted unsupervised data.
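For intuition, here is a toy sketch of federated gradient descent-ascent on a strongly-convex-strongly-concave saddle objective: each client runs local descent on x and ascent on y, and the server averages both variables. The objective, step sizes, and averaging rule are illustrative assumptions rather than FedMM's actual algorithm.

```python
# Hypothetical federated saddle-point sketch: descent on x, ascent on y.
import numpy as np

rng = np.random.default_rng(2)
b = rng.normal(size=3)   # client shifts: f_i(x, y) = 0.5x^2 + (x - b_i)y - 0.5y^2
x_g, y_g, lr = 0.0, 0.0, 0.1

for _ in range(100):                       # communication rounds
    xs, ys = [], []
    for b_i in b:
        x, y = x_g, y_g                    # start from the global iterate
        for _ in range(5):                 # local descent-ascent steps
            gx = x + y                     # df/dx
            gy = x - b_i - y               # df/dy
            x, y = x - lr * gx, y + lr * gy
        xs.append(x); ys.append(y)
    x_g, y_g = float(np.mean(xs)), float(np.mean(ys))

# Saddle point of the averaged objective: x* = mean(b)/2, y* = -mean(b)/2.
print(round(x_g, 3), round(y_g, 3), round(float(b.mean()) / 2, 3))
```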
arXiv Detail & Related papers (2021-10-16T05:32:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.