Mobilizing Personalized Federated Learning in Infrastructure-Less and
Heterogeneous Environments via Random Walk Stochastic ADMM
- URL: http://arxiv.org/abs/2304.12534v3
- Date: Tue, 26 Sep 2023 22:21:18 GMT
- Title: Mobilizing Personalized Federated Learning in Infrastructure-Less and
Heterogeneous Environments via Random Walk Stochastic ADMM
- Authors: Ziba Parsons, Fei Dou, Houyi Du, Zheng Song, Jin Lu
- Abstract summary: This paper explores the challenges of implementing Federated Learning (FL) in practical scenarios featuring isolated nodes with data heterogeneity.
To overcome these challenges, we propose a novel mobilizing personalized FL approach, which aims to facilitate mobility and resilience.
We develop a novel optimization algorithm called Random Walk Stochastic Alternating Direction Method of Multipliers (RWSADMM).
- Score: 0.14597673707346284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores the challenges of implementing Federated Learning (FL) in
practical scenarios featuring isolated nodes with data heterogeneity, which can
only be connected to the server through wireless links in an
infrastructure-less environment. To overcome these challenges, we propose a
novel mobilizing personalized FL approach, which aims to facilitate mobility
and resilience. Specifically, we develop a novel optimization algorithm called
Random Walk Stochastic Alternating Direction Method of Multipliers (RWSADMM).
RWSADMM capitalizes on the server's random movement toward clients and
formulates local proximity among adjacent clients based on hard
inequality constraints rather than requiring consensus updates or introducing
bias via regularization methods. To mitigate the computational burden on the
clients, an efficient stochastic solver of the approximated optimization
problem is designed in RWSADMM, which provably converges to a stationary
point almost surely and in expectation. Our theoretical and empirical results
demonstrate the provable fast convergence and substantial accuracy improvements
achieved by RWSADMM compared to baseline methods, along with its benefits of
reduced communication costs and enhanced scalability.
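
The abstract describes the mechanism only at a high level: a server that moves between clients via a random walk, local proximity among the clients it reaches enforced through hard inequality constraints, and an inexpensive stochastic solver run on each visited client. As a rough illustration of how such a round could be organized, the toy Python/NumPy sketch below pairs a random walk over a client graph with a stochastic ADMM-style local step and a projection that keeps each served model within a fixed radius of a local reference point. The ring topology, least-squares objectives, radius, and update rules are all illustrative assumptions; they are not taken from the paper and should not be read as the authors' RWSADMM implementation.

```python
import numpy as np

# Speculative sketch of a "random walk + stochastic ADMM-style step" round,
# written only from the abstract above. All modeling choices here are assumptions.

rng = np.random.default_rng(0)

n_clients, dim, batch = 8, 5, 4
# Toy ring topology: each client is adjacent to its two neighbours (assumption).
adjacency = {i: [(i - 1) % n_clients, (i + 1) % n_clients] for i in range(n_clients)}

# Synthetic local least-squares objectives f_i(x) = ||A_i x - b_i||^2 / (2m).
A = rng.normal(size=(n_clients, 20, dim))
b = rng.normal(size=(n_clients, 20))

x = np.zeros((n_clients, dim))     # personalized client models
lam = np.zeros((n_clients, dim))   # dual variables for the proximity constraints
rho, lr, eps = 1.0, 0.05, 0.5      # penalty, step size, proximity radius (assumed)

server_at = 0
for t in range(300):
    # The mobile server performs a random walk over the client graph and serves
    # the visited client together with its adjacent clients.
    server_at = int(rng.choice(adjacency[server_at]))
    served = [server_at] + adjacency[server_at]
    z = x[served].mean(axis=0)     # local reference point among adjacent clients

    for i in served:
        # Stochastic gradient of the local objective on a small minibatch.
        idx = rng.choice(A.shape[1], size=batch, replace=False)
        g = A[i, idx].T @ (A[i, idx] @ x[i] - b[i, idx]) / batch
        # ADMM-style primal step: stochastic gradient plus dual and augmented
        # terms that pull x_i toward the local reference z ...
        x[i] -= lr * (g + lam[i] + rho * (x[i] - z))
        # ... then enforce the hard proximity constraint ||x_i - z|| <= eps by
        # projecting onto the ball of radius eps around z.
        gap = x[i] - z
        norm = np.linalg.norm(gap)
        if norm > eps:
            x[i] = z + eps * gap / norm
        # Dual ascent on the proximity constraint.
        lam[i] += rho * (x[i] - z)
```

Even in this toy version the contrast with consensus-based FL is visible: only the clients adjacent to the server's current position update in a given round, and their personalized models are kept close to a local reference rather than forced to agree on a single global model.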
Related papers
- Aiding Global Convergence in Federated Learning via Local Perturbation and Mutual Similarity Information [6.767885381740953]
Federated learning has emerged as a distributed optimization paradigm.
We propose a novel modified framework wherein each client locally performs a perturbed gradient step.
We show that our algorithm speeds convergence up to a margin of 30 global rounds compared with FedAvg.
arXiv Detail & Related papers (2024-10-07T23:14:05Z) - FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client adaptive algorithm designed to tackle this challenge.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z) - FedADMM-InSa: An Inexact and Self-Adaptive ADMM for Federated Learning [1.802525429431034]
We propose an inexact and self-adaptive FedADMM algorithm, termed FedADMM-InSa.
The convergence of the resulting inexact ADMM is proved under the assumption of strongly convex loss functions.
Our proposed algorithm can reduce the clients' local computational load significantly and also accelerate the learning process compared to the vanilla FedADMM.
arXiv Detail & Related papers (2024-02-21T18:19:20Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical
Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs)
MEMTL outperforms benchmark methods in both inference accuracy and mean squared error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
The resulting method, FedGMM, has the additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - FedAgg: Adaptive Federated Learning with Aggregated Gradients [1.5653612447564105]
We propose an adaptive FEDerated learning algorithm called FedAgg to alleviate the divergence between the local and average model parameters and obtain a fast model convergence rate.
We show that our framework is superior to existing state-of-the-art FL strategies in enhancing model performance and accelerating convergence on both IID and non-IID datasets.
arXiv Detail & Related papers (2023-03-28T08:07:28Z) - Adaptive Federated Learning via New Entropy Approach [14.595709494370372]
Federated Learning (FL) has emerged as a prominent distributed machine learning framework.
In this paper, we propose an adaptive FEDerated learning algorithm based on ENTropy theory (FedEnt) to alleviate the parameter deviation among heterogeneous clients.
arXiv Detail & Related papers (2023-03-27T07:57:04Z) - Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated
Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate superiority of FedVRA over the existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z) - Low-Latency Federated Learning over Wireless Channels with Differential
Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z) - Optimization-driven Machine Learning for Intelligent Reflecting Surfaces
Assisted Wireless Networks [82.33619654835348]
Intelligent reflecting surface (IRS) has been employed to reshape wireless channels by controlling the phase shifts of individual scattering elements.
Due to the large size of scattering elements, the passive beamforming is typically challenged by the high computational complexity.
In this article, we focus on machine learning (ML) approaches for performance optimization in IRS-assisted wireless networks.
arXiv Detail & Related papers (2020-08-29T08:39:43Z)