EvoFed: Leveraging Evolutionary Strategies for Communication-Efficient
Federated Learning
- URL: http://arxiv.org/abs/2311.07485v1
- Date: Mon, 13 Nov 2023 17:25:06 GMT
- Title: EvoFed: Leveraging Evolutionary Strategies for Communication-Efficient
Federated Learning
- Authors: Mohammad Mahdi Rahimi, Hasnain Irshad Bhatti, Younghyun Park, Humaira
Kousar, Jaekyun Moon
- Abstract summary: Federated Learning (FL) is a decentralized machine learning paradigm that enables collaborative model training across dispersed nodes.
This paper presents EvoFed, a novel approach that integrates Evolutionary Strategies (ES) with FL to address these challenges.
- Score: 15.124439914522693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is a decentralized machine learning paradigm
that enables collaborative model training across dispersed nodes without
requiring individual nodes to share their data. However, its broad adoption is hindered
by the high communication costs of transmitting a large number of model
parameters. This paper presents EvoFed, a novel approach that integrates
Evolutionary Strategies (ES) with FL to address these challenges. EvoFed
employs a concept of 'fitness-based information sharing', deviating
significantly from the conventional model-based FL. Rather than exchanging the
actual updated model parameters, each node transmits a distance-based
similarity measure between the locally updated model and each member of the
noise-perturbed model population. Each node, as well as the server, generates
an identical population set of perturbed models in a completely synchronized
fashion using the same random seeds. With properly chosen noise variance and
population size, perturbed models can be combined to closely reflect the actual
model updated using the local dataset, allowing the transmitted similarity
measures (or fitness values) to carry nearly the complete information about the
model parameters. As the population size is typically much smaller than the
number of model parameters, the savings in communication load are substantial.
The server aggregates these fitness values into a global fitness vector, which it
uses to update the global model and then disseminates back to the nodes, each of
which applies the same update to stay synchronized with the global model. Our
analysis shows that EvoFed converges, and our experimental results validate
that at the cost of increased local processing loads, EvoFed achieves
performance comparable to FedAvg while reducing overall communication
requirements drastically in various practical settings.
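
To make the round structure concrete, here is a minimal Python/NumPy sketch of one EvoFed-style round. This is an illustration under stated assumptions rather than the paper's exact formulation: the negative-squared-distance fitness, the NES-style fitness-weighted update, and all function names are illustrative choices.

```python
import numpy as np

def generate_population(shared_seed, pop_size, dim, sigma):
    # The server and every client call this with the same seed, so all
    # parties hold an identical set of perturbation vectors.
    rng = np.random.default_rng(shared_seed)
    return rng.normal(0.0, sigma, size=(pop_size, dim))

def client_fitness(global_params, local_params, perturbations):
    # Score each perturbed model by its (negative squared) distance to
    # the locally updated model; only this short vector is uploaded.
    population = global_params + perturbations            # (pop_size, dim)
    sq_dists = np.sum((population - local_params) ** 2, axis=1)
    return -sq_dists                                      # higher = closer

def global_step(global_params, agg_fitness, perturbations, lr, sigma):
    # Assumed NES-style update: a fitness-weighted combination of the
    # shared perturbations. Server and clients run this identically, so
    # broadcasting the aggregated fitness vector suffices to stay in sync.
    f = (agg_fitness - agg_fitness.mean()) / (agg_fitness.std() + 1e-8)
    return global_params + lr / (len(f) * sigma) * (perturbations.T @ f)

# --- one toy round: 3 clients, a 10,000-parameter "model" ---
dim, pop_size, sigma, seed = 10_000, 64, 0.05, 1234
theta = np.zeros(dim)
perts = generate_population(seed, pop_size, dim, sigma)

# Stand-ins for locally updated models (real clients would run SGD here).
local_models = [theta + 0.1 * np.random.default_rng(c).normal(size=dim)
                for c in range(3)]

# Each client uploads pop_size fitness values instead of dim parameters.
fitness = np.stack([client_fitness(theta, w, perts) for w in local_models])
agg = fitness.mean(axis=0)           # server-side aggregation
theta = global_step(theta, agg, perts, lr=1.0, sigma=sigma)
```

In this toy setting each client uploads 64 fitness values rather than 10,000 parameters (roughly a 150x reduction per round), and the downlink carries only the 64-entry aggregated fitness vector, since every node regenerates the identical perturbation population from the shared seed.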
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Improving Group Connectivity for Generalization of Federated Deep Learning [8.594665698279522]
Federated learning (FL) involves multiple clients collaboratively training a global model via iterative local updates and model fusion.
In this paper, we study and improve FL's generalization through a fundamental "connectivity" perspective.
We propose FedGuCci and FedGuCci+, which improve group connectivity for better generalization.
arXiv Detail & Related papers (2024-02-29T08:27:01Z)
- Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z)
- Federated Learning with Neural Graphical Models [2.2721854258621064]
Federated Learning (FL) addresses the need to create models based on proprietary data.
We develop an FL framework which maintains a global NGM model that learns the averaged information from the local NGM models.
We experimentally demonstrate the use of FedNGMs for extracting insights from CDC's Infant Mortality dataset.
arXiv Detail & Related papers (2023-09-20T23:24:22Z)
- FedSoup: Improving Generalization and Personalization in Federated Learning via Selective Model Interpolation [32.36334319329364]
Cross-silo federated learning (FL) enables the development of machine learning models on datasets distributed across data centers.
Recent research has found that current FL algorithms face a trade-off between local and global performance when confronted with distribution shifts.
We propose a novel federated model soup method to optimize the trade-off between local and global performance.
arXiv Detail & Related papers (2023-07-20T00:07:29Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape of the original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose FedFTG, a data-free knowledge distillation method to fine-tune the global model at the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
- FedHM: Efficient Federated Learning for Heterogeneous Models via Low-rank Factorization [16.704006420306353]
A scalable federated learning framework should address heterogeneous clients equipped with different computation and communication capabilities.
This paper proposes FedHM, a novel federated model compression framework that distributes the heterogeneous low-rank models to clients and then aggregates them into a global full-rank model.
Our solution enables the training of heterogeneous local models with varying computational complexities and aggregates them into a single global model.
arXiv Detail & Related papers (2021-11-29T16:11:09Z)
- Federated Learning With Quantized Global Model Updates [84.55126371346452]
We study federated learning, which enables mobile devices to utilize their local datasets to train a global model.
We introduce a lossy FL (LFL) algorithm, in which both the global model and the local model updates are quantized before being transmitted (a generic sketch of such quantization appears after this list).
arXiv Detail & Related papers (2020-06-18T16:55:20Z)
- UVeQFed: Universal Vector Quantization for Federated Learning [179.06583469293386]
Federated learning (FL) is an emerging approach to train learning models without requiring users to share their possibly private labeled data.
In FL, each user trains its copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model.
We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only minimal distortion.
arXiv Detail & Related papers (2020-06-05T07:10:22Z)
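
The LFL and UVeQFed entries above both hinge on quantizing model updates before transmission. As a generic illustration only, not either paper's specific scheme, the sketch below applies a simple symmetric uniform quantizer to an update vector:

```python
import numpy as np

def quantize(x, num_bits=8):
    # Symmetric uniform quantizer: the payload is an integer array plus a
    # single float scale. int8 storage assumes num_bits <= 8.
    levels = 2 ** (num_bits - 1) - 1
    scale = max(float(np.max(np.abs(x))) / levels, 1e-12)
    return np.round(x / scale).astype(np.int8), scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

update = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, s = quantize(update)
# Per-coordinate reconstruction error is bounded by scale / 2.
max_err = float(np.abs(dequantize(q, s) - update).max())
```

At 8 bits per coordinate this already shrinks a float32 payload 4x before any entropy coding; vector quantization schemes such as UVeQFed's pursue further compression while keeping distortion small.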