Communication-Efficient Diffusion Strategy for Performance Improvement
of Federated Learning with Non-IID Data
- URL: http://arxiv.org/abs/2207.07493v1
- Date: Fri, 15 Jul 2022 14:28:41 GMT
- Title: Communication-Efficient Diffusion Strategy for Performance Improvement
of Federated Learning with Non-IID Data
- Authors: Seyoung Ahn, Soohyeong Kim, Yongseok Kwon, Joohan Park, Jiseung Youn
and Sunghyun Cho
- Abstract summary: Federated learning (FL) is a novel learning paradigm that addresses the privacy leakage challenge of centralized learning.
In FL, users with non-independent and identically distributed (non-IID) data can degrade the performance of the global model.
We propose FedDif, a novel diffusion strategy for machine learning (ML) models, to maximize FL performance with non-IID data.
- Score: 10.112913394578703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a novel learning paradigm that addresses the
privacy leakage challenge of centralized learning. However, in FL, users whose
data are non-independent and identically distributed (non-IID) can degrade the
performance of the global model. Specifically, the global model
suffers from the weight divergence challenge owing to non-IID data. To address
the aforementioned challenge, we propose FedDif, a novel diffusion strategy for
machine learning (ML) models, to maximize FL performance with non-IID data. In
FedDif, users spread local models to neighboring users over device-to-device
(D2D) communications, enabling each local model to experience different
distributions before parameter aggregation. Furthermore, we theoretically
demonstrate that FedDif can circumvent the weight divergence challenge. On this
theoretical basis, we propose a communication-efficient diffusion strategy that
balances learning performance against communication cost using auction theory.
The performance
evaluation results show that FedDif improves the test accuracy of the global
model by 11% compared to baseline FL under non-IID settings. Moreover, FedDif
improves communication efficiency, measured by the number of transmitted
sub-frames and models, by a factor of 2.77 compared with the latest methods.
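To make the diffusion mechanism concrete, the sketch below is a minimal, hypothetical simulation of the diffusion-before-aggregation idea, not the authors' implementation: each local model visits several users holding differently distributed data over simulated D2D hops before the server averages parameters FedAvg-style. The uniformly random neighbor choice is a stand-in for FedDif's auction-based hop selection.

```python
# Minimal FedDif-style sketch (hypothetical; not the authors' code).
# Local models "diffuse" through users with differently distributed data
# before the server performs FedAvg-style parameter averaging.
import numpy as np

rng = np.random.default_rng(0)
NUM_USERS, DIM, LOCAL_STEPS, DIFF_HOPS, LR = 5, 10, 20, 3, 0.01

# Non-IID setup: each user draws inputs from a shifted region of feature
# space, while all users share the same ground-truth linear model.
w_true = rng.normal(size=DIM)

def local_data(user, n=64):
    x = rng.normal(loc=0.5 * user, size=(n, DIM))  # user-specific input shift
    y = x @ w_true + 0.1 * rng.normal(size=n)
    return x, y

def local_train(w, x, y):
    # A few full-batch gradient steps on the least-squares loss.
    for _ in range(LOCAL_STEPS):
        w = w - LR * 2.0 * x.T @ (x @ w - y) / len(y)
    return w

global_w = np.zeros(DIM)
for rnd in range(10):  # FL communication rounds
    models = [global_w.copy() for _ in range(NUM_USERS)]
    # Diffusion phase: every model hops across D2D neighbors, training on a
    # different local distribution at each hop. FedDif selects hops via an
    # auction that trades accuracy gain against communication cost; a
    # uniformly random neighbor is used here purely for brevity.
    for _ in range(DIFF_HOPS):
        for i in range(NUM_USERS):
            host = int(rng.integers(NUM_USERS))
            models[i] = local_train(models[i], *local_data(host))
    global_w = np.mean(models, axis=0)  # FedAvg aggregation at the server
    print(f"round {rnd}: ||w - w_true|| = {np.linalg.norm(global_w - w_true):.4f}")
```

Even this toy version illustrates the key point of the abstract: each local model is exposed to several data distributions before aggregation, which counteracts weight divergence; the paper's contribution lies in scheduling those hops communication-efficiently.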
Related papers
- R-SFLLM: Jamming Resilient Framework for Split Federated Learning with Large Language Models [83.77114091471822]
Split federated learning (SFL) is a compute-efficient paradigm in distributed machine learning (ML).
A challenge in SFL, particularly when deployed over wireless channels, is the susceptibility of transmitted model parameters to adversarial jamming.
This is particularly pronounced for word embedding parameters in large language models (LLMs), which are crucial for language understanding.
A physical layer framework is developed for resilient SFL with LLMs (R-SFLLM) over wireless networks.
arXiv Detail & Related papers (2024-07-16T12:21:29Z)
- FedMAP: Unlocking Potential in Personalized Federated Learning through Bi-Level MAP Optimization [11.040916982022978]
Federated Learning (FL) enables collaborative training of machine learning models on decentralized data.
Data across clients often differs significantly due to class imbalance, feature distribution skew, sample size imbalance, and other phenomena.
We propose a novel Bayesian PFL framework using bi-level optimization to tackle the data heterogeneity challenges.
arXiv Detail & Related papers (2024-05-29T11:28:06Z)
- Navigating Heterogeneity and Privacy in One-Shot Federated Learning with Diffusion Models [6.921070916461661]
Federated learning (FL) enables multiple clients to train models collectively while preserving data privacy.
One-shot federated learning has emerged as a solution by reducing communication rounds, improving efficiency, and providing better security against eavesdropping attacks.
arXiv Detail & Related papers (2024-05-02T17:26:52Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- One-Shot Sequential Federated Learning for Non-IID Data by Enhancing Local Model Diversity [26.09617693587105]
We improve one-shot sequential federated learning for non-IID data by proposing a local model diversity-enhancing strategy.
Our method exhibits superior performance to existing one-shot PFL methods and achieves better accuracy than state-of-the-art one-shot SFL methods.
arXiv Detail & Related papers (2024-04-18T12:31:48Z)
- Towards Robust Federated Learning via Logits Calibration on Non-IID Data [49.286558007937856]
Federated learning (FL) is a privacy-preserving distributed management framework based on collaborative model training of distributed devices in edge networks.
Recent studies have shown that FL is vulnerable to adversarial examples, leading to a significant drop in its performance.
In this work, we adopt the adversarial training (AT) framework to improve the robustness of FL models against adversarial example (AE) attacks.
arXiv Detail & Related papers (2024-03-05T09:18:29Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part, shared with all devices via model pruning to learn data representations, and a personalized part fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape of the original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints.
We propose a data-free knowledge distillation method to fine-tune the global model on the server (FedFTG).
Our FedFTG significantly outperforms state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.