RingFed: Reducing Communication Costs in Federated Learning on Non-IID
Data
- URL: http://arxiv.org/abs/2107.08873v1
- Date: Mon, 19 Jul 2021 13:43:10 GMT
- Title: RingFed: Reducing Communication Costs in Federated Learning on Non-IID
Data
- Authors: Guang Yang, Ke Mu, Chunhe Song, Zhijia Yang, and Tierui Gong
- Abstract summary: Federated learning is used to protect the privacy of each client by exchanging model parameters rather than raw data.
This article proposes RingFed, a novel framework to reduce communication overhead during the training process of federated learning.
Experiments on two different public datasets show that RingFed has fast convergence, high model accuracy, and low communication cost.
- Score: 3.7416826310878024
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is a widely used distributed deep learning framework that
protects the privacy of each client by exchanging model parameters rather than
raw data. However, federated learning suffers from high communication costs, as
a considerable number of model parameters need to be transmitted many times
during the training process, making the approach inefficient, especially when
the communication network bandwidth is limited. This article proposes RingFed,
a novel framework to reduce communication overhead during the training process
of federated learning. Rather than transmitting parameters between the central
server and each client, as in the original federated learning setup, RingFed
passes the updated parameters from client to client in turn, and only the final
result is transmitted to the central server, thereby reducing the communication
overhead substantially. After several local updates, each client first sends its
parameters to a neighboring (proximal) client for pre-aggregation rather than
directly to the central server. Experiments on two different public datasets
show that RingFed achieves fast convergence, high model accuracy, and low
communication cost.
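A minimal sketch of the ring-style pre-aggregation described above, using NumPy; the weighting factor gamma, the fixed client ordering, and the placeholder local update are illustrative assumptions rather than the paper's exact algorithm.
```python
import numpy as np

def local_update(params, client_data, lr=0.01):
    """Placeholder for several epochs of local training on one client's data."""
    # A real implementation would run gradient steps on the client's model here.
    return params

def ringfed_round(global_params, clients, gamma=0.5):
    """One communication round of ring-style pre-aggregation (illustrative).

    Each client trains locally, merges its result with the parameters received
    from the previous client in the ring, and forwards the merged parameters.
    Only the last client uploads to the central server.
    """
    carried = None
    for data in clients:  # clients visited in ring order
        updated = local_update(global_params.copy(), data)
        if carried is None:
            carried = updated
        else:
            # Pre-aggregate with the parameters handed over by the previous client.
            carried = gamma * carried + (1.0 - gamma) * updated
    return carried  # only this result is transmitted to the central server

# Toy usage: 4 clients, each with random data, and a 10-dimensional parameter vector.
clients = [np.random.randn(100, 10) for _ in range(4)]
global_params = ringfed_round(np.zeros(10), clients)
```
Because only the last client in the ring uploads, per-round traffic to the server shrinks from one transmission per client to one per ring, which is where the claimed communication savings come from.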
Related papers
- An Adaptive Clustering Scheme for Client Selections in Communication-Efficient Federated Learning [3.683202928838613]
Federated learning is a novel decentralized learning architecture.
We propose to dynamically adjust the number of clusters to find the best grouping of clients.
This can reduce the number of users participating in training, cutting communication costs without degrading model performance.
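As a rough illustration of clustering-based client selection (the flattened-update features, the fixed cluster count, and the scikit-learn call below are assumptions; the paper adjusts the number of clusters dynamically), one could group clients and sample a single representative per cluster:
```python
import numpy as np
from sklearn.cluster import KMeans

def select_clients(update_vectors, n_clusters):
    """Cluster clients by their flattened model updates and pick one
    representative per cluster (illustrative selection rule)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(update_vectors)
    selected = []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        # The member closest to the centroid stands in for the whole cluster.
        dists = np.linalg.norm(update_vectors[members] - km.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(dists)]))
    return selected

updates = np.random.randn(20, 128)            # 20 clients, 128-dim flattened updates
print(select_clients(updates, n_clusters=5))  # indices of 5 representative clients
```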
arXiv Detail & Related papers (2025-04-11T08:43:12Z)
- Communication-Efficient Federated Knowledge Graph Embedding with Entity-Wise Top-K Sparsification [49.66272783945571]
Federated Knowledge Graph Embedding learning (FKGE) encounters communication-efficiency challenges stemming from the considerable size of its parameters and the large number of communication rounds.
We propose FedS, a bidirectional communication-efficient method based on an entity-wise top-K sparsification strategy.
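A hedged sketch of what entity-wise top-K sparsification of an embedding update might look like; the per-row selection rule is inferred from the summary and is not necessarily FedS's exact procedure.
```python
import numpy as np

def entity_topk_sparsify(delta, k):
    """Keep only the k largest-magnitude components of each entity's embedding
    update (one row per entity); the rest are zeroed, so only (index, value)
    pairs need to be transmitted."""
    sparse = np.zeros_like(delta)
    for i, row in enumerate(delta):
        idx = np.argsort(np.abs(row))[-k:]  # indices of the k largest entries
        sparse[i, idx] = row[idx]
    return sparse

delta = np.random.randn(1000, 64)                 # 1000 entities, 64-dim embeddings
compressed = entity_topk_sparsify(delta, k=8)
print(np.count_nonzero(compressed) / delta.size)  # fraction of values actually sent
```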
arXiv Detail & Related papers (2024-06-19T05:26:02Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
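Since FedLALR is described as a heterogeneous local variant of AMSGrad, a minimal per-client AMSGrad step (with hyperparameters chosen purely for illustration) looks roughly like this:
```python
import numpy as np

class ClientAMSGrad:
    """AMSGrad-style optimizer state kept separately on each client (illustrative)."""
    def __init__(self, dim, lr=1e-3, beta1=0.9, beta2=0.99, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)      # first-moment estimate
        self.v = np.zeros(dim)      # second-moment estimate
        self.v_hat = np.zeros(dim)  # running max of v (the AMSGrad correction)

    def step(self, params, grad):
        self.m = self.beta1 * self.m + (1 - self.beta1) * grad
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = np.maximum(self.v_hat, self.v)
        # The effective learning rate adapts per coordinate and per client.
        return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

opt = ClientAMSGrad(dim=10)
params = opt.step(np.zeros(10), grad=np.random.randn(10))
```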
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- SalientGrads: Sparse Models for Communication Efficient and Data Aware Distributed Federated Training [1.0413504599164103]
Federated learning (FL) enables the training of a model leveraging decentralized data in client sites while preserving privacy by not collecting data.
One of the significant challenges of FL is limited computation and low communication bandwidth in resource-limited edge client nodes.
We propose Salient Grads, which simplifies sparse training by choosing a data-aware subnetwork before training.
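A rough sketch of choosing a data-aware subnetwork before training by scoring weights with a gradient-based saliency proxy; the |w * g| score and the keep ratio are assumptions for illustration, not necessarily the paper's criterion.
```python
import numpy as np

def saliency_mask(weights, grads, keep_ratio=0.2):
    """Score each weight by |w * g| (a common connection-sensitivity proxy)
    and keep only the top fraction; the rest are masked out before training."""
    scores = np.abs(weights * grads)
    k = max(1, int(keep_ratio * scores.size))
    threshold = np.partition(scores.ravel(), -k)[-k]  # k-th largest score
    return (scores >= threshold).astype(np.float32)

w = np.random.randn(256, 128)
g = np.random.randn(256, 128)              # gradients from a few local batches
mask = saliency_mask(w, g, keep_ratio=0.2)
print(mask.mean())                         # roughly 0.2 of the weights stay trainable
```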
arXiv Detail & Related papers (2023-04-15T06:46:37Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
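A minimal illustration of training with a personalized sparse mask so that only the active coordinates are updated and exchanged; the mask schedule and the peer-to-peer gossip step are omitted and not taken from the paper.
```python
import numpy as np

def masked_local_step(params, grad, mask, lr=0.01):
    """Apply a gradient step only to the coordinates this client keeps active;
    masked-out coordinates stay at zero and never need to be transmitted."""
    return (params - lr * grad * mask) * mask

# Each client draws its own personalized sparse mask (illustrative: 30% density).
dim, density = 1000, 0.3
mask = (np.random.rand(dim) < density).astype(np.float32)
params = np.random.randn(dim) * mask
params = masked_local_step(params, grad=np.random.randn(dim), mask=mask)
```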
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
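The summary does not spell out FedReg's mechanism; as a generic illustration of one way to penalize drift from the global model during local training (a FedProx-style proximal term, not necessarily FedReg's actual loss), one might write:
```python
import numpy as np

def regularized_local_loss(task_loss, local_params, global_params, mu=0.1):
    """Task loss plus a proximal penalty that discourages the local model from
    drifting away from (and 'forgetting') the global model. Generic
    illustration only; FedReg's actual regularizer may differ."""
    drift = sum(np.sum((lp - gp) ** 2) for lp, gp in zip(local_params, global_params))
    return task_loss + 0.5 * mu * drift

loss = regularized_local_loss(1.23, [np.ones(5)], [np.zeros(5)], mu=0.1)
```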
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- FedLite: A Scalable Approach for Federated Learning on Resource-constrained Clients [41.623518032533035]
In split learning, only a small part of the model is stored and trained on clients, while the remaining large part of the model stays on the server.
This paper addresses this issue by compressing the additional communication using a novel clustering scheme accompanied by a gradient correction method.
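A hedged sketch of compressing split-layer activations with a small k-means codebook so that only centroid indices travel over the network; the codebook size is arbitrary and the paper's gradient-correction step is not reproduced here.
```python
import numpy as np
from sklearn.cluster import KMeans

def compress_activations(acts, n_codes=16):
    """Quantize activation vectors to a small codebook: transmit the codebook
    plus one index per vector instead of the full float activations."""
    km = KMeans(n_clusters=n_codes, n_init=10).fit(acts)
    return km.cluster_centers_, km.labels_  # codebook + per-vector indices

def decompress_activations(codebook, indices):
    return codebook[indices]                # server-side approximation

acts = np.random.randn(512, 128)            # activations from the client-side split
codebook, idx = compress_activations(acts)
recon = decompress_activations(codebook, idx)
```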
arXiv Detail & Related papers (2022-01-28T00:09:53Z)
- SPATL: Salient Parameter Aggregation and Transfer Learning for Heterogeneous Clients in Federated Learning [3.5394650810262336]
Efficient federated learning is one of the key challenges for training and deploying AI models on edge devices.
Maintaining data privacy in federated learning raises several challenges, including data heterogeneity, high communication costs, and limited resources.
We propose a salient parameter selection agent based on deep reinforcement learning on local clients, and aggregate the selected salient parameters on the central server.
arXiv Detail & Related papers (2021-11-29T06:28:05Z)
- FedKD: Communication Efficient Federated Learning via Knowledge Distillation [56.886414139084216]
Federated learning is widely used to learn intelligent models from decentralized data.
In federated learning, clients need to communicate their local model updates in each iteration of model learning.
We propose a communication efficient federated learning method based on knowledge distillation.
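A minimal sketch of the distillation idea in a federated setting: a large local teacher supervises a compact student through a softened-label loss, so only the small student has to be communicated; the temperature and loss form are illustrative choices, not FedKD's exact formulation.
```python
import numpy as np

def softmax(logits, T=1.0):
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between the teacher's and student's softened predictions;
    in FL only the compact student model would be exchanged with the server."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1)
    return float(np.mean(kl) * T * T)

loss = distillation_loss(np.random.randn(32, 10), np.random.randn(32, 10))
```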
arXiv Detail & Related papers (2021-08-30T15:39:54Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Coded Federated Learning [5.375775284252717]
Federated learning is a method of training a global model from decentralized data distributed across client devices.
Our results show that CFL allows the global model to converge nearly four times faster when compared to an uncoded approach.
arXiv Detail & Related papers (2020-02-21T23:06:20Z)