Dynamic Attention-based Communication-Efficient Federated Learning
- URL: http://arxiv.org/abs/2108.05765v1
- Date: Thu, 12 Aug 2021 14:18:05 GMT
- Title: Dynamic Attention-based Communication-Efficient Federated Learning
- Authors: Zihan Chen, Kai Fong Ernest Chong, Tony Q. S. Quek
- Abstract summary: Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm $\texttt{AdaFL}$ to combat this degradation.
- Score: 85.18941440826309
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) offers a solution to train a global machine learning
model while still maintaining data privacy, without needing access to data
stored locally at the clients. However, FL suffers performance degradation when
client data distribution is non-IID, and a longer training duration to combat
this degradation may not necessarily be feasible due to communication
limitations. To address this challenge, we propose a new adaptive training
algorithm $\texttt{AdaFL}$, which comprises two components: (i) an
attention-based client selection mechanism for a fairer training scheme among
the clients; and (ii) a dynamic fraction method to balance the trade-off
between performance stability and communication efficiency. Experimental
results show that our $\texttt{AdaFL}$ algorithm outperforms the usual
$\texttt{FedAvg}$ algorithm, and can be incorporated to further improve various
state-of-the-art FL algorithms, with respect to three aspects: model accuracy,
performance stability, and communication efficiency.
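To make the two components concrete, here is a minimal Python sketch of one $\texttt{AdaFL}$-style round. The linear fraction schedule and the score-damping update are illustrative assumptions: the abstract states the goals (a fairer selection scheme, a stability/efficiency trade-off) but not the exact rules.

```python
import numpy as np

def select_clients(scores, fraction, rng):
    """Sample a fraction of clients with probability proportional to
    their attention scores (illustrative selection rule)."""
    k = max(1, int(fraction * len(scores)))
    probs = scores / scores.sum()
    return rng.choice(len(scores), size=k, replace=False, p=probs)

def adafl_round(scores, round_idx, total_rounds, f_min=0.1, f_max=0.4, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    # Dynamic fraction: grow client participation over training
    # (an assumed linear schedule).
    fraction = f_min + (f_max - f_min) * round_idx / max(1, total_rounds)
    chosen = select_clients(scores, fraction, rng)
    # ... clients in `chosen` train locally; the server aggregates ...
    # Attention update (assumed): damp scores of just-selected clients
    # so under-selected clients rise in priority -> fairer scheme.
    scores = scores.copy()
    scores[chosen] *= 0.5
    scores += 1e-3  # keep every client selectable
    return chosen, scores
```

Starting from uniform scores, e.g. `scores = np.ones(100)`, repeated calls shift selection toward clients that have participated less often while the participating fraction grows.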
Related papers
- FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client adaptive algorithm designed to tackle the challenge of achieving both accelerated and stable federated training.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
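Client-side adaptive methods of this family build on per-client Adam-style updates; a minimal sketch follows. FedCAda's specific adjustments to the moment estimates are not described in the summary, so only the generic step is shown.

```python
import numpy as np

def client_adam_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One client-side Adam update with bias correction (generic step;
    not FedCAda's exact rule)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # bias-corrected first moment
    v_hat = v / (1 - b2 ** t)  # bias-corrected second moment
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```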
arXiv Detail & Related papers (2024-05-20T06:12:33Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
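For contrast, the aggregate-then-adapt baseline that FedAF dispenses with reduces to a data-size-weighted average on the server, as in FedAvg; a minimal sketch:

```python
import numpy as np

def aggregate_then_adapt(client_weights, client_sizes):
    """FedAvg-style server aggregation: a data-size-weighted average of
    client models, which clients then adapt from in the next round."""
    total = float(sum(client_sizes))
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))
```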
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
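For reference, a single AMSGrad step is shown below; in a FedLALR-style method each client would run such steps with its own scheduled learning rate. The auto-tuning rule itself is not given in the summary, so `lr` is left as an input.

```python
import numpy as np

def amsgrad_step(w, grad, m, v, v_max, lr, b1=0.9, b2=0.999, eps=1e-8):
    """One AMSGrad step: like Adam, but with a monotone second-moment
    estimate, which is what a heterogeneous local variant builds on."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    v_max = np.maximum(v_max, v)  # AMSGrad: never decrease the denominator
    w = w - lr * m / (np.sqrt(v_max) + eps)
    return w, m, v, v_max
```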
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed by recasting the minimization of training latency as a graph edge-selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
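One simple pairing heuristic consistent with this idea matches the slowest clients with the fastest ones so that each pair's combined workload is balanced; the sketch below is an illustrative assumption, not the paper's greedy edge-selection procedure.

```python
def pair_clients(compute_speeds):
    """Pair slow clients with fast ones (illustrative heuristic)."""
    order = sorted(range(len(compute_speeds)), key=lambda i: compute_speeds[i])
    pairs = []
    lo, hi = 0, len(order) - 1
    while lo < hi:
        pairs.append((order[lo], order[hi]))  # slowest with fastest
        lo, hi = lo + 1, hi - 1
    return pairs  # with an odd client count, the median client stays unpaired
```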
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem, under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
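Schematically, with assumed notation ($\mathbf{s}_t$ for client scheduling, $\mathbf{r}_t$ for resource allocation, $E_t$ for local epochs, budgets $\bar{E}$ and $\bar{\tau}$), the joint CRE problem has the form:

$$\min_{\{\mathbf{s}_t,\,\mathbf{r}_t,\,E_t\}_{t=1}^{T}} F(\mathbf{w}_T) \quad \text{s.t.} \quad \sum_{t=1}^{T} e_t(\mathbf{s}_t,\mathbf{r}_t,E_t) \le \bar{E}, \qquad \sum_{t=1}^{T} \tau_t(\mathbf{s}_t,\mathbf{r}_t,E_t) \le \bar{\tau}.$$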
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- SemiSFL: Split Federated Learning on Unlabeled and Non-IID Data [34.49090830845118]
Federated Learning (FL) has emerged to allow multiple clients to collaboratively train machine learning models on their private data at the network edge.
We propose a novel Semi-supervised SFL system, termed SemiSFL, which incorporates clustering regularization to perform SFL with unlabeled and non-IID client data.
Our system provides a 3.8x speed-up in training time, reduces the communication cost by about 70.3% while reaching the target accuracy, and achieves up to 5.8% improvement in accuracy under non-IID scenarios.
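A clustering-regularization term of the kind the summary mentions can be sketched as a penalty pulling each unlabeled sample's features toward its nearest global centroid; the exact loss in SemiSFL is not specified in the summary, so this shows only the generic idea.

```python
import numpy as np

def clustering_regularizer(features, centroids):
    """Mean squared distance from each sample to its nearest centroid
    (illustrative clustering-regularization penalty)."""
    # features: (N, D); centroids: (K, D)
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
    return (d.min(axis=1) ** 2).mean()
```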
arXiv Detail & Related papers (2023-07-29T02:35:37Z)
- DynamicFL: Balancing Communication Dynamics and Client Manipulation for Federated Learning [6.9138560535971605]
Federated Learning (FL) aims to train a global model by exploiting the decentralized data across millions of edge devices.
Given geo-distributed edge devices with highly dynamic networks in the wild, aggregating model updates from all participating devices results in inevitable long-tail delays in FL.
We propose a novel FL framework, DynamicFL, by considering the communication dynamics and data quality across massive edge devices with a specially designed client manipulation strategy.
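A client-manipulation strategy of this kind can be sketched as ranking clients by a score that trades data quality against network delay; the score and weighting below are assumptions, not DynamicFL's actual rule.

```python
import numpy as np

def rank_clients(latency_ms, data_quality, alpha=0.5):
    """Rank clients by a weighted trade-off between normalized data
    quality (higher is better) and normalized latency (lower is better)."""
    lat = np.asarray(latency_ms, dtype=float)
    dq = np.asarray(data_quality, dtype=float)
    score = alpha * dq / dq.max() - (1 - alpha) * lat / lat.max()
    return np.argsort(-score)  # most preferred clients first
```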
arXiv Detail & Related papers (2023-07-16T19:09:31Z)
- Efficient Adaptive Federated Optimization of Federated Learning for IoT [0.0]
This paper proposes a novel efficient adaptive federated optimization (EAFO) algorithm to improve the efficiency of Federated Learning (FL).
FL is a distributed privacy-preserving learning framework that enables IoT devices to train a global model by sharing model parameters.
Experimental results show that the proposed EAFO achieves higher accuracies faster.
arXiv Detail & Related papers (2022-06-23T01:49:12Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
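One common way to curb forgetting during local training is a FedProx-style proximal penalty that keeps the local model near the global one; FedReg's actual mechanism may differ, so the sketch below shows only this generic idea.

```python
import numpy as np

def proximal_local_loss(task_loss, w_local, w_global, mu=0.1):
    """Local objective with a proximal anti-drift penalty
    (FedProx-style sketch, not FedReg's exact method)."""
    return task_loss + 0.5 * mu * np.sum((w_local - w_global) ** 2)
```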
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.