Acceleration of Federated Learning with Alleviated Forgetting in Local
Training
- URL: http://arxiv.org/abs/2203.02645v1
- Date: Sat, 5 Mar 2022 02:31:32 GMT
- Title: Acceleration of Federated Learning with Alleviated Forgetting in Local
Training
- Authors: Chencheng Xu, Zhiwei Hong, Minlie Huang, Tao Jiang
- Abstract summary: Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
- Our experiments demonstrate that FedReg not only significantly improves the convergence rate of FL, especially when the neural network architecture is deep, but also makes the model more robust against gradient inversion attacks.
- Score: 61.231021417674235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables distributed optimization of machine learning
models while protecting privacy by independently training local models on each
client and then aggregating parameters on a central server, thereby producing
an effective global model. Although a variety of FL algorithms have been
proposed, their training efficiency remains low when the data are not
independently and identically distributed (non-i.i.d.) across different
clients. We observe that the slow convergence rates of the existing methods are
(at least partially) caused by the catastrophic forgetting issue during the
local training stage on each individual client, which leads to a large increase
in the loss function concerning the previous training data at the other
clients. Here, we propose FedReg, an algorithm to accelerate FL with alleviated
knowledge forgetting in the local training stage by regularizing locally
trained parameters with the loss on generated pseudo data, which encode the
knowledge of previous training data learned by the global model. Our
comprehensive experiments demonstrate that FedReg not only significantly
improves the convergence rate of FL, especially when the neural network
architecture is deep and the clients' data are extremely non-i.i.d., but is
also able to protect privacy better in classification problems and is more robust
against gradient inversion attacks. The code is available at:
https://github.com/Zoesgithub/FedReg.
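To make the mechanism concrete, below is a minimal PyTorch-style sketch of a regularized local update in the spirit of FedReg. It is an illustration under assumptions: the pseudo data are approximated here by relabeling the client's own inputs with the frozen global model, and names such as local_update_with_forgetting_regularizer and lambda_reg are ours, not from the paper; the exact pseudo-data generation and loss are defined in the paper and the repository above.

```python
import copy

import torch
import torch.nn.functional as F


def local_update_with_forgetting_regularizer(global_model, client_loader,
                                              lr=0.01, epochs=1, lambda_reg=0.5):
    """One client's regularized local update (illustrative, not the exact FedReg loss).

    Here the "pseudo data" are approximated by the client's own inputs relabeled
    with the frozen global model's predictions, so the extra term penalizes
    drifting away from what the global model has already learned. The real
    FedReg generates pseudo data explicitly; see the paper and repository.
    """
    local_model = copy.deepcopy(global_model)
    local_model.train()
    global_model.eval()  # frozen snapshot of the global knowledge
    optimizer = torch.optim.SGD(local_model.parameters(), lr=lr)

    for _ in range(epochs):
        for x, y in client_loader:
            with torch.no_grad():
                pseudo_targets = global_model(x).softmax(dim=-1)

            logits = local_model(x)
            task_loss = F.cross_entropy(logits, y)  # fit the client's own data
            # anti-forgetting regularizer: stay close to the global model's predictions
            reg_loss = F.kl_div(logits.log_softmax(dim=-1), pseudo_targets,
                                reduction="batchmean")
            loss = task_loss + lambda_reg * reg_loss

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    return local_model.state_dict()  # sent to the server for aggregation
```

The server then averages the returned parameters, as in the standard aggregate-then-adapt loop, before broadcasting the global model for the next round.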
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round (a minimal server-averaging sketch follows this list).
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Achieving Linear Speedup in Asynchronous Federated Learning with Heterogeneous Clients [30.135431295658343]
Federated learning (FL) aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
In this paper, we propose an efficient asynchronous federated learning (AFL) framework called DeFedAvg.
DeFedAvg is the first AFL algorithm that achieves the desirable linear speedup property, which indicates its high scalability.
arXiv Detail & Related papers (2024-02-17T05:22:46Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class prototype similarity distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating locally trained models.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FedPDC: Federated Learning for Public Dataset Correction [1.5533842336139065]
Federated learning has lower classification accuracy than traditional machine learning in Non-IID scenarios.
A new algorithm, FedPDC, is proposed to optimize the aggregation scheme of local models and the loss function used in local training.
In many benchmark experiments, FedPDC can effectively improve the accuracy of the global model in the case of extremely unbalanced data distribution.
arXiv Detail & Related papers (2023-02-24T08:09:23Z)
- Aergia: Leveraging Heterogeneity in Federated Learning Systems [5.0650178943079]
Federated Learning (FL) relies on clients to update a global model using their local datasets.
Aergia is a novel approach where slow clients freeze the part of their model that is the most computationally intensive to train.
Aergia significantly reduces the training time under heterogeneous settings by up to 27% and 53% compared to FedAvg and TiFL, respectively.
arXiv Detail & Related papers (2022-10-12T12:59:18Z)
- Federated Adversarial Learning: A Framework with Convergence Analysis [28.136498729360504]
Federated learning (FL) is a trending training paradigm to utilize decentralized training data.
FL allows clients to update model parameters locally for several epochs, then share them with the server for aggregation into a global model.
This training paradigm with multi-local step updating before aggregation exposes unique vulnerabilities to adversarial attacks.
arXiv Detail & Related papers (2022-08-07T04:17:34Z)
- Stochastic Coded Federated Learning with Convergence and Privacy Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a coded federated learning framework, namely stochastic coded federated learning (SCFL), to mitigate the straggler issue.
We characterize the privacy guarantee by the mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
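Several entries above refer to the standard aggregate-then-adapt loop, in which the server averages locally trained parameters. As a point of reference, here is a minimal FedAvg-style averaging sketch; it is a generic illustration (the name fedavg_aggregate is ours), not the specific aggregation rule of any paper listed here.

```python
import copy


def fedavg_aggregate(client_states, client_sizes):
    """Weighted average of client state_dicts (generic FedAvg-style aggregation).

    client_states: list of state_dicts returned by the clients' local updates.
    client_sizes:  number of training samples held by each client.
    """
    total = float(sum(client_sizes))
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        # weight each client's parameter tensor by its share of the data
        avg_state[key] = sum(
            state[key].float() * (n / total)
            for state, n in zip(client_states, client_sizes)
        )
    return avg_state  # loaded into the global model before the next round
```

The averaged state can then be loaded into the global model with load_state_dict and broadcast to the clients for the next round of local updates, such as the regularized update sketched after the main abstract above.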