Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization
- URL: http://arxiv.org/abs/2405.18890v1
- Date: Wed, 29 May 2024 08:46:21 GMT
- Title: Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization
- Authors: Ziqing Fan, Shengchao Hu, Jiangchao Yao, Gang Niu, Ya Zhang, Masashi Sugiyama, Yanfeng Wang
- Abstract summary: In federated learning (FL), the multi-step update and data heterogeneity among clients often lead to a loss landscape with sharper minima.
We propose FedLESAM, a novel algorithm that locally estimates the direction of the global perturbation on the client side.
- Score: 81.32266996009575
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In federated learning (FL), the multi-step local updates and data heterogeneity among clients often lead to a loss landscape with sharper minima, degrading the performance of the resulting global model. Prevalent federated approaches incorporate sharpness-aware minimization (SAM) into local training to mitigate this problem. However, the local loss landscapes may not accurately reflect the flatness of the global loss landscape in heterogeneous environments; as a result, minimizing local sharpness and computing perturbations on client data may fail to deliver the benefits SAM provides in centralized training. To overcome this challenge, we propose FedLESAM, a novel algorithm that locally estimates the direction of the global perturbation on the client side as the difference between the global models received in the previous active round and the current round. Beyond the improved quality, FedLESAM also speeds up federated SAM-based approaches, since it performs backpropagation only once per iteration. Theoretically, we prove a slightly tighter bound than the original FedSAM by ensuring consistent perturbations. Empirically, we conduct comprehensive experiments on four federated benchmark datasets under three partition strategies to demonstrate the superior performance and efficiency of FedLESAM.
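The mechanism in the abstract translates into a small change to the client update. Below is a minimal PyTorch-style sketch contrasting a FedSAM-style local step (two backpropagations: one to compute the local perturbation, one for the update) with a FedLESAM-style step, where the perturbation direction is estimated from the two most recent global models. The flat-tensor interface, the `rho`/`lr` names, and the exact sign and normalization of the estimated direction are illustrative assumptions, not the authors' pseudocode.

```python
import torch

def fedsam_local_step(w, loss_fn, lr=0.1, rho=0.05):
    """FedSAM-style local iteration: TWO backprops per step.

    w       : flat parameter tensor with requires_grad=True
    loss_fn : closure mapping parameters -> scalar minibatch loss
    """
    # 1st backprop: gradient at the current point gives the *local*
    # ascent direction, computed from this client's (possibly
    # unrepresentative) data.
    g, = torch.autograd.grad(loss_fn(w), w)
    eps = rho * g / (g.norm() + 1e-12)  # local perturbation
    # 2nd backprop: gradient at the perturbed point drives the update.
    g_sam, = torch.autograd.grad(loss_fn(w + eps), w)
    with torch.no_grad():
        w -= lr * g_sam
    return w

def fedlesam_local_step(w, loss_fn, w_glob_prev, w_glob_curr,
                        lr=0.1, rho=0.05):
    """FedLESAM-style local iteration: ONE backprop per step.

    w_glob_prev / w_glob_curr : detached copies of the global models
    received in the previous active round and the current round.
    """
    # The estimated *global* perturbation direction costs no extra
    # backprop: it is read off the difference of the two global models
    # (sign and scaling here are illustrative assumptions).
    d = w_glob_prev - w_glob_curr
    eps = rho * d / (d.norm() + 1e-12)
    # Single backprop, taken at the estimated globally-perturbed point.
    g, = torch.autograd.grad(loss_fn(w + eps), w)
    with torch.no_grad():
        w -= lr * g
    return w
```

Over K local steps per round, the FedLESAM variant performs K backward passes where a SAM-based client performs 2K, which is the source of the efficiency gain claimed in the abstract.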
Related papers
- Neighborhood and Global Perturbations Supported SAM in Federated Learning: From Local Tweaks To Global Awareness [29.679323144520037]
Federated Learning (FL) builds a privacy-preserving model under the orchestration of a central server.
We propose a novel FL algorithm, FedTOGA, designed to consider generalization objectives while maintaining minimal uplink communication overhead.
arXiv Detail & Related papers (2024-08-26T09:42:18Z)
- Tackling Data Heterogeneity in Federated Learning via Loss Decomposition [19.755512688738506]
We analyze how Federated Learning (FL) training influences FL performance by decomposing the global loss into three terms: local loss, distribution shift loss, and aggregation loss.
We propose a novel FL method based on global loss decomposition, called FedLD, to jointly reduce these three loss terms.
Our strategies achieve better and more robust performance on retinal and chest X-ray classification compared to other FL algorithms.
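Schematically, the decomposition this summary describes can be written as follows; the symbols are shorthand for the three named terms, not FedLD's exact notation:

```latex
\mathcal{L}_{\text{global}}
  = \underbrace{\mathcal{L}_{\text{local}}}_{\text{fit on each client's data}}
  + \underbrace{\mathcal{L}_{\text{shift}}}_{\text{client distribution shift}}
  + \underbrace{\mathcal{L}_{\text{agg}}}_{\text{model aggregation error}}
```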
arXiv Detail & Related papers (2024-08-22T11:18:11Z)
- Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class-prototype similarity distillation in a federated framework to align the local and global models.
arXiv Detail & Related papers (2023-08-20T04:41:01Z)
- Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape [59.841889495864386]
In federated learning (FL), a cluster of local clients is coordinated by a global server.
Clients are prone to overfitting to their own optima, which can deviate significantly from the global objective.
FedSMOO adopts a dynamic regularizer to steer the local optima toward the global objective.
Our theoretical analysis indicates that FedSMOO achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound.
arXiv Detail & Related papers (2023-05-19T10:47:44Z)
- FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy [84.45004766136663]
Federated learning is an emerging distributed machine learning framework.
It suffers from non-vanishing biases introduced by inconsistent local optima and from severe client drift caused by local over-fitting.
We propose a novel and practical method, FedSpeed, to alleviate the negative impacts posed by these problems.
arXiv Detail & Related papers (2023-02-21T03:55:29Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach achieves faster convergence than typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- FedMR: Federated Learning via Model Recombination [7.404225808071622]
Federated Learning (FL) enables global model training across clients without compromising their confidential local data.
Existing FL methods rely on Federated Averaging (FedAvg)-based aggregation.
This paper proposes a novel and effective FL paradigm named FedMR (Federated Model Recombination).
arXiv Detail & Related papers (2022-08-16T11:30:19Z)
- Generalized Federated Learning via Sharpness Aware Minimization [22.294290071999736]
We propose a general, effective algorithm, FedSAM, based on a Sharpness Aware Minimization (SAM) local optimizer, and develop a momentum FL algorithm to bridge local and global models.
Empirically, our proposed algorithms substantially outperform existing FL studies and significantly decrease the learning deviation.
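For context, the local objective FedSAM builds on is the standard SAM min-max problem, whose inner maximizer is approximated by a first-order step; this is the well-known SAM formulation rather than a FedSAM-specific detail:

```latex
\min_{w} \; \max_{\|\epsilon\|_2 \le \rho} L(w + \epsilon),
\qquad
\hat{\epsilon}(w) = \rho \, \frac{\nabla L(w)}{\|\nabla L(w)\|_2}
```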
arXiv Detail & Related papers (2022-06-06T13:54:41Z)
- Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training [60.892342868936865]
Federated learning (FL) is a rapidly growing privacy-preserving collaborative machine learning paradigm.
We propose a data-heterogeneity-robust FL approach, FedGSP, to address this challenge.
We show that FedGSP improves the accuracy by 3.7% on average compared with seven state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-31T03:15:28Z)