FedNSAM: Consistency of Local and Global Flatness for Federated Learning
- URL: http://arxiv.org/abs/2602.23827v1
- Date: Fri, 27 Feb 2026 09:07:47 GMT
- Title: FedNSAM: Consistency of Local and Global Flatness for Federated Learning
- Authors: Junkang Liu, Fanhua Shang, Yuxuan Tian, Hongying Liu, Yuanyuan Liu,
- Abstract summary: We propose a novel FedNSAM algorithm that accelerates the SAM algorithm by introducing global Nesterov momentum into the local update. FedNSAM uses the global Nesterov momentum as the direction for the local estimation of the global perturbation and for extrapolation. Empirically, we conduct comprehensive experiments on CNN and Transformer models to verify the superior performance and efficiency of FedNSAM.
- Score: 26.41380732455181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In federated learning (FL), multi-step local updates and data heterogeneity usually lead to sharper global minima, which degrades the performance of the global model. Popular FL algorithms integrate sharpness-aware minimization (SAM) into local training to address this issue. However, under high data heterogeneity, flatness achieved in local training does not imply flatness of the global model. Therefore, minimizing the sharpness of the local loss surfaces on client data does not allow SAM to effectively improve the generalization of the global model in FL. We define the \textbf{flatness distance} to explain this phenomenon. By rethinking SAM in FL and theoretically analyzing the \textbf{flatness distance}, we propose a novel \textbf{FedNSAM} algorithm that accelerates SAM by introducing global Nesterov momentum into the local update, harmonizing global and local flatness. \textbf{FedNSAM} uses the global Nesterov momentum as the direction for locally estimating the global perturbation and for extrapolation. Theoretically, we prove a tighter convergence bound than FedSAM via Nesterov extrapolation. Empirically, we conduct comprehensive experiments on CNN and Transformer models to verify the superior performance and efficiency of \textbf{FedNSAM}. The code is available at https://github.com/junkangLiu0/FedNSAM.
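The following is a minimal, hypothetical sketch of the local update idea described in the abstract: the SAM perturbation and the extrapolation direction both follow the server-provided global Nesterov momentum rather than the local gradient. The names (`global_momentum`, `rho`, `beta`), signs, and scaling below are illustrative assumptions, not the authors' exact algorithm.

```python
import torch

def fednsam_local_update(model, loss_fn, data_loader, global_momentum,
                         lr=0.01, rho=0.05, beta=0.9, local_steps=5):
    """Hypothetical sketch of one client's FedNSAM-style local training."""
    params = [p for p in model.parameters() if p.requires_grad]
    # Unit direction taken from the global (Nesterov) momentum sent by the server.
    m_norm = torch.sqrt(sum((m ** 2).sum() for m in global_momentum)).item() + 1e-12
    for (x, y), _ in zip(data_loader, range(local_steps)):
        # 1) SAM ascent: perturb the weights by rho along the global momentum direction.
        with torch.no_grad():
            for p, m in zip(params, global_momentum):
                p.add_(m, alpha=rho / m_norm)
        # 2) Local gradient evaluated at the perturbed point.
        model.zero_grad()
        loss_fn(model(x), y).backward()
        # 3) Undo the perturbation, then take a momentum-corrected descent step.
        with torch.no_grad():
            for p, m in zip(params, global_momentum):
                p.sub_(m, alpha=rho / m_norm)
                p.sub_(p.grad + beta * m, alpha=lr)
    return model
```

Here `global_momentum` is assumed to be a list of tensors matching the model's parameters, maintained by the server across rounds.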
Related papers
- Federated Learning with Feedback Alignment [1.879054346828946]
Federated Learning (FL) enables collaborative training across multiple clients while preserving data privacy. We introduce Federated Learning with Feedback Alignment (FLFA), a novel framework that integrates feedback alignment into FL. FLFA uses the global model's weights as a shared feedback matrix during the backward pass of local training, aligning local updates with the global model efficiently.
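A minimal NumPy sketch of the feedback-alignment idea summarized above, assuming a two-layer MLP with a squared-error loss; the dictionary layout (`w1`, `w2`), the argument `global_w2`, and the loss are illustrative choices, not the FLFA paper's exact formulation.

```python
import numpy as np

def flfa_local_step(x, y, local, global_w2, lr=0.1):
    """Sketch: backward pass uses the global model's output weights as feedback."""
    # Forward pass with the client's local weights.
    h = np.tanh(x @ local['w1'])            # hidden activations, shape (batch, d_h)
    out = h @ local['w2']                    # linear outputs, shape (batch, d_out)
    err = out - y                            # gradient of 0.5 * ||out - y||^2 w.r.t. out
    # Feedback alignment: propagate the error through global_w2, not local['w2'].
    delta_h = (err @ global_w2.T) * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    # Gradient steps on the local weights only.
    local['w2'] -= lr * (h.T @ err)
    local['w1'] -= lr * (x.T @ delta_h)
    return local
```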
arXiv Detail & Related papers (2025-12-14T16:59:55Z) - Neighborhood and Global Perturbations Supported SAM in Federated Learning: From Local Tweaks To Global Awareness [29.679323144520037]
Federated Learning (FL) is coordinated by a central server to build a privacy-preserving model.
We propose a novel FL algorithm, FedTOGA, designed to consider generalization objectives while maintaining minimal uplink communication overhead.
arXiv Detail & Related papers (2024-08-26T09:42:18Z) - Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization [81.32266996009575]
In federated learning (FL), the multi-step update and data heterogeneity among clients often lead to a loss landscape with sharper minima.
We propose FedLESAM, a novel algorithm that locally estimates the direction of global perturbation on client side.
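A small sketch of what "locally estimating the global perturbation" can look like on the client; using the difference of the two most recently received global models as the direction and scaling it to the SAM radius `rho` are assumptions of this sketch, not necessarily FedLESAM's exact rule.

```python
import torch

def estimated_global_perturbation(global_prev, global_curr, rho=0.05):
    """Sketch: estimate the SAM ascent direction without an extra local gradient pass."""
    # Direction estimated from two consecutive global models (lists of tensors).
    diffs = [w_prev - w_curr for w_prev, w_curr in zip(global_prev, global_curr)]
    norm = torch.sqrt(sum((d ** 2).sum() for d in diffs)) + 1e-12
    # Scale the estimated direction to the SAM perturbation radius rho.
    return [rho * d / norm for d in diffs]
```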
arXiv Detail & Related papers (2024-05-29T08:46:21Z) - Rethinking Client Drift in Federated Learning: A Logit Perspective [125.35844582366441]
Federated Learning (FL) enables multiple clients to collaboratively learn in a distributed way, allowing for privacy protection.
We find that the difference in logits between the local and global models increases as the model is continuously updated.
We propose a new algorithm, FedCSD, which performs class-prototype similarity distillation in a federated framework to align the local and global models.
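A hedged sketch of a class-prototype similarity distillation term in the spirit of FedCSD; the cosine similarity, temperature `tau`, and KL formulation are illustrative choices rather than the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def csd_alignment_loss(local_feats, global_feats, global_prototypes, tau=2.0):
    """Sketch: distill the local prototype-similarity distribution toward the global one."""
    # Cosine similarities of each sample to the C global class prototypes: (batch, C).
    protos = F.normalize(global_prototypes, dim=1)
    local_sim = F.normalize(local_feats, dim=1) @ protos.T
    global_sim = F.normalize(global_feats, dim=1) @ protos.T
    # Soften with a temperature and align the local distribution to the (frozen) global one.
    log_p_local = F.log_softmax(local_sim / tau, dim=1)
    p_global = F.softmax(global_sim.detach() / tau, dim=1)
    return F.kl_div(log_p_local, p_global, reduction="batchmean") * tau * tau
```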
arXiv Detail & Related papers (2023-08-20T04:41:01Z) - Understanding How Consistency Works in Federated Learning via Stage-wise Relaxed Initialization [84.42306265220274]
Federated learning (FL) is a distributed paradigm that coordinates massive local clients to collaboratively train a global model.
Previous works have implicitly shown that FL suffers from the "client-drift" problem, which is caused by the inconsistent optima across local clients.
To alleviate the negative impact of client drift and explore its substance in FL, we first design an efficient FL algorithm, FedInit.
arXiv Detail & Related papers (2023-06-09T06:55:15Z) - Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape [59.841889495864386]
In federated learning (FL), a cluster of local clients is coordinated by a global server.
Clients are prone to overfitting to their own optima, which deviate sharply from the global objective.
FedSMOO adopts a dynamic regularizer to guide the local optima towards the global objective.
Our theoretical analysis indicates that FedSMOO achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound.
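A rough sketch of what a dynamic regularizer on the local objective can look like, written in the FedDyn-style form often combined with SAM in this line of work; the coefficients, the dual update, and the omission of FedSMOO's SAM component are simplifications of this sketch.

```python
import torch

def dyn_regularized_loss(local_loss, params, global_params, duals, alpha=0.1):
    """Sketch: dual correction plus proximal term anchoring the client to the global objective."""
    reg = 0.0
    for p, g, lam in zip(params, global_params, duals):
        reg = reg - (lam * p).sum() + 0.5 * alpha * ((p - g) ** 2).sum()
    return local_loss + reg

def update_duals(params, global_params, duals, alpha=0.1):
    """Sketch of the dual update after local training: lambda <- lambda - alpha * (w - w_global)."""
    with torch.no_grad():
        for p, g, lam in zip(params, global_params, duals):
            lam.sub_(alpha * (p - g))
```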
arXiv Detail & Related papers (2023-05-19T10:47:44Z) - FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy [84.45004766136663]
Federated learning is an emerging distributed machine learning framework.
It suffers from non-vanishing biases introduced by locally inconsistent optima and from severe client drift caused by local over-fitting.
We propose a novel and practical method, FedSpeed, to alleviate the negative impacts posed by these problems.
arXiv Detail & Related papers (2023-02-21T03:55:29Z) - Generalized Federated Learning via Sharpness Aware Minimization [22.294290071999736]
We propose a general, effective algorithm, FedSAM, based on a Sharpness Aware Minimization (SAM) local optimizer, and develop a momentum FL algorithm to bridge local and global models.
Empirically, our proposed algorithms substantially outperform existing FL studies and significantly decrease the learning deviation.
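For reference, a minimal sketch of one sharpness-aware local step of the kind FedSAM-style methods build on; the hyper-parameter values are illustrative.

```python
import torch

def sam_local_step(model, loss_fn, x, y, lr=0.01, rho=0.05):
    """Sketch of a single SAM step on one local mini-batch."""
    params = [p for p in model.parameters() if p.requires_grad]
    # First pass: gradient at the current weights.
    model.zero_grad()
    loss_fn(model(x), y).backward()
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params)).item() + 1e-12
    eps = [rho * p.grad / grad_norm for p in params]
    # Ascend to the perturbed point and recompute the gradient there.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)
    model.zero_grad()
    loss_fn(model(x), y).backward()
    # Restore the weights and descend with the sharpness-aware gradient.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
            p.sub_(p.grad, alpha=lr)
    return model
```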
arXiv Detail & Related papers (2022-06-06T13:54:41Z) - Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method, FedFTG, to fine-tune the global model on the server.
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z) - Preservation of the Global Knowledge by Not-True Self Knowledge Distillation in Federated Learning [8.474470736998136]
In Federated Learning (FL), a strong global model is collaboratively learned by aggregating the clients' locally trained models.
We observe that fitting a biased local distribution shifts the features on the global distribution and results in forgetting of global knowledge.
We propose a simple yet effective framework, Federated Local Self-Distillation (FedLSD), which utilizes the global knowledge on locally available data.
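A small sketch of a local self-distillation loss in the spirit of FedLSD: the client fits its own labels while matching the frozen global model's softened predictions on the same local data. The mixing weight `beta` and temperature `tau` are illustrative.

```python
import torch.nn.functional as F

def lsd_local_loss(local_logits, global_logits, targets, beta=0.3, tau=3.0):
    """Sketch: cross-entropy on local labels plus distillation from the global model."""
    ce = F.cross_entropy(local_logits, targets)
    kd = F.kl_div(
        F.log_softmax(local_logits / tau, dim=1),
        F.softmax(global_logits.detach() / tau, dim=1),
        reduction="batchmean",
    ) * tau * tau
    return (1.0 - beta) * ce + beta * kd
```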
arXiv Detail & Related papers (2021-06-06T11:51:47Z) - Federated Learning With Quantized Global Model Updates [84.55126371346452]
We study federated learning, which enables mobile devices to utilize their local datasets to train a global model.
We introduce a lossy FL (LFL) algorithm, in which both the global model and the local model updates are quantized before being transmitted.
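A minimal sketch of lossy quantization applied to a model (or model-update) tensor before transmission; the uniform stochastic quantizer below is an illustration, not the paper's exact scheme.

```python
import torch

def quantize_tensor(delta, num_bits=4):
    """Sketch: uniform stochastic quantization to 2**num_bits levels, then de-quantization."""
    levels = 2 ** num_bits - 1
    lo, hi = delta.min(), delta.max()
    scale = (hi - lo) / levels + 1e-12
    normalized = (delta - lo) / scale
    floor = torch.floor(normalized)
    # Stochastic rounding keeps the quantizer unbiased in expectation.
    q = floor + (torch.rand_like(delta) < (normalized - floor)).float()
    return q * scale + lo  # tensor the receiver would reconstruct
```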
arXiv Detail & Related papers (2020-06-18T16:55:20Z)