Improving Generalization in Federated Learning by Seeking Flat Minima
- URL: http://arxiv.org/abs/2203.11834v2
- Date: Thu, 24 Mar 2022 10:30:14 GMT
- Title: Improving Generalization in Federated Learning by Seeking Flat Minima
- Authors: Debora Caldarola, Barbara Caputo, Marco Ciccone
- Abstract summary: Models trained in federated settings often suffer from degraded performance and fail to generalize.
In this work, we investigate such behavior through the lens of the geometry of the loss landscape and the Hessian eigenspectrum.
Motivated by prior studies connecting the sharpness of the loss surface and the generalization gap, we show that i) training clients locally with Sharpness-Aware Minimization (SAM) or its adaptive version (ASAM) and ii) averaging stochastic weights (SWA) on the server side can substantially improve generalization.
- Score: 23.937135834522145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Models trained in federated settings often suffer from degraded performance
and fail to generalize, especially when facing heterogeneous scenarios. In
this work, we investigate such behavior through the lens of the geometry of the
loss and Hessian eigenspectrum, linking the model's lack of generalization
capacity to the sharpness of the solution. Motivated by prior studies
connecting the sharpness of the loss surface and the generalization gap, we
show that i) training clients locally with Sharpness-Aware Minimization (SAM)
or its adaptive version (ASAM) and ii) averaging stochastic weights (SWA) on
the server-side can substantially improve generalization in Federated Learning
and help bridge the gap with centralized models. By seeking parameters in
neighborhoods having uniform low loss, the model converges towards flatter
minima and its generalization significantly improves in both homogeneous and
heterogeneous scenarios. Empirical results demonstrate the effectiveness of
those optimizers across a variety of benchmark vision datasets (e.g.
CIFAR10/100, Landmarks-User-160k, IDDA) and tasks (large scale classification,
semantic segmentation, domain generalization).
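To make the two ingredients above concrete, here is a minimal, hedged sketch of one federated round that combines a SAM-style local update with server-side weight averaging and an SWA-style running average of the round models. This is not the authors' implementation: the toy linear model, random client batches, and hyper-parameters (lr, rho, two clients, three rounds) are illustrative placeholders only.

# Hedged sketch (not the paper's code): clients run a SAM-style two-step update
# locally, the server averages the returned weights (FedAvg-style), and an SWA
# model keeps a running average of the global models across rounds.
import copy
import torch
import torch.nn as nn

def sam_step(model, loss_fn, x, y, lr=0.01, rho=0.05):
    """One SAM update: ascend to the worst point in a rho-ball, then descend."""
    loss = loss_fn(model(x), y)          # first pass: gradient at the current weights
    model.zero_grad()
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = [rho * g / (grad_norm + 1e-12) for g in grads]
    with torch.no_grad():                # climb to the perturbed point w + eps
        for p, e in zip(model.parameters(), eps):
            p.add_(e)
    loss = loss_fn(model(x), y)          # second pass: gradient at the perturbed point
    model.zero_grad()
    loss.backward()
    with torch.no_grad():                # undo the perturbation, then descend
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)
            p.sub_(lr * p.grad)

def server_average(client_models):
    """FedAvg-style uniform parameter averaging."""
    avg = copy.deepcopy(client_models[0])
    with torch.no_grad():
        for tensors in zip(avg.parameters(), *(m.parameters() for m in client_models)):
            tensors[0].copy_(torch.stack([t.data for t in tensors[1:]]).mean(dim=0))
    return avg

# Toy usage: two clients, one local SAM step each, then averaging plus SWA.
torch.manual_seed(0)
global_model = nn.Linear(10, 2)
swa_model, n_avgd = copy.deepcopy(global_model), 1
for rnd in range(3):
    locals_ = []
    for _ in range(2):                   # each client trains on its own (random) batch
        local = copy.deepcopy(global_model)
        x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
        sam_step(local, nn.CrossEntropyLoss(), x, y)
        locals_.append(local)
    global_model = server_average(locals_)
    with torch.no_grad():                # server-side SWA: running average over rounds
        for p_swa, p in zip(swa_model.parameters(), global_model.parameters()):
            p_swa.mul_(n_avgd / (n_avgd + 1)).add_(p / (n_avgd + 1))
    n_avgd += 1

ASAM would replace the fixed rho-ball with a per-parameter, scale-invariant neighborhood; the corresponding objective is sketched at the end of the related-papers list below.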
Related papers
- A Simple and Generalist Approach for Panoptic Segmentation [57.94892855772925]
Generalist vision models aim to use one and the same architecture for a variety of vision tasks.
While such a shared architecture may seem attractive, generalist models tend to be outperformed by their bespoke counterparts.
We address this problem by introducing two key contributions, without compromising the desirable properties of generalist models.
arXiv Detail & Related papers (2024-08-29T13:02:12Z) - Agnostic Sharpness-Aware Minimization [29.641227264358704]
Sharpness-Aware Minimization (SAM) has been instrumental in improving deep neural network training by minimizing both the training loss and the sharpness of the loss landscape.
Model-Agnostic Meta-Learning (MAML) is a framework designed to improve the adaptability of models.
We introduce Agnostic-SAM, a novel approach that combines the principles of both SAM and MAML.
arXiv Detail & Related papers (2024-06-11T09:49:00Z) - Mitigate Domain Shift by Primary-Auxiliary Objectives Association for
Generalizing Person ReID [39.98444065846305]
ReID models struggle to learn domain-invariant representations solely through training on an instance classification objective.
We introduce a method that guides model learning of the primary ReID instance classification objective by a concurrent auxiliary learning objective on weakly labeled pedestrian saliency detection.
Our model can be extended with the recent test-time paradigm to form PAOA+, which performs on-the-fly optimization against the auxiliary objective.
arXiv Detail & Related papers (2023-10-24T15:15:57Z) - Gradient constrained sharpness-aware prompt learning for vision-language
models [99.74832984957025]
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs).
By analyzing the loss landscapes of the state-of-the-art method and vanilla Sharpness-aware Minimization (SAM) based method, we conclude that the trade-off performance correlates to both loss value and loss sharpness.
We propose a novel SAM-based method for prompt learning, denoted as Gradient Constrained Sharpness-aware Context Optimization (GCSCoOp).
arXiv Detail & Related papers (2023-09-14T17:13:54Z) - Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - Normalization Layers Are All That Sharpness-Aware Minimization Needs [53.799769473526275]
Sharpness-aware minimization (SAM) was proposed to reduce the sharpness of minima.
We show that perturbing only the affine normalization parameters (typically comprising 0.1% of the total parameters) in the adversarial step of SAM can outperform perturbing all of the parameters.
arXiv Detail & Related papers (2023-06-07T08:05:46Z) - Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape [59.841889495864386]
In federated learning (FL), a cluster of local clients is coordinated by a global server.
Clients are prone to overfit to their own optima, which can deviate severely from the global objective.
FedSMOO adopts a dynamic regularizer to steer the local optima towards the global objective.
Our theoretical analysis indicates that FedSMOO achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound.
arXiv Detail & Related papers (2023-05-19T10:47:44Z) - Sharpness-Aware Gradient Matching for Domain Generalization [84.14789746460197]
The goal of domain generalization (DG) is to enhance the generalization capability of the model learned from a source domain to other unseen domains.
The recently developed Sharpness-Aware Minimization (SAM) method aims to achieve this goal by minimizing the sharpness measure of the loss landscape.
We present two conditions ensuring that the model can converge to a flat minimum with a small loss, and propose an algorithm named Sharpness-Aware Gradient Matching (SAGM).
Our proposed SAGM method consistently outperforms the state-of-the-art methods on five DG benchmarks.
arXiv Detail & Related papers (2023-03-18T07:25:12Z) - ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning
of Deep Neural Networks [2.8292841621378844]
We introduce the concept of adaptive sharpness, which is scale-invariant, and propose the corresponding generalization bound.
We suggest a novel learning method, adaptive sharpness-aware minimization (ASAM), utilizing the proposed generalization bound.
Experimental results on various benchmark datasets show that ASAM contributes to a significant improvement of model generalization performance.
arXiv Detail & Related papers (2021-02-23T10:26:54Z) - Sharpness-Aware Minimization for Efficiently Improving Generalization [36.87818971067698]
We introduce a novel, effective procedure for simultaneously minimizing loss value and loss sharpness.
Sharpness-Aware Minimization (SAM) seeks parameters that lie in neighborhoods having uniformly low loss.
We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets.
arXiv Detail & Related papers (2020-10-03T19:02:10Z)
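For reference, the last entry above is the original SAM paper, and ASAM (also listed above) adapts its neighborhood. A standard paraphrase of the two objectives (notation ours, not quoted from either abstract) is:

\[
\min_{w}\; \max_{\|\epsilon\|_2 \le \rho} L(w+\epsilon),
\qquad
\hat{\epsilon}(w) \approx \rho \, \frac{\nabla L(w)}{\|\nabla L(w)\|_2}
\quad \text{(SAM, first-order solution of the inner max)}
\]
\[
\min_{w}\; \max_{\|T_w^{-1}\epsilon\|_2 \le \rho} L(w+\epsilon),
\qquad
T_w \ \text{a per-parameter scaling, e.g. } \operatorname{diag}(|w|)
\quad \text{(ASAM, adaptive sharpness)}
\]

In practice both reduce to the two-pass ascend-then-descend update sketched earlier on this page.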