Normalization Layers Are All That Sharpness-Aware Minimization Needs
- URL: http://arxiv.org/abs/2306.04226v2
- Date: Fri, 17 Nov 2023 08:23:05 GMT
- Title: Normalization Layers Are All That Sharpness-Aware Minimization Needs
- Authors: Maximilian Mueller, Tiffany Vlaar, David Rolnick, Matthias Hein
- Abstract summary: Sharpness-aware minimization (SAM) was proposed to reduce sharpness of minima.
We show that perturbing only the affine normalization parameters (typically comprising 0.1% of the total parameters) in the adversarial step of SAM can outperform perturbing all of the parameters.
- Score: 53.799769473526275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sharpness-aware minimization (SAM) was proposed to reduce sharpness of minima
and has been shown to enhance generalization performance in various settings.
In this work we show that perturbing only the affine normalization parameters
(typically comprising 0.1% of the total parameters) in the adversarial step of
SAM can outperform perturbing all of the parameters. This finding generalizes to
different SAM variants and both ResNet (Batch Normalization) and Vision
Transformer (Layer Normalization) architectures. We consider alternative sparse
perturbation approaches and find that these do not achieve similar performance
enhancement at such extreme sparsity levels, showing that this behaviour is
unique to the normalization layers. Although our findings reaffirm the
effectiveness of SAM in improving generalization performance, they cast doubt
on whether this is solely caused by reduced sharpness.
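To make the claim concrete, here is a minimal PyTorch-style sketch of a SAM step in which only the affine parameters of the normalization layers are perturbed in the ascent step, assuming the standard two-step SAM update; the name-matching heuristic, the `rho` value, and the helper names are illustrative assumptions, not the authors' implementation.

```python
import torch

def is_norm_affine(name, param):
    # Heuristic (an assumption, not the paper's criterion): affine weights/biases of
    # BatchNorm/LayerNorm are 1-D tensors whose name contains a normalization keyword.
    return param.ndim == 1 and any(k in name.lower() for k in ("bn", "norm", "ln"))

def sam_on_step(model, loss_fn, x, y, base_opt, rho=0.05):
    """One SAM step in which only normalization affine parameters are perturbed."""
    norm_params = [p for n, p in model.named_parameters() if is_norm_affine(n, p)]

    # 1) Ascent step: perturb only the normalization parameters.
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, norm_params)
    grad_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    eps = [rho * g / grad_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(norm_params, eps):
            p.add_(e)

    # 2) Descent step: gradients of *all* parameters at the perturbed point.
    base_opt.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():  # restore the original weights before the update
        for p, e in zip(norm_params, eps):
            p.sub_(e)
    base_opt.step()
```

Per the abstract, this restriction touches only around 0.1% of the parameters yet matches or exceeds full-parameter perturbation in the settings studied.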
Related papers
- 1st-Order Magic: Analysis of Sharpness-Aware Minimization [0.0]
Sharpness-Aware Minimization (SAM) is an optimization technique designed to improve generalization by favoring flatter loss minima.
We find that more precise approximations of the proposed SAM objective degrade generalization performance.
This highlights a gap in our understanding of SAM's effectiveness and calls for further investigation into the role of approximations in optimization.
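For reference, the SAM objective and the first-order solution of its inner maximization that this summary alludes to can be written as follows (standard formulation from the SAM literature; $\rho$ is the perturbation radius):

```latex
\min_{w}\;\max_{\|\epsilon\|_2\le\rho} L(w+\epsilon),
\qquad
\epsilon^{\ast}(w)\;\approx\;\rho\,\frac{\nabla L(w)}{\|\nabla L(w)\|_2}.
```

The finding above is that replacing this first-order $\epsilon^{\ast}$ with a more accurate solution of the inner maximization can degrade, rather than improve, generalization.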
arXiv Detail & Related papers (2024-11-03T23:50:34Z)
- $\boldsymbol{\mu}\mathbf{P}^2$: Effective Sharpness Aware Minimization Requires Layerwise Perturbation Scaling [49.25546155981064]
We study the infinite-width limit of neural networks trained with Sharpness Aware Minimization (SAM)
Our findings reveal that the dynamics of standard SAM effectively reduce to applying SAM solely in the last layer in wide neural networks.
In contrast, we identify a stable parameterization with layerwise perturbation scaling, which we call $\textit{Maximal Update and Perturbation}$ ($\mu$P$^2$), that ensures all layers are both feature learning and effectively perturbed in the limit.
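The exact $\mu$P$^2$ scaling rules are not spelled out in this summary; as a rough contrast (an illustration, not the paper's prescription), standard SAM normalizes one global perturbation, whereas a layerwise scheme assigns each layer $l$ its own radius $\rho_l$, chosen so that no layer's perturbation becomes ineffective as the width grows:

```latex
\epsilon \;=\; \rho\,\frac{g}{\|g\|_2}
\qquad\text{vs.}\qquad
\epsilon_l \;=\; \rho_l\,\frac{g_l}{\|g_l\|_2},
\quad g=\nabla_w L(w),\; g_l=\nabla_{w_l} L(w).
```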
arXiv Detail & Related papers (2024-10-31T16:32:04Z)
- A Universal Class of Sharpness-Aware Minimization Algorithms [57.29207151446387]
We introduce a new class of sharpness measures, leading to new sharpness-aware objective functions.
We prove that these measures are universally expressive, allowing any function of the training loss Hessian matrix to be represented by appropriate hyperparameters.
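To see why sharpness measures are naturally functions of the Hessian, note that at a critical point a second-order expansion reduces the usual SAM sharpness to the largest Hessian eigenvalue (a standard observation, included here only for context):

```latex
S_\rho(w) \;=\; \max_{\|\epsilon\|_2\le\rho} L(w+\epsilon) - L(w)
\;\approx\; \tfrac{\rho^2}{2}\,\lambda_{\max}\!\big(\nabla^2 L(w)\big)
\qquad\text{when } \nabla L(w)=0.
```

The related paper's claim is that a suitably parameterized family of such measures can represent any function of the Hessian, not just $\lambda_{\max}$.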
arXiv Detail & Related papers (2024-06-06T01:52:09Z)
- Friendly Sharpness-Aware Minimization [62.57515991835801]
Sharpness-Aware Minimization (SAM) has been instrumental in improving deep neural network training by minimizing both training loss and loss sharpness.
We investigate the key role of batch-specific gradient noise within the adversarial perturbation, i.e., the current minibatch gradient.
By decomposing the adversarial perturbation into full-gradient and gradient-noise components, we discover that relying solely on the full-gradient component degrades generalization, while excluding it leads to improved performance.
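The decomposition referred to above can be written explicitly; in practice the full gradient $\nabla L(w)$ is not available at every step and would have to be estimated, for example by a running average of minibatch gradients (an assumption about the implementation, not stated in the summary):

```latex
g_{\mathcal{B}}(w) \;=\; \underbrace{\nabla L(w)}_{\text{full gradient}}
\;+\; \underbrace{\big(g_{\mathcal{B}}(w)-\nabla L(w)\big)}_{\text{batch-specific noise}}.
```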
arXiv Detail & Related papers (2024-03-19T01:39:33Z)
- Enhancing Sharpness-Aware Optimization Through Variance Suppression [48.908966673827734]
This work embraces the geometry of the loss function, where neighborhoods of 'flat minima' heighten generalization ability.
It seeks 'flat valleys' by minimizing the maximum loss caused by an adversary perturbing parameters within the neighborhood.
Although critical to account for sharpness of the loss function, such an 'over-friendly adversary' can curtail the outmost level of generalization.
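One concrete reading of "variance suppression" is sketched below, under the assumption that the noisy minibatch ascent direction is the variance source; this is an illustration, not necessarily the paper's exact update.

```python
import torch

class SmoothedAdversary:
    """Sketch: smooth the ascent direction with an exponential moving average so a
    single noisy minibatch gradient cannot fully steer the perturbation
    (hypothetical update, for illustration only)."""
    def __init__(self, rho=0.05, theta=0.2):
        self.rho, self.theta, self.d = rho, theta, None

    def perturbation(self, grads):
        flat = torch.cat([g.reshape(-1) for g in grads])
        self.d = flat if self.d is None else (1 - self.theta) * self.d + self.theta * flat
        return self.rho * self.d / (self.d.norm() + 1e-12)
```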
arXiv Detail & Related papers (2023-09-27T13:18:23Z)
- Improved Deep Neural Network Generalization Using m-Sharpness-Aware Minimization [14.40189851070842]
Sharpness-Aware Minimization (SAM) modifies the underlying loss function to guide descent methods towards flatter minima.
Recent work suggests that mSAM can outperform SAM in terms of test accuracy.
This paper presents a comprehensive empirical evaluation of mSAM on various tasks and datasets.
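For context (the summary does not define mSAM), m-sharpness as commonly described splits the minibatch $\mathcal{B}$ into disjoint shards $\mathcal{B}_1,\dots,\mathcal{B}_k$ of size $m$ and averages the SAM update over them; the notation below is a sketch of that idea rather than the paper's exact algorithm:

```latex
w \;\leftarrow\; w - \eta\,\frac{1}{k}\sum_{i=1}^{k} \nabla L_{\mathcal{B}_i}\!\big(w+\epsilon_i\big),
\qquad
\epsilon_i \;=\; \rho\,\frac{\nabla L_{\mathcal{B}_i}(w)}{\|\nabla L_{\mathcal{B}_i}(w)\|_2}.
```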
arXiv Detail & Related papers (2022-12-07T00:37:55Z)
- Efficient Sharpness-aware Minimization for Improved Training of Neural Networks [146.2011175973769]
This paper proposes Efficient Sharpness Aware Minimizer (ESAM), which boosts SAM's efficiency at no cost to its generalization performance.
ESAM includes two novel and efficient training strategies: Stochastic Weight Perturbation and Sharpness-Sensitive Data Selection.
We show, via extensive experiments on the CIFAR and ImageNet datasets, that ESAM enhances the efficiency over SAM, reducing the extra computation from 100% to 40% vis-a-vis base optimizers.
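As a rough illustration of Stochastic Weight Perturbation, the sketch below perturbs only a random subset of the weights in the ascent step; the masking ratio and the rescaling are assumptions, not the paper's exact formulation.

```python
import torch

def masked_perturbation(grads, rho=0.05, keep_prob=0.5):
    # Randomly select a subset of the weights to perturb (illustrative only).
    masks = [(torch.rand_like(g) < keep_prob).float() for g in grads]
    masked = [g * m for g, m in zip(grads, masks)]
    norm = torch.sqrt(sum((g ** 2).sum() for g in masked)) + 1e-12
    return [rho * g / norm for g in masked]
```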
arXiv Detail & Related papers (2021-10-07T02:20:37Z)
- ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks [2.8292841621378844]
We introduce the concept of adaptive sharpness which is scale-invariant and propose the corresponding generalization bound.
We suggest a novel learning method, adaptive sharpness-aware minimization (ASAM), utilizing the proposed generalization bound.
Experimental results in various benchmark datasets show that ASAM contributes to significant improvement of model generalization performance.
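A sketch of the adaptive-sharpness idea, assuming the commonly used elementwise normalization operator $T_w=\operatorname{diag}(|w|)$ (the operator choice here is illustrative): measuring the perturbation in a weight-scaled norm makes the sharpness invariant to rescaling of the weights.

```latex
\max_{\|T_w^{-1}\epsilon\|_2\le\rho} L(w+\epsilon),
\qquad
\epsilon^{\ast} \;\approx\; \rho\,\frac{T_w^{2}\,\nabla L(w)}{\|T_w\,\nabla L(w)\|_2}.
```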
arXiv Detail & Related papers (2021-02-23T10:26:54Z)