Efficient Generalization Improvement Guided by Random Weight
Perturbation
- URL: http://arxiv.org/abs/2211.11489v1
- Date: Mon, 21 Nov 2022 14:24:34 GMT
- Title: Efficient Generalization Improvement Guided by Random Weight
Perturbation
- Authors: Tao Li, Weihao Yan, Zehao Lei, Yingwen Wu, Kun Fang, Ming Yang,
Xiaolin Huang
- Abstract summary: Sharpness-aware minimization (SAM) establishes a generic scheme for generalization improvements.
We resort to filter-wise random weight perturbations (RWP) to decouple the nested gradients in SAM.
We achieve very competitive performance on CIFAR and remarkably better performance on ImageNet.
- Score: 24.027159739234524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To fully uncover the great potential of deep neural networks (DNNs), various
learning algorithms have been developed to improve the model's generalization
ability. Recently, sharpness-aware minimization (SAM) establishes a generic
scheme for generalization improvements by minimizing the sharpness measure
within a small neighborhood and achieves state-of-the-art performance. However,
SAM requires two consecutive gradient evaluations for solving the min-max
problem and inevitably doubles the training time. In this paper, we resort to
filter-wise random weight perturbations (RWP) to decouple the nested gradients
in SAM. Different from the small adversarial perturbations in SAM, RWP is
softer and allows a much larger magnitude of perturbations. Specifically, we
jointly optimize the loss function with random perturbations and the original
loss function: the former guides the network towards a wider flat region while
the latter helps recover the necessary local information. These two loss terms
are complementary to each other and mutually independent. Hence, the
corresponding gradients can be efficiently computed in parallel, enabling
nearly the same training speed as regular training. As a result, we achieve
very competitive performance on CIFAR and remarkably better performance on
ImageNet (e.g. $\mathbf{+1.1\%}$) compared with SAM, while requiring only half
of the training time. The code is released at https://github.com/nblt/RWP.
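For illustration, the sketch below shows how one RWP-style update could look in PyTorch. It is not the authors' reference implementation (that is at https://github.com/nblt/RWP); the perturbation scale `alpha`, the exact filter-wise noise scaling, and the equal weighting of the two losses are assumptions, and the two backward passes are written sequentially here even though the paper computes them in parallel.

```python
# Minimal, illustrative RWP-style update (assumptions: alpha, filter-wise
# scaling rule, 0.5/0.5 mixing of the two losses).
import copy
import torch
import torch.nn.functional as F


def rwp_step(model, batch, optimizer, alpha=0.01):
    x, y = batch

    # Make an independent copy whose weights receive a random perturbation.
    noisy = copy.deepcopy(model)
    with torch.no_grad():
        for p in noisy.parameters():
            eps = torch.randn_like(p)
            if p.dim() > 1:
                # filter-wise scaling: noise magnitude follows each filter's norm
                eps = eps * p.norm(dim=tuple(range(1, p.dim())), keepdim=True)
            p.add_(alpha * eps)

    # The two losses are independent, so in the paper their gradients are
    # computed in parallel; here they run one after the other for simplicity.
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()   # original loss L(w)
    F.cross_entropy(noisy(x), y).backward()   # perturbed loss L(w + eps)

    # Mix the two gradients and take a regular optimizer step on `model`.
    for p, p_noisy in zip(model.parameters(), noisy.parameters()):
        p.grad = 0.5 * (p.grad + p_noisy.grad)
    optimizer.step()
```

In this sketch the perturbed-loss gradient never touches the clean weights, which is what allows the two gradient evaluations to be dispatched to separate workers and keeps the per-step cost close to regular training.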
Related papers
- Fast Graph Sharpness-Aware Minimization for Enhancing and Accelerating Few-Shot Node Classification [53.727688136434345]
Graph Neural Networks (GNNs) have shown superior performance in node classification.
We present Fast Graph Sharpness-Aware Minimization (FGSAM) that integrates the rapid training of Multi-Layer Perceptrons with the superior performance of GNNs.
Our proposed algorithm outperforms the standard SAM with lower computational costs in FSNC tasks.
arXiv Detail & Related papers (2024-10-22T09:33:29Z)
- Efficient Sharpness-Aware Minimization for Molecular Graph Transformer Models [42.59948316941217]
Sharpness-aware minimization (SAM) has received increasing attention in computer vision since it can effectively eliminate sharp local minima from the training trajectory and mitigate generalization degradation.
We propose a new algorithm named GraphSAM, which reduces the training cost of SAM and improves the generalization performance of graph transformer models.
arXiv Detail & Related papers (2024-06-19T01:03:23Z)
- CR-SAM: Curvature Regularized Sharpness-Aware Minimization [8.248964912483912]
Sharpness-Aware Minimization (SAM) aims to enhance generalizability by minimizing the worst-case loss, using one-step gradient ascent as an approximation (see the sketch below).
In this paper, we introduce a normalized Hessian trace to accurately measure the curvature of the loss landscape on both training and test sets.
In particular, to counter excessive non-linearity of the loss landscape, we propose Curvature Regularized SAM (CR-SAM).
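To make the one-step gradient ascent approximation concrete, here is a minimal SAM update in PyTorch-style code; the neighborhood radius `rho` and the use of plain cross-entropy are assumptions for illustration, not details taken from the CR-SAM paper.

```python
# Illustrative SAM update: one ascent step to the approximate worst case,
# then a descent step from there (hence two gradient evaluations per batch).
import torch
import torch.nn.functional as F


def sam_step(model, batch, optimizer, rho=0.05):
    x, y = batch

    # First gradient evaluation: one-step gradient ascent to approximate the
    # worst-case perturbation within a ball of radius rho.
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()
    grad_norm = torch.norm(
        torch.stack([p.grad.norm() for p in model.parameters()]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)                 # w -> w + eps
            eps.append(e)

    # Second gradient evaluation at the perturbed point, then undo eps and
    # take the descent step; this second pass is why SAM doubles training cost.
    optimizer.zero_grad()
    F.cross_entropy(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                 # restore the original weights
    optimizer.step()
```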
arXiv Detail & Related papers (2023-12-21T03:46:29Z)
- Systematic Investigation of Sparse Perturbed Sharpness-Aware Minimization Optimizer [158.2634766682187]
Deep neural networks often suffer from poor generalization due to complex and non-convex loss landscapes.
Sharpness-Aware Minimization (SAM) is a popular solution that smooths the loss landscape by minimizing the maximized change of training loss when adding a perturbation to the weights.
In this paper, we propose Sparse SAM (SSAM), an efficient and effective training scheme that achieves sparse perturbation by a binary mask (a toy version is sketched below).
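As an illustration of "perturbation by a binary mask", the helper below builds a SAM-style perturbation that is zero outside a sparse mask. The top-magnitude mask rule, the parameter names, and the default sparsity are assumptions for the sketch, not details taken from the SSAM papers.

```python
# Hedged sketch: sparse SAM-style perturbation restricted by a binary mask.
import torch


def sparse_perturbation(param: torch.Tensor, grad: torch.Tensor,
                        rho: float = 0.05, sparsity: float = 0.5) -> torch.Tensor:
    """Return a SAM-style perturbation that is zero outside a binary mask."""
    flat = grad.abs().flatten()
    k = max(1, int((1.0 - sparsity) * flat.numel()))   # number of kept entries
    threshold = flat.topk(k).values.min()
    mask = (grad.abs() >= threshold).float()            # binary mask
    eps = rho * grad / (grad.norm() + 1e-12)            # dense SAM perturbation
    return eps * mask                                    # perturb only masked weights
```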
arXiv Detail & Related papers (2023-06-30T09:33:41Z)
- Make Sharpness-Aware Minimization Stronger: A Sparsified Perturbation Approach [132.37966970098645]
One of the popular solutions is Sharpness-Aware Minimization (SAM), which minimizes the change of loss when adding a perturbation to the weights.
In this paper, we propose an efficient and effective training scheme coined Sparse SAM (SSAM), which achieves sparse perturbation by a binary mask.
In addition, we theoretically prove that SSAM can converge at the same rate as SAM, i.e., $O(\log T/\sqrt{T})$.
arXiv Detail & Related papers (2022-10-11T06:30:10Z)
- Sharpness-Aware Training for Free [163.1248341911413]
Sharpness-Aware Minimization (SAM) has shown that minimizing a sharpness measure, which reflects the geometry of the loss landscape, can significantly reduce the generalization error.
Sharpness-Aware Training for Free (SAF) mitigates the sharp landscape at almost zero additional computational cost over the base optimizer.
SAF ensures convergence to a flat minimum with improved generalization capabilities.
arXiv Detail & Related papers (2022-05-27T16:32:43Z)
- Joint inference and input optimization in equilibrium networks [68.63726855991052]
The deep equilibrium model is a class of models that forgoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer (a toy fixed-point iteration is sketched below).
We show that there is a natural synergy between these two settings, i.e., equilibrium inference and input optimization.
We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training, and gradient-based meta-learning.
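As a toy illustration of the fixed-point view mentioned above, the sketch below computes a "layer output" by naive fixed-point iteration; real deep equilibrium models use faster root solvers and implicit differentiation, so this is only a schematic, and the contractive example layer is an assumption.

```python
# Toy deep-equilibrium-style forward pass: iterate z <- f(z, x) to a fixed point.
import torch


def deq_forward(f, x: torch.Tensor, max_iter: int = 50,
                tol: float = 1e-5) -> torch.Tensor:
    """Iterate z <- f(z, x) until convergence and return the fixed point."""
    z = torch.zeros_like(x)
    for _ in range(max_iter):
        z_next = f(z, x)
        if (z_next - z).norm() < tol * (z.norm() + 1e-12):
            return z_next
        z = z_next
    return z


# Example: a contractive layer, so the iteration converges.
layer = lambda z, x: 0.5 * torch.tanh(z) + x
print(deq_forward(layer, torch.randn(4)))
```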
arXiv Detail & Related papers (2021-11-25T19:59:33Z)
- Efficient Sharpness-aware Minimization for Improved Training of Neural Networks [146.2011175973769]
This paper proposes the Efficient Sharpness Aware Minimizer (ESAM), which boosts SAM's efficiency at no cost to its generalization performance.
ESAM includes two novel and efficient training strategies: Stochastic Weight Perturbation and Sharpness-Sensitive Data Selection.
We show, via extensive experiments on the CIFAR and ImageNet datasets, that ESAM enhances the efficiency over SAM from requiring 100% extra computation to 40% vis-a-vis base optimizers.
arXiv Detail & Related papers (2021-10-07T02:20:37Z)