Sharpness-Aware Minimization Leads to Low-Rank Features
- URL: http://arxiv.org/abs/2305.16292v2
- Date: Sat, 28 Oct 2023 22:02:29 GMT
- Title: Sharpness-Aware Minimization Leads to Low-Rank Features
- Authors: Maksym Andriushchenko, Dara Bahri, Hossein Mobahi, Nicolas Flammarion
- Abstract summary: Sharpness-aware minimization (SAM) is a recently proposed method that minimizes the sharpness of the training loss of a neural network.
We show that SAM reduces the rank of the features at different layers of a neural network.
We confirm this effect theoretically and check that it can also occur in deep networks.
- Score: 49.64754316927016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sharpness-aware minimization (SAM) is a recently proposed method that
minimizes the sharpness of the training loss of a neural network. While its
generalization improvement is well-known and is the primary motivation, we
uncover an additional intriguing effect of SAM: reduction of the feature rank
which happens at different layers of a neural network. We show that this
low-rank effect occurs very broadly: for different architectures such as
fully-connected networks, convolutional networks, vision transformers and for
different objectives such as regression, classification, language-image
contrastive training. To better understand this phenomenon, we provide a
mechanistic understanding of how low-rank features arise in a simple two-layer
network. We observe that a significant number of activations gets entirely
pruned by SAM which directly contributes to the rank reduction. We confirm this
effect theoretically and check that it can also occur in deep networks,
although the overall rank reduction mechanism can be more complex, especially
for deep networks with pre-activation skip connections and self-attention
layers. We make our code available at
https://github.com/tml-epfl/sam-low-rank-features.
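As a companion to the abstract, the following is a minimal PyTorch sketch of the two ingredients it refers to: one SAM update (compute the gradient at the perturbed weights w + eps, with eps = rho * g / ||g||, then apply it at the original weights) and a soft feature-rank measurement based on how many principal components of a batch of activations explain most of their variance. The names `sam_step` and `feature_rank` and the 0.99 variance threshold are illustrative assumptions, not the exact code or metric from the linked repository.

```python
# Hedged sketch, not the repository's code: one SAM update and a soft
# feature-rank measurement for a batch of intermediate activations.
import torch


def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One sharpness-aware update: take the gradient at the perturbed
    weights w + eps (eps = rho * g / ||g||) and apply it at w."""
    optimizer.zero_grad()

    # 1) Gradient at the current weights w.
    loss_fn(model(x), y).backward()

    # 2) Ascent step to the perturbed point w + eps.
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([p.grad.norm() for p in params]))
    eps_list = []
    with torch.no_grad():
        for p in params:
            eps = rho * p.grad / (grad_norm + 1e-12)
            p.add_(eps)
            eps_list.append((p, eps))
    optimizer.zero_grad()

    # 3) Gradient at the perturbed weights.
    loss_fn(model(x), y).backward()

    # 4) Undo the perturbation and update the original weights.
    with torch.no_grad():
        for p, eps in eps_list:
            p.sub_(eps)
    optimizer.step()
    optimizer.zero_grad()


def feature_rank(features, var_threshold=0.99):
    """Soft rank of a (batch, dim) activation matrix: the number of
    principal components needed to explain `var_threshold` of the variance."""
    f = features - features.mean(dim=0, keepdim=True)
    s = torch.linalg.svdvals(f)
    var = s**2 / (s**2).sum().clamp_min(1e-12)
    return int((torch.cumsum(var, dim=0) < var_threshold).sum().item()) + 1
```

To reproduce the paper's observation in spirit, one would hook an intermediate layer, collect its activations over a batch, and compare `feature_rank` between a model trained with `sam_step` and one trained with plain SGD.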
Related papers
- Order parameters and phase transitions of continual learning in deep neural networks [6.349503549199403]
Continual learning (CL) enables animals to learn new tasks without erasing prior knowledge.
CL in artificial neural networks (NNs) is challenging due to catastrophic forgetting, where new learning degrades performance on older tasks.
We present a statistical-mechanics theory of CL in deep, wide NNs, which characterizes the network's input-output mapping as it learns a sequence of tasks.
arXiv Detail & Related papers (2024-07-14T20:22:36Z)
- Why Does Sharpness-Aware Minimization Generalize Better Than SGD? [102.40907275290891]
We show why Sharpness-Aware Minimization (SAM) generalizes better than Stochastic Gradient Descent (SGD) for a certain data model and two-layer convolutional ReLU networks.
Our result explains the benefits of SAM, particularly its ability to prevent noise learning in the early stages, thereby facilitating more effective learning of features.
arXiv Detail & Related papers (2023-10-11T07:51:10Z)
- Centered Self-Attention Layers [89.21791761168032]
The self-attention mechanism in transformers and the message-passing mechanism in graph neural networks are applied repeatedly.
We show that this repeated application inevitably leads to oversmoothing, i.e., to similar representations at the deeper layers (a toy illustration follows this entry).
We present a correction term for the aggregating operator of these mechanisms.
arXiv Detail & Related papers (2023-06-02T15:19:08Z)
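As a toy illustration of the oversmoothing described in the entry above (a sketch of the general phenomenon only; the paper's correction term is not reproduced here): repeatedly applying a row-stochastic aggregation, as in softmax attention or mean-style message passing, makes the token representations nearly identical at deeper layers. The names `spread`, `tokens`, and `depth` are illustrative.

```python
# Toy sketch of oversmoothing: repeated row-stochastic aggregation
# collapses the token representations onto (almost) a single point.
import numpy as np

rng = np.random.default_rng(0)
tokens, dim, depth = 16, 32, 40
X = rng.standard_normal((tokens, dim))


def spread(X):
    """Mean pairwise distance between token representations."""
    diffs = X[:, None, :] - X[None, :, :]
    return float(np.linalg.norm(diffs, axis=-1).mean())


print(f"layer  0: spread = {spread(X):.4f}")
for layer in range(1, depth + 1):
    A = rng.random((tokens, tokens))
    A /= A.sum(axis=1, keepdims=True)  # row-stochastic, like softmax attention
    X = A @ X                          # aggregation only (no MLP, no residual)
    if layer % 10 == 0:
        print(f"layer {layer:2d}: spread = {spread(X):.4f}")
```

The printed spread shrinks rapidly with depth; the paper's contribution is a correction term for the aggregation operator that counteracts exactly this collapse.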
- Network Degeneracy as an Indicator of Training Performance: Comparing Finite and Infinite Width Angle Predictions [3.04585143845864]
We show that as networks get deeper, they become increasingly susceptible to degeneracy (an empirical illustration follows this entry).
We use a simple algorithm that accurately predicts the level of degeneracy for any given fully connected ReLU network architecture.
arXiv Detail & Related papers (2023-06-02T13:02:52Z)
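To illustrate the degeneracy discussed in the entry above (an empirical sketch of the quantity being studied, not the paper's prediction algorithm): under standard He initialization, the cosine between the hidden representations of two distinct inputs drifts toward 1 as a fully connected ReLU network gets deeper, i.e., the representations become increasingly parallel. The width and depth below are arbitrary illustrative choices.

```python
# Hedged sketch: measure how the angle between the hidden representations
# of two random inputs collapses with depth in a random ReLU network.
import numpy as np

rng = np.random.default_rng(0)
width, depth = 512, 60


def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


h1, h2 = rng.standard_normal(width), rng.standard_normal(width)
print(f"input     : cos(angle) = {cosine(h1, h2):+.3f}")
for layer in range(1, depth + 1):
    W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)  # He init
    h1, h2 = np.maximum(W @ h1, 0.0), np.maximum(W @ h2, 0.0)       # ReLU layer
    if layer % 15 == 0:
        print(f"layer {layer:3d} : cos(angle) = {cosine(h1, h2):+.3f}")
```

Increasing `depth` makes the collapse more pronounced, in line with the entry's claim that deeper networks are more susceptible to degeneracy.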
- A Generic Shared Attention Mechanism for Various Backbone Neural Networks [53.36677373145012]
Self-attention modules (SAMs) produce strongly correlated attention maps across different layers.
Dense-and-Implicit Attention (DIA) shares SAMs across layers and employs a long short-term memory module.
Our simple yet effective DIA can consistently enhance various network backbones.
arXiv Detail & Related papers (2022-10-27T13:24:08Z)
- Improving the Trainability of Deep Neural Networks through Layerwise Batch-Entropy Regularization [1.3999481573773072]
We introduce and evaluate the batch-entropy, which quantifies the flow of information through each layer of a neural network.
We show that we can train a "vanilla" fully connected network and convolutional neural network with 500 layers by simply adding the batch-entropy regularization term to the loss function.
arXiv Detail & Related papers (2022-08-01T20:31:58Z)
- Rank Diminishing in Deep Neural Networks [71.03777954670323]
The rank of a neural network measures the information flowing across its layers.
It is an instance of a key structural condition that applies across broad domains of machine learning.
For neural networks, however, the intrinsic mechanism that yields low-rank structures remains vague and unclear.
arXiv Detail & Related papers (2022-06-13T12:03:32Z)
- HALO: Learning to Prune Neural Networks with Shrinkage [5.283963846188862]
Deep neural networks achieve state-of-the-art performance in a variety of tasks by extracting a rich set of features from unstructured data.
Modern techniques for inducing sparsity and reducing model size are (1) network pruning, (2) training with a sparsity inducing penalty, and (3) training a binary mask jointly with the weights of the network.
We present a novel penalty called Hierarchical Adaptive Lasso (HALO), which learns to adaptively sparsify the weights of a given network via trainable parameters.
arXiv Detail & Related papers (2020-08-24T04:08:48Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)