Extreme Memorization via Scale of Initialization
- URL: http://arxiv.org/abs/2008.13363v2
- Date: Sat, 1 May 2021 22:09:15 GMT
- Title: Extreme Memorization via Scale of Initialization
- Authors: Harsh Mehta, Ashok Cutkosky, Behnam Neyshabur
- Abstract summary: We construct an experimental setup in which changing the scale of initialization strongly impacts the implicit regularization induced by SGD.
We find that the extent and manner in which generalization ability is affected depends on the activation and loss function used.
In the case of the homogeneous ReLU activation, we show that this behavior can be attributed to the loss function.
- Score: 72.78162454173803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We construct an experimental setup in which changing the scale of
initialization strongly impacts the implicit regularization induced by SGD,
interpolating from good generalization performance to completely memorizing the
training set while making little progress on the test set. Moreover, we find
that the extent and manner in which generalization ability is affected depends
on the activation and loss function used, with $\sin$ activation demonstrating
extreme memorization. In the case of the homogeneous ReLU activation, we show
that this behavior can be attributed to the loss function. Our empirical
investigation reveals that increasing the scale of initialization correlates
with misalignment of representations and gradients across examples in the same
class. This insight allows us to devise an alignment measure over gradients and
representations which can capture this phenomenon. We demonstrate that our
alignment measure correlates with generalization of deep models trained on
image classification tasks.
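The alignment measure is defined precisely in the paper; as a rough illustration, a minimal sketch of one plausible instantiation (mean within-class cosine similarity of per-example representations or flattened gradients, assuming NumPy arrays and a hypothetical `within_class_alignment` helper) is:

```python
# Hypothetical sketch: within-class cosine alignment of per-example
# quantities (representations or flattened gradients). The paper defines
# its own alignment measure; this is only one plausible instantiation.
import numpy as np

def within_class_alignment(vectors: np.ndarray, labels: np.ndarray) -> float:
    """Mean pairwise cosine similarity between examples of the same class.

    vectors: (n_examples, dim) array of representations or flattened
             per-example gradients.
    labels:  (n_examples,) integer class labels.
    """
    # Normalize each row so that dot products become cosine similarities.
    normed = vectors / (np.linalg.norm(vectors, axis=1, keepdims=True) + 1e-12)
    sims = []
    for c in np.unique(labels):
        v = normed[labels == c]
        if len(v) < 2:
            continue
        gram = v @ v.T                               # pairwise cosine similarities
        n = len(v)
        off_diag = (gram.sum() - n) / (n * (n - 1))  # exclude self-similarity
        sims.append(off_diag)
    return float(np.mean(sims))
```

Under this reading, values near 1 indicate strongly aligned within-class gradients or representations, while lower values correspond to the misalignment that the abstract links to large initialization scales.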
Related papers
- Is network fragmentation a useful complexity measure? [0.8480931990442769]
Deep neural network classifiers can exhibit 'fragmentation', where the model function rapidly changes class as the input space is traversed.
We study this phenomenon in the context of image classification and ask whether fragmentation could be predictive of generalization performance.
We report on new observations related to fragmentation, namely (i) fragmentation is not limited to the input space but occurs in the hidden representations as well, (ii) fragmentation follows the trends in the validation error throughout training, and (iii) fragmentation is not a direct result of increased weight norms.
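The paper's exact fragmentation measure is not given in this summary; a minimal sketch of one natural proxy (counting predicted-class switches along a straight path between two inputs, assuming a PyTorch classifier and a hypothetical `path_fragmentation` helper) is:

```python
# Hypothetical sketch of one way to probe fragmentation: count how often
# the predicted class changes along a straight path between two inputs.
# The paper's precise definition may differ; `model` is any callable that
# maps a batch of inputs to class logits (e.g., a torch.nn.Module).
import torch

def path_fragmentation(model, x0: torch.Tensor, x1: torch.Tensor,
                       steps: int = 256) -> int:
    """Number of predicted-class switches on the segment from x0 to x1."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x0.dim()))
    path = (1 - alphas) * x0.unsqueeze(0) + alphas * x1.unsqueeze(0)
    with torch.no_grad():
        preds = model(path).argmax(dim=-1)
    return int((preds[1:] != preds[:-1]).sum())
```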
arXiv Detail & Related papers (2024-11-07T13:27:37Z)
- STAT: Towards Generalizable Temporal Action Localization [56.634561073746056]
Weakly-supervised temporal action localization (WTAL) aims to recognize and localize action instances with only video-level labels.
Existing methods suffer from severe performance degradation when transferring to different distributions.
We propose GTAL, which focuses on improving the generalizability of action localization methods.
arXiv Detail & Related papers (2024-04-20T07:56:21Z)
- On the Dynamics Under the Unhinged Loss and Beyond [104.49565602940699]
We introduce the unhinged loss, a concise loss function that offers more mathematical opportunities to analyze closed-form dynamics.
The unhinged loss allows for considering more practical techniques, such as time-varying learning rates and feature normalization.
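As a point of reference, the classical binary unhinged loss is simply $\ell(y, f(x)) = 1 - y f(x)$ for labels $y \in \{-1, +1\}$; the paper may use a rescaled or multiclass variant. A minimal sketch:

```python
# Minimal sketch of the classical binary unhinged loss
# ell(y, f(x)) = 1 - y * f(x) with y in {-1, +1}; the related paper may
# use a rescaled or multiclass variant of this definition.
import torch

def unhinged_loss(scores: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Mean unhinged loss; `labels` are +/-1, `scores` are real-valued."""
    return (1.0 - labels * scores).mean()
```

Because the loss is linear in the score, its gradient with respect to the score is constant, which is what makes closed-form analysis of the training dynamics tractable.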
arXiv Detail & Related papers (2023-12-13T02:11:07Z)
- A Theoretical Understanding of Shallow Vision Transformers: Learning, Generalization, and Sample Complexity [71.11795737362459]
ViTs with self-attention modules have recently achieved great empirical success in many tasks.
However, the theoretical analysis of their learning and generalization remains elusive.
This paper provides the first theoretical analysis of a shallow ViT for a classification task.
arXiv Detail & Related papers (2023-02-12T22:12:35Z)
- Theoretical Characterization of How Neural Network Pruning Affects its Generalization [131.1347309639727]
This work makes the first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization.
It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero.
More surprisingly, the generalization bound gets better as the pruning fraction gets larger.
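The summary does not specify the pruning scheme; a minimal sketch of one common choice (global magnitude pruning at a given fraction, using a hypothetical `magnitude_prune` helper in PyTorch) is:

```python
# Hypothetical sketch: magnitude pruning at a given fraction, one common
# way to realize the "pruning fraction" studied here (the paper's exact
# pruning scheme may differ).
import torch

def magnitude_prune(weights: torch.Tensor, fraction: float) -> torch.Tensor:
    """Zero out roughly the `fraction` of entries with smallest magnitude."""
    k = int(fraction * weights.numel())
    if k == 0:
        return weights.clone()
    # k-th smallest absolute value serves as the pruning threshold.
    threshold = weights.abs().flatten().kthvalue(k).values
    return torch.where(weights.abs() > threshold,
                       weights, torch.zeros_like(weights))
```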
arXiv Detail & Related papers (2023-01-01T03:10:45Z)
- Relating Regularization and Generalization through the Intrinsic Dimension of Activations [11.00580615194563]
We show that common regularization techniques uniformly decrease the last-layer ID (LLID) of validation set activations for image classification models.
We also examine the LLID over the course of training of models that exhibit grokking.
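The estimator behind LLID is not spelled out in this summary; a minimal sketch of a simple PCA-based proxy for the intrinsic dimension of last-layer activations (the paper may use a different, e.g. nearest-neighbor-based, estimator) is:

```python
# Hypothetical sketch: PCA-based proxy for the intrinsic dimension of
# last-layer activations; the paper may use a different estimator.
import numpy as np

def pca_intrinsic_dimension(acts: np.ndarray, var_threshold: float = 0.9) -> int:
    """Smallest number of principal components explaining `var_threshold`
    of the variance of the (n_examples, dim) activation matrix `acts`."""
    centered = acts - acts.mean(axis=0, keepdims=True)
    # Singular values give the variance spectrum of the activations.
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    ratios = np.cumsum(var) / var.sum()
    return int(np.searchsorted(ratios, var_threshold) + 1)
```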
arXiv Detail & Related papers (2022-11-23T19:00:00Z)
- Regression as Classification: Influence of Task Formulation on Neural Network Features [16.239708754973865]
Neural networks can be trained to solve regression problems by using gradient-based methods to minimize the square loss.
However, practitioners often prefer to reformulate regression as a classification problem, observing that training with the cross-entropy loss results in better performance.
By focusing on two-layer ReLU networks, we explore how the implicit bias induced by gradient-based optimization could partly explain the phenomenon.
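A minimal sketch of the reformulation described above (discretizing the scalar target into bins and training with cross entropy; `bin_edges` and the helper name are illustrative) is:

```python
# Hypothetical sketch of the reformulation: discretize a scalar regression
# target into bins and train with cross entropy instead of square loss.
import torch
import torch.nn.functional as F

def regression_as_classification_loss(scores: torch.Tensor,
                                      targets: torch.Tensor,
                                      bin_edges: torch.Tensor) -> torch.Tensor:
    """Cross-entropy loss after binning real-valued `targets`.

    scores:    (batch, n_bins) logits, one per bin.
    targets:   (batch,) real-valued regression targets.
    bin_edges: (n_bins - 1,) sorted interior bin boundaries.
    """
    classes = torch.bucketize(targets, bin_edges)  # (batch,) bin indices
    return F.cross_entropy(scores, classes)
```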
arXiv Detail & Related papers (2022-11-10T15:13:23Z)
- Generalizable Information Theoretic Causal Representation [37.54158138447033]
We propose to learn causal representation from observational data by regularizing the learning procedure with mutual information measures according to our hypothetical causal graph.
The optimization involves a counterfactual loss, from which we deduce a theoretical guarantee that the causality-inspired learning achieves reduced sample complexity and better generalization ability.
arXiv Detail & Related papers (2022-02-17T00:38:35Z)
- The Break-Even Point on Optimization Trajectories of Deep Neural Networks [64.7563588124004]
We argue for the existence of a "break-even" point on the optimization trajectory.
We show that using a large learning rate in the initial phase of training reduces the variance of the gradient.
We also show that using a low learning rate results in bad conditioning of the loss surface even for a neural network with batch normalization layers.
arXiv Detail & Related papers (2020-02-21T22:55:51Z)