Neural networks trained with SGD learn distributions of increasing
complexity
- URL: http://arxiv.org/abs/2211.11567v2
- Date: Fri, 26 May 2023 13:11:17 GMT
- Title: Neural networks trained with SGD learn distributions of increasing
complexity
- Authors: Maria Refinetti and Alessandro Ingrosso and Sebastian Goldt
- Abstract summary: We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics.
They exploit higher-order statistics only later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of Gaussian universality in learning.
- Score: 78.30235086565388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability of deep neural networks to generalise well even when they
interpolate their training data has been explained using various "simplicity
biases". These theories postulate that neural networks avoid overfitting by
first learning simple functions, say a linear classifier, before learning more
complex, non-linear functions. Meanwhile, data structure is also recognised as
a key ingredient for good generalisation, yet its role in simplicity biases is
not yet understood. Here, we show that neural networks trained using stochastic
gradient descent initially classify their inputs using lower-order input
statistics, like mean and covariance, and exploit higher-order statistics only
later during training. We first demonstrate this distributional simplicity bias
(DSB) in a solvable model of a neural network trained on synthetic data. We
empirically demonstrate DSB in a range of deep convolutional networks and
visual transformers trained on CIFAR10, and show that it even holds in networks
pre-trained on ImageNet. We discuss the relation of DSB to other simplicity
biases and consider its implications for the principle of Gaussian universality
in learning.
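To make the claim concrete, here is a minimal sketch of the kind of "Gaussian clone" test the abstract implies: train a network on data whose classes also differ in higher-order statistics, and track its accuracy on a clone test set that matches only the per-class mean and covariance. The data model, network size, and hyperparameters below are illustrative assumptions, not the paper's settings.

```python
# Sketch of a "Gaussian clone" test for the distributional simplicity
# bias (DSB). All sizes and the data model below are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, n = 32, 4000

def sample_class(sign, n):
    # Mean shift plus a cubed component, so the classes also differ
    # in statistics beyond mean and covariance.
    z = torch.randn(n, d)
    return z + 0.5 * sign + 0.3 * sign * z**3

X = torch.cat([sample_class(+1.0, n), sample_class(-1.0, n)])
y = torch.cat([torch.ones(n), torch.zeros(n)]).long()

def gaussian_clone(Xc):
    # Resample from a Gaussian with the empirical mean and covariance of Xc.
    mu = Xc.mean(0)
    cov = (Xc - mu).T @ (Xc - mu) / (len(Xc) - 1)
    L = torch.linalg.cholesky(cov + 1e-4 * torch.eye(d))
    return torch.randn(len(Xc), d) @ L.T + mu

X_clone = torch.cat([gaussian_clone(X[:n]), gaussian_clone(X[n:])])

net = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.SGD(net.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(2001):
    idx = torch.randint(0, 2 * n, (128,))
    loss = loss_fn(net(X[idx]), y[idx])
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 400 == 0:
        with torch.no_grad():
            acc = (net(X).argmax(1) == y).float().mean().item()
            acc_clone = (net(X_clone).argmax(1) == y).float().mean().item()
        print(f"step {step:5d}  real {acc:.3f}  clone {acc_clone:.3f}")
```

Under DSB, the two accuracies should track each other early in training and separate once the network starts exploiting the non-Gaussian part of the data.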
Related papers
- Early learning of the optimal constant solution in neural networks and humans [4.016584525313835]
We show that learning of a target function is preceded by an early phase in which networks learn the optimal constant solution (OCS): the input-independent prediction that minimizes the average loss.
We show that learning of the OCS can emerge even in the absence of bias terms and is equivalently driven by generic correlations in the input data.
Our work suggests the OCS as a universal learning principle in supervised, error-corrective learning.
arXiv Detail & Related papers (2024-06-25T11:12:52Z)
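Under cross-entropy, the OCS is the constant prediction equal to the empirical class frequencies. A small worked example (the class counts are made up):

```python
# Illustration of the optimal constant solution (OCS) under cross-entropy:
# the constant prediction minimizing expected loss is the vector of
# empirical class frequencies. Class counts below are made up.
import torch

counts = torch.tensor([700., 200., 100.])   # imbalanced 3-class toy data
ocs = counts / counts.sum()                  # OCS prediction: class priors
print("OCS (class priors):", ocs)            # tensor([0.70, 0.20, 0.10])

# Expected cross-entropy of a constant prediction p is H(prior, p),
# minimized at p = prior; compare the OCS against the uniform prediction:
def expected_ce(p):
    return -(ocs * p.log()).sum()

print("CE at OCS:    ", expected_ce(ocs).item())               # ~0.802
print("CE at uniform:", expected_ce(torch.full((3,), 1/3)).item())  # ~1.099
```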
- Simplicity Bias of Two-Layer Networks beyond Linearly Separable Data [4.14360329494344]
We characterize simplicity bias for general datasets in the context of two-layer neural networks with small weights trained with gradient flow.
For datasets with an XOR-like pattern, we precisely identify the learned features and demonstrate that simplicity bias intensifies during later training stages.
These results indicate that features learned in the middle stages of training may be more useful for OOD transfer.
arXiv Detail & Related papers (2024-05-27T16:00:45Z)
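A toy illustration of this regime (setup assumed, not the paper's construction): on XOR-like data no linear classifier beats chance, so a two-layer net with small initial weights typically sits at a plateau near 50% accuracy before nonlinear features emerge and accuracy jumps.

```python
# Toy probe of simplicity bias on XOR-like data (illustrative setup):
# a two-layer net with small initial weights typically plateaus near
# chance, since no linear classifier separates XOR, before nonlinear
# features emerge and accuracy jumps.
import torch
import torch.nn as nn

torch.manual_seed(0)
n = 1000
X = torch.randn(n, 2)
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).long()    # XOR of the sign pattern

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
with torch.no_grad():                          # small initialization
    for p in net.parameters():
        p.mul_(0.1)

opt = torch.optim.SGD(net.parameters(), lr=0.5)
loss_fn = nn.CrossEntropyLoss()
for step in range(3001):
    loss = loss_fn(net(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 500 == 0:
        acc = (net(X).argmax(1) == y).float().mean().item()
        print(f"step {step:4d}  acc {acc:.3f}")  # ~0.5 early, ~1.0 late
```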
- A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks [79.28094304325116]
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
We propose a new scalable functional regularization scheme that helps the neural network learn higher-degree frequencies.
arXiv Detail & Related papers (2023-05-16T20:06:01Z)
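"Degree" here refers to the Walsh-Hadamard (Boolean Fourier) basis. A short numpy sketch of the quantity involved, computing how a function on {-1,1}^n distributes its energy across degrees; the example function is made up, and the regularizer itself is not implemented:

```python
# Compute the Walsh-Hadamard spectrum of a function on {-1,1}^n and group
# its energy by degree |S|. The regularizer described above (not
# implemented here) shapes exactly this kind of spectrum.
import numpy as np
from itertools import product

n = 4
xs = np.array(list(product([-1, 1], repeat=n)))      # all 2^n inputs

def walsh_spectrum(f_vals):
    # Coefficient for each subset S: E_x[f(x) * prod_{i in S} x_i].
    coeffs = {}
    for S in product([0, 1], repeat=n):
        chi = np.prod(xs[:, np.array(S, dtype=bool)], axis=1)
        coeffs[S] = (f_vals * chi).mean()
    return coeffs

f_vals = np.sign(xs.sum(axis=1) + 0.5)               # majority vote (tie -> +1)
energy = {}
for S, c in walsh_spectrum(f_vals).items():
    energy[sum(S)] = energy.get(sum(S), 0.0) + c**2
print({k: round(v, 3) for k, v in sorted(energy.items())})  # sums to 1 (Parseval)
```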
- Reconstructing Training Data from Trained Neural Networks [42.60217236418818]
We show that in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier.
We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods.
arXiv Detail & Related papers (2022-06-15T18:35:16Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Redundant representations help generalization in wide neural networks [71.38860635025907]
We study the last hidden layer representations of various state-of-the-art convolutional neural networks.
We find that if the last hidden representation is wide enough, its neurons tend to split into groups that carry identical information, and differ from each other only by statistically independent noise.
arXiv Detail & Related papers (2021-06-07T10:18:54Z)
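A sketch of the kind of redundancy probe this suggests (illustrative, not the paper's exact analysis): correlate last-hidden-layer neurons over a batch and group near-duplicates. Synthetic activations stand in for a trained network:

```python
# Redundancy probe sketch: correlate last-hidden-layer activations over a
# batch and group near-duplicate neurons. Synthetic activations stand in
# for a trained network here.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a wide last hidden layer: 8 underlying signals, each
# copied 16 times with small independent noise.
signals = rng.normal(size=(512, 8))
acts = np.repeat(signals, 16, axis=1) + 0.05 * rng.normal(size=(512, 128))

# Correlation matrix between neurons, then greedy grouping by threshold.
Z = (acts - acts.mean(0)) / acts.std(0)
corr = Z.T @ Z / len(Z)

groups, unassigned = [], set(range(corr.shape[0]))
while unassigned:
    i = min(unassigned)
    group = [j for j in unassigned if corr[i, j] > 0.9]
    groups.append(group)
    unassigned -= set(group)

print(f"{corr.shape[0]} neurons fell into {len(groups)} groups")  # ~8 groups
```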
- How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks [80.55378250013496]
We study how neural networks trained by gradient descent extrapolate what they learn outside the support of the training distribution.
Graph Neural Networks (GNNs), in contrast, have shown some success at extrapolating in more complex tasks.
arXiv Detail & Related papers (2020-09-24T17:48:59Z)
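One concrete reason feedforward ReLU networks extrapolate in a limited way is that they are piecewise linear: along any ray t -> f(t*u), the output is eventually exactly linear in t. A quick self-contained check on an untrained network (illustrative only):

```python
# A ReLU MLP is piecewise linear, so along any ray t -> f(t*u) the output
# is eventually exactly linear in t. Untrained net, illustrative check.
import torch
import torch.nn as nn

torch.set_default_dtype(torch.float64)
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
u = torch.randn(8)

ts = [100.0, 200.0, 300.0, 400.0]
v = [net(t * u).item() for t in ts]
# Second differences vanish once the ray is past the last ReLU kink
# (increase the t values if a kink happens to fall beyond them):
print(v[2] - 2 * v[1] + v[0], v[3] - 2 * v[2] + v[1])   # both ~ 0
```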
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
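A hedged sketch of the two ingredients this scheme suggests, in the spirit of the summary rather than a verified reimplementation: a generalized cross-entropy (GCE) loss that amplifies what the biased network finds easy, and a relative-difficulty weighting that up-weights its failures when training the debiased network.

```python
# Sketch of failure-based debiasing (hyperparameters and wiring are
# illustrative): a "biased" net is trained with generalized cross-entropy
# (GCE), and its per-sample loss reweights the debiased net's training.
import torch
import torch.nn.functional as F

def gce_loss(logits, y, q=0.7):
    # Generalized cross-entropy: (1 - p_y^q) / q. It emphasizes samples
    # the model already gets right, amplifying easy-to-learn (biased) cues.
    p_y = F.softmax(logits, dim=1).gather(1, y[:, None]).squeeze(1)
    return ((1.0 - p_y.clamp_min(1e-8) ** q) / q).mean()

def debiased_loss(logits_b, logits_d, y):
    # Relative difficulty: samples the biased net fails on (high ce_b)
    # get large weights when training the debiased net.
    ce_b = F.cross_entropy(logits_b, y, reduction="none").detach()
    ce_d = F.cross_entropy(logits_d, y, reduction="none")
    w = ce_b / (ce_b + ce_d.detach() + 1e-8)
    return (w * ce_d).mean()
```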
- The Surprising Simplicity of the Early-Time Learning Dynamics of Neural Networks [43.860358308049044]
We show that the common perception of neural network training dynamics as complex and difficult to analyze can be completely false in the early phase of learning.
We argue that this surprising simplicity can persist in networks with more layers and with convolutional architectures.
arXiv Detail & Related papers (2020-06-25T17:42:49Z)
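A minimal way to probe this claim (data model and sizes are illustrative assumptions): train a multi-layer network and a plain linear model side by side on data with a weak linear cue plus a higher-order cue, and compare their accuracies over time. Early on they tend to move together, before the network pulls ahead by exploiting the nonlinear structure.

```python
# Track a multi-layer net against a plain linear model on the same data.
# Data: a weak linear cue (mean shift) plus a skewness cue that carries
# no linear information. Sizes and rates are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 2000, 20
y = torch.randint(0, 2, (n,))
s = (2 * y - 1).float()                     # labels as +/-1
X = torch.randn(n, d)
X[:, 0] += 0.4 * s                          # weak linear cue
X[:, 1] = s * (X[:, 1].abs() - 0.8)         # skew cue, mean ~ 0

mlp = nn.Sequential(nn.Linear(d, 256), nn.ReLU(), nn.Linear(256, 2))
lin = nn.Linear(d, 2)
opts = [torch.optim.SGD(m.parameters(), lr=0.1) for m in (mlp, lin)]
loss_fn = nn.CrossEntropyLoss()

for step in range(1001):
    for m, opt in zip((mlp, lin), opts):
        loss = loss_fn(m(X), y)
        opt.zero_grad(); loss.backward(); opt.step()
    if step % 200 == 0:
        accs = [(m(X).argmax(1) == y).float().mean().item() for m in (mlp, lin)]
        print(f"step {step:4d}  mlp {accs[0]:.3f}  linear {accs[1]:.3f}")
```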