Learning sparse features can lead to overfitting in neural networks
- URL: http://arxiv.org/abs/2206.12314v1
- Date: Fri, 24 Jun 2022 14:26:33 GMT
- Title: Learning sparse features can lead to overfitting in neural networks
- Authors: Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, Matthieu
Wyart
- Abstract summary: We show that feature learning can perform worse than lazy training.
Although sparsity is known to be essential for learning anisotropic data, it is detrimental when the target function is constant or smooth along certain directions of input space.
- Score: 9.2104922520782
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is widely believed that the success of deep networks lies in their ability
to learn a meaningful representation of the features of the data. Yet,
understanding when and how this feature learning improves performance remains a
challenge: for example, it is beneficial for modern architectures trained to
classify images, whereas it is detrimental for fully-connected networks trained
for the same task on the same data. Here we propose an explanation for this
puzzle, by showing that feature learning can perform worse than lazy training
(via random feature kernel or the NTK) as the former can lead to a sparser
neural representation. Although sparsity is known to be essential for learning
anisotropic data, it is detrimental when the target function is constant or
smooth along certain directions of input space. We illustrate this phenomenon
in two settings: (i) regression of Gaussian random functions on the
d-dimensional unit sphere and (ii) classification of benchmark datasets of
images. For (i), we compute the scaling of the generalization error with number
of training points, and show that methods that do not learn features generalize
better, even when the dimension of the input space is large. For (ii), we show
empirically that learning features can indeed lead to sparse and thereby less
smooth representations of the image predictors. This fact is plausibly
responsible for deteriorating the performance, which is known to be correlated
with smoothness along diffeomorphisms.
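As a rough illustration of setting (i), the sketch below (not the authors' code; the teacher construction, widths, learning rate, and step count are assumptions chosen for the example) contrasts lazy training, i.e. ridge regression on frozen random ReLU features, with the same one-hidden-layer architecture trained by gradient descent on both layers, for a smooth random target on the unit sphere:

```python
# Illustrative sketch only: "lazy" random-feature regression vs. feature
# learning on a smooth random target defined on the unit sphere S^{d-1}.
# All sizes and hyperparameters are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test, width = 8, 256, 1024, 512

def sample_sphere(n, d):
    """Draw n points uniformly on the unit sphere in R^d."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Smooth, isotropic random teacher (a surrogate for a Gaussian random function).
w_t = sample_sphere(2048, d)
a_t = rng.standard_normal(2048) / np.sqrt(2048)

def target(x):
    return np.cos(x @ w_t.T) @ a_t

X, y = sample_sphere(n_train, d), None
y = target(X := sample_sphere(n_train, d)) if False else target(X)
X_test = sample_sphere(n_test, d)
y_test = target(X_test)

# --- Lazy training: ridge regression on fixed random ReLU features. ---
W = sample_sphere(width, d)                      # frozen first-layer weights
Phi = np.maximum(X @ W.T, 0.0)
Phi_test = np.maximum(X_test @ W.T, 0.0)
ridge = 1e-6
a = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(width), Phi.T @ y)
err_lazy = np.mean((Phi_test @ a - y_test) ** 2)

# --- Feature learning: same architecture, both layers trained by gradient descent. ---
W1 = sample_sphere(width, d)
a1 = rng.standard_normal(width) / np.sqrt(width)
lr, steps = 0.01, 5000
for _ in range(steps):
    pre = X @ W1.T                               # pre-activations, shape (n_train, width)
    h = np.maximum(pre, 0.0)
    g = 2.0 * (h @ a1 - y) / n_train             # d(loss)/d(prediction) for the MSE loss
    grad_a1 = h.T @ g
    grad_W1 = ((g[:, None] * (pre > 0.0)) * a1).T @ X
    a1 -= lr * grad_a1
    W1 -= lr * grad_W1
h_test = np.maximum(X_test @ W1.T, 0.0)
err_feat = np.mean((h_test @ a1 - y_test) ** 2)

print(f"test MSE  lazy: {err_lazy:.4f}   feature learning: {err_feat:.4f}")
```

At this toy scale the two errors are typically close; the paper's claim concerns how the test error scales as the number of training points grows, which would require sweeping n_train (and d) rather than a single fit.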
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - How deep convolutional neural networks lose spatial information with
training [0.7328100870402177]
We show how stability to image diffeomorphisms is achieved by spatial pooling in the first half of the net, and by channel pooling in the second half.
We find that the increased sensitivity to noise is due to the perturbing noise piling up during pooling, after being rectified by ReLU units.
arXiv Detail & Related papers (2022-10-04T10:21:03Z) - A Theoretical Analysis on Feature Learning in Neural Networks: Emergence
from Inputs and Advantage over Fixed Features [18.321479102352875]
An important characteristic of neural networks is their ability to learn representations of the input data with effective features for prediction.
We consider learning problems motivated by practical data, where the labels are determined by a set of class-relevant patterns and the inputs are generated from these.
We prove that neural networks trained by gradient descent can succeed on these problems.
arXiv Detail & Related papers (2022-06-03T17:49:38Z) - Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Fair Interpretable Learning via Correction Vectors [68.29997072804537]
We propose a new framework for fair representation learning centered around the learning of "correction vectors".
The corrections are then simply added to the original features, and can therefore be analyzed as an explicit penalty or bonus applied to each feature.
We show experimentally that a fair representation learning problem constrained in such a way does not impact performance.
arXiv Detail & Related papers (2022-01-17T10:59:33Z) - Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z) - Toward Understanding the Feature Learning Process of Self-supervised
Contrastive Learning [43.504548777955854]
We study how contrastive learning learns the feature representations for neural networks by analyzing its feature learning process.
We prove that contrastive learning with ReLU networks learns the desired sparse features when proper augmentations are adopted.
arXiv Detail & Related papers (2021-05-31T16:42:09Z) - Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z) - Malicious Network Traffic Detection via Deep Learning: An Information
Theoretic View [0.0]
We study how homeomorphism affects the learned representation of a malware traffic dataset.
Our results suggest that although the details of the learned representations and the specific coordinate system defined over the manifold of all parameters differ slightly, the functional approximations are the same.
arXiv Detail & Related papers (2020-09-16T15:37:44Z)