Toward Understanding the Feature Learning Process of Self-supervised
Contrastive Learning
- URL: http://arxiv.org/abs/2105.15134v1
- Date: Mon, 31 May 2021 16:42:09 GMT
- Title: Toward Understanding the Feature Learning Process of Self-supervised
Contrastive Learning
- Authors: Zixin Wen, Yuanzhi Li
- Abstract summary: We study how contrastive learning learns the feature representations for neural networks by analyzing its feature learning process.
We prove that contrastive learning using ReLU networks provably learns the desired sparse features if proper augmentations are adopted.
- Score: 43.504548777955854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How can neural networks trained by contrastive learning extract features from
the unlabeled data? Why does contrastive learning usually need much stronger
data augmentations than supervised learning to ensure good representations?
These questions involve both the optimization and statistical aspects of deep
learning, but can hardly be answered by analyzing supervised learning, where
the target functions are the highest pursuit. Indeed, in self-supervised
learning, it is inevitable to relate the optimization/generalization of
neural networks to how they can encode the latent structures in the data, which
we refer to as the \textit{feature learning process}.
In this work, we formally study how contrastive learning learns the feature
representations for neural networks by analyzing its feature learning process.
We consider the case where our data are comprised of two types of features: the
more semantically aligned sparse features which we want to learn from, and the
other dense features we want to avoid. Theoretically, we prove that contrastive
learning using \textbf{ReLU} networks provably learns the desired sparse
features if proper augmentations are adopted. We present an underlying
principle called \textbf{feature decoupling} to explain the effects of
augmentations, where we theoretically characterize how augmentations can reduce
the correlations of dense features between positive samples while keeping the
correlations of sparse features intact, thereby forcing the neural networks to
learn from the self-supervision of sparse features. Empirically, we verified
that the feature decoupling principle matches the underlying mechanism of
contrastive learning in practice.
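The feature decoupling mechanism described in the abstract can be illustrated with a small sketch. The data model, dimensions, and InfoNCE-style loss below are our own illustrative assumptions, not the paper's actual construction: each input concatenates a sparse semantic block with a dense nuisance block, and the augmentation resamples only the dense block, so the two views of a positive pair stay correlated only through their sparse features.

```python
# Toy illustration (our assumption, not the paper's construction): inputs carry a
# sparse semantic block and a dense nuisance block; the augmentation resamples
# only the dense block, so the two views of a positive pair remain correlated
# only through the sparse features. A one-hidden-layer ReLU encoder is trained
# with a standard InfoNCE-style contrastive loss.
import torch
import torch.nn.functional as F

d_sparse, d_dense, d_out, batch = 32, 128, 64, 256  # illustrative sizes

def sample_batch():
    sparse = (torch.rand(batch, d_sparse) < 0.1).float()  # sparse semantic features
    dense = torch.randn(batch, d_dense)                   # dense nuisance features
    return torch.cat([sparse, dense], dim=1)

def augment(x):
    # Keep the sparse block, resample the dense block: across the two augmented
    # views the dense features become independent ("decoupled") while the
    # sparse features remain identical.
    return torch.cat([x[:, :d_sparse], torch.randn(x.size(0), d_dense)], dim=1)

encoder = torch.nn.Sequential(
    torch.nn.Linear(d_sparse + d_dense, d_out),
    torch.nn.ReLU(),
)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def info_nce(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # pairwise view similarities
    labels = torch.arange(z1.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

for step in range(1000):
    x = sample_batch()
    loss = info_nce(encoder(augment(x)), encoder(augment(x)))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this toy setup the only signal shared by the two views is the sparse block, so minimizing the contrastive loss pushes the ReLU encoder to rely on the sparse features rather than the dense ones, which is the intuition behind the feature decoupling principle.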
Related papers
- Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron [3.069335774032178]
We use a dataset-process approach to derive flow equations describing learning.
We characterize the effects of the learning rule (supervised or reinforcement learning, SL/RL) and input-data distribution on the perceptron's learning curve.
This approach points a way toward analyzing learning dynamics for more-complex circuit architectures.
arXiv Detail & Related papers (2024-09-05T17:58:28Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Learning sparse features can lead to overfitting in neural networks [9.2104922520782]
We show that feature learning can perform worse than lazy training.
Although sparsity is known to be essential for learning anisotropic data, it is detrimental when the target function is constant or smooth.
arXiv Detail & Related papers (2022-06-24T14:26:33Z)
- A Theoretical Analysis on Feature Learning in Neural Networks: Emergence from Inputs and Advantage over Fixed Features [18.321479102352875]
An important characteristic of neural networks is their ability to learn representations of the input data with effective features for prediction.
We consider learning problems motivated by practical data, where the labels are determined by a set of class relevant patterns and the inputs are generated from these.
We prove that neural networks trained by gradient descent can succeed on these problems.
arXiv Detail & Related papers (2022-06-03T17:49:38Z)
- Feature Forgetting in Continual Representation Learning [48.89340526235304]
Representations have been shown not to suffer from "catastrophic forgetting" even in plain continual learning, but little more is known about their characteristics.
We devise a protocol for evaluating representation in continual learning, and then use it to present an overview of the basic trends of continual representation learning.
To study the feature forgetting problem, we create a synthetic dataset to identify and visualize the prevalence of feature forgetting in neural networks.
arXiv Detail & Related papers (2022-05-26T13:38:56Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- The Connection Between Approximation, Depth Separation and Learnability in Neural Networks [70.55686685872008]
We study the connection between learnability and approximation capacity.
We show that learnability with deep networks of a target function depends on the ability of simpler classes to approximate the target.
arXiv Detail & Related papers (2021-01-31T11:32:30Z)
- Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View [0.0]
We study how homeomorphism affects learned representation of a malware traffic dataset.
Our results suggest that although the details of learned representations and the specific coordinate system defined over the manifold of all parameters differ slightly, the functional approximations are the same.
arXiv Detail & Related papers (2020-09-16T15:37:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.