Whitening for Self-Supervised Representation Learning
- URL: http://arxiv.org/abs/2007.06346v5
- Date: Fri, 14 May 2021 15:10:06 GMT
- Title: Whitening for Self-Supervised Representation Learning
- Authors: Aleksandr Ermolov, Aliaksandr Siarohin, Enver Sangineto, Nicu Sebe
- Abstract summary: We propose a new loss function for self-supervised representation learning (SSL) based on the whitening of latent-space features.
Our solution does not require asymmetric networks and it is conceptually simple.
- Score: 129.57407186848917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most of the current self-supervised representation learning (SSL) methods are
based on the contrastive loss and the instance-discrimination task, where
augmented versions of the same image instance ("positives") are contrasted with
instances extracted from other images ("negatives"). For the learning to be
effective, many negatives should be compared with a positive pair, which is
computationally demanding. In this paper, we propose a different direction and
a new loss function for SSL, which is based on the whitening of the
latent-space features. The whitening operation has a "scattering" effect on the
batch samples, avoiding degenerate solutions where all the sample
representations collapse to a single point. Our solution does not require
asymmetric networks and it is conceptually simple. Moreover, since negatives
are not needed, we can extract multiple positive pairs from the same image
instance. The source code of the method and of all the experiments is available
at: https://github.com/htdt/self-supervised.
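To make the idea concrete, the sketch below is a minimal, hedged PyTorch approximation of such a whitening-based loss: the embeddings of two augmented views are batch-whitened (zero mean, identity covariance via a Cholesky factor of the covariance matrix), and positive pairs are then pulled together with an MSE on the normalized whitened features. The function names (`whiten`, `whitening_mse_loss`), the joint whitening of the concatenated batch, and the epsilon regularization are illustrative assumptions, not the paper's exact W-MSE implementation, which is available in the repository linked above.
```python
# Minimal sketch of a whitening-based SSL loss in PyTorch. Illustrative
# approximation only, NOT the authors' exact W-MSE implementation;
# the official code is at https://github.com/htdt/self-supervised.
import torch
import torch.nn.functional as F


def whiten(z: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Whiten a batch of features z (N x D): zero mean, identity covariance."""
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / (z.shape[0] - 1)            # D x D sample covariance
    # Cholesky whitening: cov = L L^T, so applying W = L^{-1} gives cov(Wz) = I.
    L = torch.linalg.cholesky(cov + eps * torch.eye(z.shape[1], device=z.device))
    return torch.linalg.solve_triangular(L, z.T, upper=False).T


def whitening_mse_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """MSE between whitened, unit-normalized embeddings of two positive views.

    Whitening scatters the batch (identity covariance), which rules out the
    degenerate solution where all representations collapse to a single point,
    while the MSE term pulls the two views of each image together.
    """
    w1, w2 = whiten(torch.cat([z1, z2], dim=0)).chunk(2, dim=0)
    return F.mse_loss(F.normalize(w1, dim=1), F.normalize(w2, dim=1))


# Hypothetical usage: z1, z2 are encoder outputs for two augmentations
# of the same batch of images (shape: batch_size x feature_dim).
z1, z2 = torch.randn(128, 64), torch.randn(128, 64)
loss = whitening_mse_loss(z1, z2)
```
Because the whitened batch has identity covariance by construction, a fully collapsed solution is unreachable, which is what removes the need for negative samples in this family of methods.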
Related papers
- Whitening-based Contrastive Learning of Sentence Embeddings [61.38955786965527]
This paper presents a whitening-based contrastive learning method for sentence embedding learning (WhitenedCSE).
We find that whitening and contrastive learning are not totally redundant but actually complement each other, owing to their different uniformity mechanisms.
arXiv Detail & Related papers (2023-05-28T14:58:10Z) - Soft Neighbors are Positive Supporters in Contrastive Visual Representation Learning [35.53729744330751]
Contrastive learning methods train visual encoders by comparing views from one instance to others.
This binary instance discrimination is studied extensively to improve feature representations in self-supervised learning.
In this paper, we rethink the instance discrimination framework and find the binary instance labeling insufficient to measure correlations between different samples.
arXiv Detail & Related papers (2023-03-30T04:22:07Z) - An Investigation into Whitening Loss for Self-supervised Learning [62.157102463386394]
A desirable objective in self-supervised learning (SSL) is to avoid feature collapse.
We propose a framework with an informative indicator to analyze whitening loss.
Based on our analysis, we propose channel whitening with random group partition (CW-RGP)
arXiv Detail & Related papers (2022-10-07T14:43:29Z) - Non-contrastive representation learning for intervals from well logs [58.70164460091879]
The representation learning problem in the oil & gas industry aims to construct a model that provides a representation based on logging data for a well interval.
One of the possible approaches is self-supervised learning (SSL)
We are the first to introduce non-contrastive SSL for well-logging data.
arXiv Detail & Related papers (2022-09-28T13:27:10Z) - Siamese Prototypical Contrastive Learning [24.794022951873156]
Contrastive Self-supervised Learning (CSL) is a practical solution that learns meaningful visual representations from massive data in an unsupervised manner.
In this paper, we tackle this problem by introducing a simple but effective contrastive learning framework.
The key insight is to employ a siamese-style metric loss to match intra-prototype features while increasing the distance between inter-prototype features.
arXiv Detail & Related papers (2022-08-18T13:25:30Z) - Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to tackle this issue.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z) - Understanding self-supervised Learning Dynamics without Contrastive Pairs [72.1743263777693]
Contrastive approaches to self-supervised learning (SSL) learn representations by minimizing the distance between two augmented views of the same data point.
Non-contrastive methods such as BYOL and SimSiam show remarkable performance without negative pairs.
We study the nonlinear learning dynamics of non-contrastive SSL in simple linear networks.
arXiv Detail & Related papers (2021-02-12T22:57:28Z) - SCE: Scalable Network Embedding from Sparsest Cut [20.08464038805681]
Large-scale network embedding aims to learn a latent representation for each node in an unsupervised manner.
A key to the success of such contrastive learning methods is how positive and negative samples are drawn.
In this paper, we propose SCE for unsupervised network embedding only using negative samples for training.
arXiv Detail & Related papers (2020-06-30T03:18:15Z)