ISD: Self-Supervised Learning by Iterative Similarity Distillation
- URL: http://arxiv.org/abs/2012.09259v1
- Date: Wed, 16 Dec 2020 20:50:17 GMT
- Title: ISD: Self-Supervised Learning by Iterative Similarity Distillation
- Authors: Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Vipin Pillai, Paolo
Favaro, and Hamed Pirsiavash
- Abstract summary: We introduce a self-supervised learning algorithm that uses a soft similarity for the negative images rather than a binary distinction between positive and negative pairs.
Our method achieves better results than state-of-the-art models such as BYOL and MoCo in transfer learning settings.
- Score: 39.60300771234578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, contrastive learning has achieved great results in self-supervised
learning, where the main idea is to push two augmentations of an image
(positive pairs) closer compared to other random images (negative pairs). We
argue that not all random images are equal. Hence, we introduce a self-supervised
learning algorithm where we use a soft similarity for the negative
images rather than a binary distinction between positive and negative pairs. We
iteratively distill a slowly evolving teacher model to the student model by
capturing the similarity of a query image to some random images and
transferring that knowledge to the student. We argue that our method is less
constrained compared to recent contrastive learning methods, so it can learn
better features. Specifically, our method should handle unbalanced and
unlabeled data better than existing contrastive learning methods, because the
randomly chosen negative set might include many samples that are semantically
similar to the query image. In this case, our method labels them as highly
similar while standard contrastive methods label them as negative pairs. Our
method achieves better results than state-of-the-art models such as BYOL
and MoCo in transfer learning settings. We also show that our method performs
better in settings where the unlabeled data is unbalanced. Our code is
available here: https://github.com/UMBCvision/ISD.
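The core idea of the abstract can be illustrated with a short sketch. The PyTorch code below is a hedged illustration, not the repository's exact implementation: the function names, the shared feature bank, the temperatures, and the EMA momentum are assumptions chosen for clarity. It shows a slowly evolving teacher producing a soft similarity distribution over a bank of random features and a student being trained to match that distribution instead of a binary positive/negative split.

```python
import torch
import torch.nn.functional as F

def isd_loss(student_feat, teacher_feat, bank, t_s=0.1, t_t=0.02):
    """Similarity distillation: make the student's similarity distribution
    over a bank of random (anchor) features match the teacher's soft targets.
    Features are assumed L2-normalized; temperatures are illustrative."""
    s_logits = student_feat @ bank.T / t_s         # (B, K) student query vs. bank
    t_logits = teacher_feat @ bank.T / t_t         # (B, K) teacher query vs. bank
    targets = F.softmax(t_logits, dim=1).detach()  # soft, non-binary "labels"
    return F.kl_div(F.log_softmax(s_logits, dim=1), targets,
                    reduction="batchmean")

@torch.no_grad()
def momentum_update(teacher, student, m=0.99):
    """Slowly evolving teacher: exponential moving average of student weights."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(m).add_(p_s.data, alpha=1 - m)
```

In a full training loop one would feed one augmentation of each image to the student and another to the teacher, maintain the bank as a queue of past teacher features (as in momentum-contrast style methods), and call momentum_update after every optimizer step.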
Related papers
- LeOCLR: Leveraging Original Images for Contrastive Learning of Visual Representations [4.680881326162484]
Contrastive instance discrimination methods outperform supervised learning in downstream tasks such as image classification and object detection.
A common augmentation technique in contrastive learning is random cropping followed by resizing.
We introduce LeOCLR, a framework that employs a novel instance discrimination approach and an adapted loss function.
arXiv Detail & Related papers (2024-03-11T15:33:32Z)
- Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning [0.22940141855172028]
We propose a novel formulation of contrastive learning using semantic similarity between instances.
Our training objective is a soft contrastive one that brings the positives closer and estimates a continuous distribution to push or pull negative instances.
We show that SCE reaches state-of-the-art results for pretraining video representation and that the learned representation can generalize to video downstream tasks.
arXiv Detail & Related papers (2022-12-21T16:56:55Z)
- Non-contrastive representation learning for intervals from well logs [58.70164460091879]
The representation learning problem in the oil & gas industry aims to construct a model that provides a representation based on logging data for a well interval.
One possible approach is self-supervised learning (SSL).
We are the first to introduce non-contrastive SSL for well-logging data.
arXiv Detail & Related papers (2022-09-28T13:27:10Z)
- Exploring Negatives in Contrastive Learning for Unpaired Image-to-Image Translation [12.754320302262533]
We introduce a new negative Pruning technology for Unpaired image-to-image Translation (PUT) by sparsifying and ranking the patches.
The proposed algorithm is efficient, flexible and enables the model to learn essential information between corresponding patches stably.
arXiv Detail & Related papers (2022-04-23T08:31:18Z)
- With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations [87.72779294717267]
Using the nearest-neighbor as positive in contrastive losses improves performance significantly on ImageNet classification.
We demonstrate empirically that our method is less reliant on complex data augmentations.
arXiv Detail & Related papers (2021-04-29T17:56:08Z)
- Boosting Contrastive Self-Supervised Learning with False Negative Cancellation [40.71224235172881]
A fundamental problem in contrastive learning is mitigating the effects of false negatives.
We propose novel approaches to identify false negatives, as well as two strategies to mitigate their effect.
Our method exhibits consistent improvements over existing contrastive learning-based methods.
arXiv Detail & Related papers (2020-11-23T22:17:21Z)
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead. (A generic sketch of similarity-based negative reweighting appears after this list.)
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
- Delving into Inter-Image Invariance for Unsupervised Visual Representations [108.33534231219464]
We present a study to better understand the role of inter-image invariance learning.
Online labels converge faster than offline labels.
Semi-hard negative samples are more reliable and unbiased than hard negative samples.
arXiv Detail & Related papers (2020-08-26T17:44:23Z)
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning [108.999497144296]
Recently advanced unsupervised learning approaches use the siamese-like framework to compare two "views" from the same image for learning representations.
This work aims to incorporate a notion of distance on the label space into unsupervised learning, letting the model be aware of the soft degree of similarity between positive or negative pairs.
Despite its conceptual simplicity, we show empirically that with the proposed solution, Unsupervised image mixtures (Un-Mix), we can learn subtler, more robust, and more generalized representations from the transformed input and the corresponding new label space.
arXiv Detail & Related papers (2020-03-11T17:59:04Z)
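The Hard Negative Samples entry above reweights negatives by how difficult they are, which contrasts with ISD's soft teacher targets. The code below is a generic, hedged sketch of similarity-based negative reweighting in an InfoNCE-style loss; the softmax weighting, temperature tau, and concentration beta are illustrative assumptions, not the cited paper's exact estimator.

```python
import torch
import torch.nn.functional as F

def hard_negative_nce(query, positive, negatives, tau=0.1, beta=1.0):
    """InfoNCE-style loss with negatives reweighted by their similarity to the
    query, so harder negatives contribute more. Inputs are assumed
    L2-normalized: query (B, D), positive (B, D), negatives (K, D)."""
    pos = torch.exp((query * positive).sum(dim=1) / tau)   # (B,)
    neg_sim = query @ negatives.T                          # (B, K) cosine sims
    weights = F.softmax(beta * neg_sim, dim=1)             # emphasize hard negatives
    # Weighted sum replaces the usual uniform sum over negatives;
    # scaling by K keeps the magnitude comparable to standard InfoNCE.
    neg = negatives.shape[0] * (weights * torch.exp(neg_sim / tau)).sum(dim=1)
    return -torch.log(pos / (pos + neg)).mean()
```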
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.