Constrained Mean Shift for Representation Learning
- URL: http://arxiv.org/abs/2110.10309v1
- Date: Tue, 19 Oct 2021 23:14:23 GMT
- Title: Constrained Mean Shift for Representation Learning
- Authors: Ajinkya Tejankar, Soroush Abbasi Koohpayegani, and Hamed Pirsiavash
- Abstract summary: We develop a non-contrastive representation learning method that can exploit additional knowledge.
Our main idea is to generalize the mean-shift algorithm by constraining the search space of nearest neighbors.
We show that it is possible to use the noisy constraint across modalities to train self-supervised video models.
- Score: 17.652439157554877
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We are interested in representation learning from labeled or unlabeled data.
Inspired by recent success of self-supervised learning (SSL), we develop a
non-contrastive representation learning method that can exploit additional
knowledge. This additional knowledge may come from annotated labels in the
supervised setting or an SSL model from another modality in the SSL setting.
Our main idea is to generalize the mean-shift algorithm by constraining the
search space of nearest neighbors, resulting in semantically purer
representations. Our method simply pulls the embedding of an instance closer to
its nearest neighbors in a search space that is constrained using the
additional knowledge. By leveraging this non-contrastive loss, we show that the
supervised ImageNet-1k pretraining with our method results in better transfer
performance as compared to the baselines. Further, we demonstrate that our
method is relatively robust to label noise. Finally, we show that it is
possible to use the noisy constraint across modalities to train self-supervised
video models.
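
As a concrete illustration of the constrained nearest-neighbor pull described in the abstract, here is a minimal PyTorch sketch. This is not the authors' released code: the function name `constrained_mean_shift_loss`, the memory-bank layout, and the use of integer labels as constraint keys are illustrative assumptions; per the abstract, the constraint could equally come from an SSL model in another modality.

```python
import torch
import torch.nn.functional as F

def constrained_mean_shift_loss(query, bank_embs, bank_keys, query_key, k=5):
    """Pull the query embedding toward its k nearest neighbors, searching
    only the memory-bank entries whose constraint key matches the query's
    (e.g., the same annotated label, or a noisy cross-modal pseudo-label).

    query:      (d,) embedding of the current instance
    bank_embs:  (N, d) memory bank of target embeddings
    bank_keys:  (N,) integer constraint key per bank entry
    query_key:  integer constraint key of the current instance
    """
    q = F.normalize(query, dim=0)
    bank = F.normalize(bank_embs, dim=1)

    # Constrain the nearest-neighbor search space using the extra knowledge.
    candidates = bank[bank_keys == query_key]

    # Cosine similarities to the constrained candidates; keep the top-k.
    sims = candidates @ q
    top_sims = sims.topk(min(k, sims.numel())).values

    # Non-contrastive loss: pull toward neighbors, push nothing apart.
    return -top_sims.mean()

# Toy usage with random embeddings and labels as the constraint.
bank = torch.randn(1024, 128)
labels = torch.randint(0, 10, (1024,))
query = torch.randn(128, requires_grad=True)
loss = constrained_mean_shift_loss(query, bank, labels, query_key=3)
loss.backward()
```

In the mean-shift setup the paper builds on, the bank entries would typically come from a momentum encoder and receive no gradient; they are plain tensors here for brevity.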
Related papers
- Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation [27.609748213840138] (2024-05-28)
In this work, we explore the impact of spurious features on Self-Supervised Learning (SSL) for visual representation learning.
We show that commonly used augmentations in SSL can cause undesired invariances in the image space.
We propose LateTVG to remove spurious information from these representations during pre-training, by regularizing later layers of the encoder via pruning.
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936] (2024-02-02)
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
- Semantic Segmentation with Active Semi-Supervised Representation Learning [23.79742108127707] (2022-10-16)
We train an effective semantic segmentation algorithm with significantly less labeled data.
We extend the prior state-of-the-art S4AL algorithm by replacing its mean teacher approach for semi-supervised learning with a self-training approach.
We evaluate our method on the CamVid and CityScapes datasets, the de facto standards for active learning in semantic segmentation.
- Non-contrastive representation learning for intervals from well logs [58.70164460091879] (2022-09-28)
The representation learning problem in the oil & gas industry aims to construct a model that provides a representation based on logging data for a well interval.
One possible approach is self-supervised learning (SSL).
We are the first to introduce non-contrastive SSL for well-logging data.
- Deep Low-Density Separation for Semi-Supervised Classification [0.0] (2022-05-22)
We introduce a novel hybrid method that applies low-density separation to the embedded features.
Our approach effectively classifies thousands of unlabeled users from a relatively small number of hand-classified examples.
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078] (2021-06-10)
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
- Can Semantic Labels Assist Self-Supervised Visual Representation Learning? [194.1681088693248] (2020-11-17)
We present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN).
In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods.
Our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
- Information Bottleneck Constrained Latent Bidirectional Embedding for Zero-Shot Learning [59.58381904522967] (2020-09-16)
We propose a novel embedding based generative model with a tight visual-semantic coupling constraint.
We learn a unified latent space that calibrates the embedded parametric distributions of both visual and semantic spaces.
Our method can be easily extended to transductive ZSL setting by generating labels for unseen images.
- Learning Invariant Representations for Reinforcement Learning without Reconstruction [98.33235415273562] (2020-06-18)
We study how representation learning can accelerate reinforcement learning from rich observations, such as images, without relying either on domain knowledge or pixel-reconstruction.
Bisimulation metrics quantify behavioral similarity between states in continuous MDPs.
We demonstrate the effectiveness of our method at disregarding task-irrelevant information using modified visual MuJoCo tasks.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.