InsCLR: Improving Instance Retrieval with Self-Supervision
- URL: http://arxiv.org/abs/2112.01390v1
- Date: Thu, 2 Dec 2021 16:21:27 GMT
- Title: InsCLR: Improving Instance Retrieval with Self-Supervision
- Authors: Zelu Deng, Yujie Zhong, Sheng Guo, Weilin Huang
- Abstract summary: We find that fine-tuning using recently developed self-supervised learning (SSL) methods, such as SimCLR and MoCo, fails to improve the performance of instance retrieval.
To overcome this problem, we propose InsCLR, a new SSL method that builds on instance-level contrast.
InsCLR achieves performance similar to or even better than state-of-the-art SSL methods on instance retrieval.
- Score: 30.36455490844235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work aims at improving instance retrieval with self-supervision. We find
that fine-tuning using recently developed self-supervised learning (SSL)
methods, such as SimCLR and MoCo, fails to improve the performance of instance
retrieval. In this work, we identify that the learnt representations for
instance retrieval should be invariant to large variations in viewpoint,
background, etc., whereas the self-augmented positives used by current SSL
methods cannot provide strong enough signals for learning robust
instance-level representations. To overcome this problem, we propose InsCLR, a
new SSL method that builds on instance-level contrast, to learn
the intra-class invariance by dynamically mining meaningful pseudo positive
samples from both mini-batches and a memory bank during training. Extensive
experiments demonstrate that InsCLR achieves performance similar to or even
better than state-of-the-art SSL methods on instance retrieval. Code is available
at https://github.com/zeludeng/insclr.
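The mechanism described above, contrasting each anchor against pseudo positives mined on the fly from the current mini-batch and a memory bank, can be sketched as follows. This is a minimal PyTorch illustration, not the authors' implementation (see the linked repository for that): the threshold-based positive selection, function name, and hyperparameters are all assumptions.

```python
import torch
import torch.nn.functional as F

def instance_contrastive_loss(anchors, batch_embs, memory_bank,
                              tau=0.07, pos_thresh=0.6):
    """Contrast each anchor against candidates from the mini-batch and a
    memory bank, treating highly similar candidates as pseudo positives.
    Illustrative sketch only; thresholding is a simplifying assumption."""
    anchors = F.normalize(anchors, dim=1)                             # (B, D)
    cands = F.normalize(torch.cat([batch_embs, memory_bank]), dim=1)  # (B+M, D)
    sim = anchors @ cands.t()                                         # cosine similarities

    # Mine pseudo positives: candidates whose similarity to the anchor
    # exceeds a threshold are treated as positives for that anchor.
    with torch.no_grad():
        pos_mask = (sim > pos_thresh).float()                         # (B, B+M)

    logits = sim / tau
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    # Average log-likelihood over each anchor's mined positives
    # (supervised-contrastive style); anchors with no positive are skipped.
    n_pos = pos_mask.sum(dim=1)
    valid = n_pos > 0
    loss = -(log_prob * pos_mask).sum(dim=1) / n_pos.clamp(min=1)
    return loss[valid].mean()
```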
Related papers
- BECLR: Batch Enhanced Contrastive Few-Shot Learning [1.450405446885067]
Unsupervised few-shot learning aspires to close the gap to its supervised counterpart by discarding the reliance on annotations at training time.
We propose a novel Dynamic Clustered mEmory (DyCE) module to promote a highly separable latent representation space.
We then tackle the somewhat overlooked yet critical issue of sample bias at the few-shot inference stage.
arXiv Detail & Related papers (2024-02-04T10:52:43Z)
- Semantic Positive Pairs for Enhancing Visual Representation Learning of Instance Discrimination methods [4.680881326162484]
Self-supervised learning algorithms (SSL) based on instance discrimination have shown promising results.
We propose an approach to identify those images with similar semantic content and treat them as positive instances.
We run experiments on three benchmark datasets: ImageNet, STL-10 and CIFAR-10 with different instance discrimination SSL approaches.
arXiv Detail & Related papers (2023-06-28T11:47:08Z)
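A minimal sketch of the positive-mining idea in the entry above: retrieve each image's nearest neighbours in embedding space and treat them as semantic positives for instance discrimination. The function name, the choice of k, and the cosine-similarity criterion are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def mine_semantic_positives(embeddings, k=5):
    """For each image, return the indices of its k most similar images
    (excluding itself) to use as semantic positive pairs. Sketch only."""
    z = F.normalize(embeddings, dim=1)   # (N, D) L2-normalized embeddings
    sim = z @ z.t()                      # pairwise cosine similarities
    sim.fill_diagonal_(-float('inf'))    # exclude trivial self-matches
    return sim.topk(k, dim=1).indices    # (N, k) indices of positives
```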
- Alleviating Over-smoothing for Unsupervised Sentence Representation [96.19497378628594]
We present a Simple method named Self-Contrastive Learning (SSCL) to alleviate the over-smoothing issue.
Our proposed method is quite simple and can be easily extended to various state-of-the-art models for performance boosting.
arXiv Detail & Related papers (2023-05-09T11:00:02Z)
- Improving Self-Supervised Learning by Characterizing Idealized Representations [155.1457170539049]
We prove necessary and sufficient conditions for any task invariant to given data augmentations.
For contrastive learning, our framework prescribes simple but significant improvements to previous methods.
For non-contrastive learning, we use our framework to derive a simple and novel objective.
arXiv Detail & Related papers (2022-09-13T18:01:03Z)
- On Higher Adversarial Susceptibility of Contrastive Self-Supervised Learning [104.00264962878956]
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification.
It is still largely unknown if the nature of the representation induced by the two learning paradigms is similar.
We identify the uniform distribution of data representations over a unit hypersphere in the CSL representation space as a key contributor to this increased adversarial susceptibility.
We devise strategies that are simple, yet effective in improving model robustness with CSL training.
arXiv Detail & Related papers (2022-07-22T03:49:50Z)
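The "uniform distribution over a unit hypersphere" highlighted in the entry above is commonly quantified with the uniformity metric of Wang & Isola (2020); whether the cited paper uses this exact measure is an assumption here. A minimal sketch:

```python
import torch
import torch.nn.functional as F

def uniformity(z, t=2.0):
    """Wang & Isola (2020) uniformity: log of the mean Gaussian potential
    between pairs of L2-normalized embeddings. More negative values mean
    the embeddings are spread more uniformly over the unit hypersphere."""
    z = F.normalize(z, dim=1)                 # project onto the hypersphere
    sq_dists = torch.pdist(z, p=2).pow(2)     # squared pairwise distances
    return sq_dists.mul(-t).exp().mean().log()
```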
- Improving Self-supervised Learning with Hardness-aware Dynamic Curriculum Learning: An Application to Digital Pathology [2.2742357407157847]
Self-supervised learning (SSL) has recently shown tremendous potential to learn generic visual representations useful for many image analysis tasks.
Existing SSL methods fail to generalize to downstream tasks when the number of labeled training instances is small or when the domain shift between source and target domains is significant.
This paper attempts to improve self-supervised pretrained representations through the lens of curriculum learning.
arXiv Detail & Related papers (2021-08-16T15:44:48Z)
- ReSSL: Relational Self-Supervised Learning with Weak Augmentation [68.47096022526927]
Self-supervised learning has achieved great success in learning visual representations without data annotations.
We introduce a novel relational SSL paradigm that learns representations by modeling the relationship between different instances.
Our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency.
arXiv Detail & Related papers (2021-07-20T06:53:07Z)
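The relational objective described in the ReSSL entry above can be sketched as aligning each instance's similarity distribution over a shared memory queue across a weakly and a strongly augmented view. Below is a minimal PyTorch sketch; the temperatures and the exact target construction are assumptions rather than the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def relational_ssl_loss(z_weak, z_strong, queue,
                        t_teacher=0.04, t_student=0.1):
    """Align the student's relation distribution (strong augmentation)
    with the teacher's sharper one (weak augmentation) over a memory
    queue of past embeddings. Illustrative sketch only."""
    z_weak = F.normalize(z_weak, dim=1)      # (B, D) weak view
    z_strong = F.normalize(z_strong, dim=1)  # (B, D) strong view
    queue = F.normalize(queue, dim=1)        # (M, D) memory queue

    # Each row is one instance's similarity distribution over the queue.
    p_teacher = F.softmax(z_weak @ queue.t() / t_teacher, dim=1).detach()
    log_p_student = F.log_softmax(z_strong @ queue.t() / t_student, dim=1)

    # Cross-entropy between the two relational distributions.
    return -(p_teacher * log_p_student).sum(dim=1).mean()
```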
- On Data-Augmentation and Consistency-Based Semi-Supervised Learning [77.57285768500225]
Recently proposed consistency-based Semi-Supervised Learning (SSL) methods have advanced the state of the art in several SSL tasks.
Despite these advances, the understanding of these methods is still relatively limited.
arXiv Detail & Related papers (2021-01-18T10:12:31Z)
- Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses them to define a new adversarial training algorithm for SSL, denoted CLAE.
arXiv Detail & Related papers (2020-10-22T20:45:10Z)
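CLAE's key ingredient, as summarized above, is generating adversarial examples that maximize the contrastive loss and then training on them. The sketch below shows a one-step (FGSM-style) variant; the step size, pixel clamping range, and in-batch negatives are simplifying assumptions, not necessarily the paper's exact algorithm.

```python
import torch
import torch.nn.functional as F

def adversarial_view(model, x1, x2, eps=0.03, tau=0.5):
    """Produce an adversarially perturbed copy of x1 that increases the
    InfoNCE loss against its positive view x2. Illustrative sketch only."""
    x1 = x1.clone().requires_grad_(True)
    z1 = F.normalize(model(x1), dim=1)
    z2 = F.normalize(model(x2), dim=1)

    # Standard InfoNCE with in-batch negatives: the i-th row's positive
    # is the i-th column.
    logits = z1 @ z2.t() / tau
    labels = torch.arange(x1.size(0), device=x1.device)
    loss = F.cross_entropy(logits, labels)

    # One gradient-sign step that maximizes the contrastive loss,
    # assuming inputs are images normalized to [0, 1].
    grad, = torch.autograd.grad(loss, x1)
    return (x1 + eps * grad.sign()).clamp(0, 1).detach()
```

Training then proceeds on the harder pair (adversarial_view(model, x1, x2), x2) alongside the clean pair.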
This list is automatically generated from the titles and abstracts of the papers on this site.