ReSSL: Relational Self-Supervised Learning with Weak Augmentation
- URL: http://arxiv.org/abs/2107.09282v1
- Date: Tue, 20 Jul 2021 06:53:07 GMT
- Title: ReSSL: Relational Self-Supervised Learning with Weak Augmentation
- Authors: Mingkai Zheng, Shan You, Fei Wang, Chen Qian, Changshui Zhang,
Xiaogang Wang, Chang Xu
- Abstract summary: Self-supervised learning has achieved great success in learning visual representations without data annotations.
We introduce a novel relational SSL paradigm that learns representations by modeling the relationship between different instances.
Our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency.
- Score: 68.47096022526927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL), including the mainstream contrastive learning,
has achieved great success in learning visual representations without data
annotations. However, most methods focus mainly on instance-level information
(i.e., the different augmented images of the same instance should have the
same features or cluster into the same class) and pay little attention to the
relationships between different instances. In this paper, we introduce a novel
SSL paradigm, termed the relational self-supervised learning (ReSSL) framework,
which learns representations by modeling the relationships between different
instances. Specifically, our proposed method employs the sharpened distribution
of pairwise similarities among different instances as a relation metric, which
is then used to match the feature embeddings of different augmentations.
Moreover, to boost performance, we argue that weak augmentations matter for
representing a more reliable relation, and we leverage a momentum strategy for
practical efficiency. Experimental results show that our proposed ReSSL
significantly outperforms previous state-of-the-art algorithms in terms of both
performance and training efficiency. Code is available at
https://github.com/KyleZheng1997/ReSSL.
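To make the relation-matching idea in the abstract concrete, here is a minimal PyTorch-style sketch of a relational consistency loss, assuming a MoCo-style momentum encoder and memory queue. The function name, tensor shapes, and default temperatures are illustrative assumptions, not the authors' exact implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def relational_loss(student_emb, teacher_emb, queue, tau_s=0.1, tau_t=0.04):
    """Match the student's similarity distribution over a memory queue to the
    teacher's sharpened distribution; tau_t < tau_s sharpens the target.

    student_emb: (B, D) embeddings of strongly augmented views (online encoder)
    teacher_emb: (B, D) embeddings of weakly augmented views (momentum encoder)
    queue:       (K, D) memory bank of past teacher embeddings
    """
    student_emb = F.normalize(student_emb, dim=1)
    teacher_emb = F.normalize(teacher_emb, dim=1)
    queue = F.normalize(queue, dim=1)

    # Pairwise cosine similarities of each view against the queue: (B, K)
    logits_s = student_emb @ queue.t() / tau_s
    logits_t = teacher_emb @ queue.t() / tau_t

    # The sharpened teacher relation is the (stop-gradient) target distribution
    p_t = F.softmax(logits_t, dim=1).detach()
    log_p_s = F.log_softmax(logits_s, dim=1)

    # Cross-entropy between the two relation distributions
    return -(p_t * log_p_s).sum(dim=1).mean()

# Toy usage with random embeddings
loss = relational_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(1024, 128))
print(loss.item())
```

Using a lower teacher temperature than student temperature is what "sharpens" the target relation, so the student is pulled toward the teacher's most confident similarity structure rather than a flat distribution.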
Related papers
- Augmentations vs Algorithms: What Works in Self-Supervised Learning [9.194402355758164]
We study the relative effects of data augmentations, pretraining algorithms, and model architectures in Self-Supervised Learning (SSL).
We propose a new framework which unifies many seemingly disparate SSL methods into a single shared template.
arXiv Detail & Related papers (2024-03-08T23:42:06Z)
- A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z)
- Enlarging Instance-specific and Class-specific Information for Open-set Action Recognition [47.69171542776917]
We find that features with richer semantic diversity can significantly improve the open-set performance under the same uncertainty scores.
A novel Prototypical Similarity Learning (PSL) framework is proposed to keep the instance variance within the same class and thereby retain more instance-specific (IS) information.
arXiv Detail & Related papers (2023-03-25T04:07:36Z)
- Beyond Instance Discrimination: Relation-aware Contrastive Self-supervised Learning [75.46664770669949]
We present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations.
Our ReCo consistently achieves remarkable performance improvements.
arXiv Detail & Related papers (2022-11-02T03:25:28Z)
- On Higher Adversarial Susceptibility of Contrastive Self-Supervised Learning [104.00264962878956]
Contrastive self-supervised learning (CSL) has managed to match or surpass the performance of supervised learning in image and video classification.
It is still largely unknown whether the nature of the representations induced by the two learning paradigms is similar.
We identify the uniform distribution of data representations over a unit hypersphere in the CSL representation space as the key contributor to this higher adversarial susceptibility (the sketch after this entry shows one common way to quantify such uniformity).
We devise strategies that are simple, yet effective in improving model robustness with CSL training.
arXiv Detail & Related papers (2022-07-22T03:49:50Z)
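As referenced in the entry above, here is a minimal sketch of one standard way to quantify how uniformly representations spread over the unit hypersphere: the log of the mean pairwise Gaussian potential proposed by Wang & Isola (2020). This is an illustrative measure, not necessarily the exact one used in the paper above; the function name and temperature t are assumptions.

```python
import torch
import torch.nn.functional as F

def uniformity(embeddings, t=2.0):
    """Log of the mean pairwise Gaussian potential over L2-normalized
    embeddings (Wang & Isola, 2020); lower values mean a more uniform
    spread over the unit hypersphere."""
    z = F.normalize(embeddings, dim=1)
    sq_dists = torch.pdist(z, p=2).pow(2)  # squared pairwise distances
    return sq_dists.mul(-t).exp().mean().log()

# Toy usage: normalized random Gaussian embeddings spread fairly uniformly
print(uniformity(torch.randn(512, 128)).item())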
- Weak Augmentation Guided Relational Self-Supervised Learning [80.0680103295137]
We introduce a novel relational self-supervised learning (ReSSL) framework that learns representations by modeling the relationship between different instances.
Our proposed method employs the sharpened distribution of pairwise similarities among different instances as a relation metric.
Experimental results show that our proposed ReSSL substantially outperforms the state-of-the-art methods across different network architectures.
arXiv Detail & Related papers (2022-03-16T16:14:19Z)
- Memory-Augmented Relation Network for Few-Shot Learning [114.47866281436829]
In this work, we investigate a new metric-learning method, Memory-Augmented Relation Network (MRN).
In MRN, we select samples that are visually similar from the working context and perform weighted information propagation, attentively aggregating helpful information from the selected samples to enhance the query representation (a sketch of this aggregation step follows this entry).
We empirically demonstrate that MRN yields significant improvement over its predecessor and achieves competitive or even better performance than other few-shot learning approaches.
arXiv Detail & Related papers (2020-05-09T10:09:13Z)
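Below is a minimal sketch of the attention-weighted aggregation idea described in the MRN summary, assuming cosine-similarity attention over a memory of context embeddings. All names, shapes, and the residual combination are hypothetical illustrations, not the paper's actual architecture.

```python
import torch
import torch.nn.functional as F

def aggregate_from_memory(query, memory, tau=0.1):
    """Enhance query representations by attentively aggregating information
    from visually similar entries in a working-context memory.

    query:  (B, D) query embeddings
    memory: (M, D) embeddings of context samples
    """
    q = F.normalize(query, dim=1)
    m = F.normalize(memory, dim=1)

    # Cosine-similarity attention: visually similar entries get larger weights
    attn = F.softmax(q @ m.t() / tau, dim=1)  # (B, M)

    # Weighted information propagation from the highly weighted samples
    aggregated = attn @ memory                # (B, D)

    # Residual combination with the original query representation (an assumption)
    return query + aggregated

# Toy usage
enhanced = aggregate_from_memory(torch.randn(4, 64), torch.randn(100, 64))
print(enhanced.shape)  # torch.Size([4, 64])
```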