Beyond Instance Discrimination: Relation-aware Contrastive
Self-supervised Learning
- URL: http://arxiv.org/abs/2211.01796v1
- Date: Wed, 2 Nov 2022 03:25:28 GMT
- Title: Beyond Instance Discrimination: Relation-aware Contrastive
Self-supervised Learning
- Authors: Yifei Zhang, Chang Liu, Yu Zhou, Weiping Wang, Qixiang Ye, Xiangyang
Ji
- Abstract summary: We present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations.
Our ReCo consistently gains remarkable performance improvements.
- Score: 75.46664770669949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive self-supervised learning (CSL) based on instance discrimination
typically attracts positive samples while repelling negatives to learn
representations with pre-defined binary self-supervision. However, vanilla CSL
is inadequate for modeling sophisticated instance relations, which prevents the
learned model from retaining fine-grained semantic structure. On the one hand, samples with
the same semantic category are inevitably pushed away as negatives. On the
other hand, differences among samples cannot be captured. In this paper, we
present relation-aware contrastive self-supervised learning (ReCo) to integrate
instance relations, i.e., global distribution relation and local interpolation
relation, into the CSL framework in a plug-and-play fashion. Specifically, we
align similarity distributions calculated between the positive anchor views and
the negatives at the global level to exploit diverse similarity relations among
instances. Local-level interpolation consistency between the pixel space and
the feature space is applied to quantitatively model the feature differences of
samples with distinct apparent similarities. Through explicit instance
relation modeling, our ReCo avoids irrationally pushing away semantically
identical samples and carves out a well-structured feature space. Extensive
experiments conducted on commonly used benchmarks show that our ReCo
consistently gains remarkable performance improvements.
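To make the two relation terms concrete, here is a minimal PyTorch-style sketch of how the global distribution relation and the local interpolation relation described above could be attached to a standard contrastive setup. The function names, temperature, symmetrized-KL choice, and mixup-style interpolation are illustrative assumptions; the abstract does not give the exact formulation.

```python
# Hedged sketch of the two relation-aware terms described in the abstract.
# Names, temperatures, and the mixup formulation are assumptions for illustration.
import torch
import torch.nn.functional as F


def global_relation_loss(q1, q2, negatives, tau=0.2):
    """Align the similarity distributions that the two positive views of an
    anchor induce over a shared negative set (global distribution relation)."""
    q1 = F.normalize(q1, dim=1)            # (B, D) embeddings of view 1
    q2 = F.normalize(q2, dim=1)            # (B, D) embeddings of view 2
    negs = F.normalize(negatives, dim=1)   # (K, D) negative embeddings / queue
    p1 = F.softmax(q1 @ negs.t() / tau, dim=1)
    p2 = F.softmax(q2 @ negs.t() / tau, dim=1)
    # Symmetrized KL keeps the two similarity distributions consistent.
    return 0.5 * (F.kl_div(p1.log(), p2, reduction="batchmean")
                  + F.kl_div(p2.log(), p1, reduction="batchmean"))


def local_interpolation_loss(encoder, x_a, x_b, lam=None):
    """Require that interpolating two images in pixel space lands near the same
    interpolation of their features (local interpolation relation)."""
    if lam is None:
        lam = torch.rand(()).item()
    z_mix_feat = lam * encoder(x_a) + (1.0 - lam) * encoder(x_b)   # feature-space mix
    z_mix_pix = encoder(lam * x_a + (1.0 - lam) * x_b)             # pixel-space mix
    return F.mse_loss(z_mix_pix, z_mix_feat)
```

In line with the plug-and-play claim, both terms would simply be added, with (hypothetical) weighting coefficients, to the usual InfoNCE objective of the base CSL method.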
Related papers
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Cluster-aware Contrastive Learning for Unsupervised Out-of-distribution
Detection [0.0]
Unsupervised out-of-distribution (OOD) detection aims to separate samples falling outside the distribution of the training data without label information.
We propose Cluster-aware Contrastive Learning (CCL) framework for unsupervised OOD detection, which considers both instance-level and semantic-level information.
arXiv Detail & Related papers (2023-02-06T07:21:03Z) - Extending Momentum Contrast with Cross Similarity Consistency
Regularization [5.085461418671174]
We present Extended Momentum Contrast, a self-supervised representation learning method built upon the momentum-encoder unit proposed in the MoCo family of methods.
Under the cross consistency regularization rule, we argue that semantic representations associated with any pair of images (positive or negative) should preserve their cross-similarity.
We report a competitive performance on the standard ImageNet-1K linear head classification benchmark.
arXiv Detail & Related papers (2022-06-07T20:06:56Z) - Similarity Contrastive Estimation for Self-Supervised Soft Contrastive
Learning [0.41998444721319206]
We argue that a good data representation contains the relations, or semantic similarity, between the instances.
We propose a novel formulation of contrastive learning using semantic similarity between instances, called Similarity Contrastive Estimation (SCE).
Our training objective can be considered as soft contrastive learning.
arXiv Detail & Related papers (2021-11-29T15:19:15Z) - Instance Similarity Learning for Unsupervised Feature Representation [83.31011038813459]
We propose an instance similarity learning (ISL) method for unsupervised feature representation.
We employ Generative Adversarial Networks (GANs) to mine the underlying feature manifold.
Experiments on image classification demonstrate the superiority of our method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-08-05T16:42:06Z) - ReSSL: Relational Self-Supervised Learning with Weak Augmentation [68.47096022526927]
Self-supervised learning has achieved great success in learning visual representations without data annotations.
We introduce a novel relational SSL paradigm that learns representations by modeling the relationship between different instances.
Our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency; a hedged sketch of this relational-consistency idea appears after this list.
arXiv Detail & Related papers (2021-07-20T06:53:07Z) - Exploring Instance Relations for Unsupervised Feature Embedding [12.882929865091423]
In this paper, we explore instance relations including intra-instance multi-view relation and inter-instance relation for unsupervised feature embedding.
The proposed approach, referred to as EIR, is simple-yet-effective and can be easily inserted into existing view-invariant contrastive based methods.
arXiv Detail & Related papers (2021-05-07T15:47:53Z) - Unsupervised Feature Learning by Cross-Level Instance-Group
Discrimination [68.83098015578874]
We integrate between-instance similarity into contrastive learning, not directly by instance grouping, but by cross-level discrimination.
CLD effectively brings unsupervised learning closer to natural data and real-world applications.
It sets a new state-of-the-art on self-supervision, semi-supervision, and transfer learning benchmarks, and beats MoCo v2 and SimCLR on every reported metric.
arXiv Detail & Related papers (2020-08-09T21:13:13Z) - Learning from Aggregate Observations [82.44304647051243]
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances.
We present a general probabilistic framework that accommodates a variety of aggregate observations.
Simple maximum likelihood solutions can be applied to various differentiable models.
arXiv Detail & Related papers (2020-04-14T06:18:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.