Similarity Contrastive Estimation for Self-Supervised Soft Contrastive Learning
- URL: http://arxiv.org/abs/2111.14585v1
- Date: Mon, 29 Nov 2021 15:19:15 GMT
- Title: Similarity Contrastive Estimation for Self-Supervised Soft Contrastive Learning
- Authors: Julien Denize, Jaonary Rabarisoa, Astrid Orcesi, Romain Hérault, Stéphane Canu
- Abstract summary: We argue that a good data representation contains the relations, or semantic similarity, between the instances.
We propose a novel formulation of contrastive learning using semantic similarity between instances called Similarity Contrastive Estimation (SCE).
Our training objective can be considered as soft contrastive learning.
- Score: 0.41998444721319206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive representation learning has proven to be an effective
self-supervised learning method. Most successful approaches are based on the
Noise Contrastive Estimation (NCE) paradigm and consider different views of an
instance as positives and other instances as noise that positives should be
contrasted with. However, all instances in a dataset are drawn from the same
distribution and share underlying semantic information that should not be
considered as noise. We argue that a good data representation contains the
relations, or semantic similarity, between the instances. Contrastive learning
implicitly learns relations but considers the negatives as noise, which is
harmful to the quality of the learned relations and therefore to the quality of
the representation. To circumvent this issue, we propose a novel formulation of
contrastive learning using semantic similarity between instances called
Similarity Contrastive Estimation (SCE). Our training objective can be
considered as soft contrastive learning. Instead of hard-classifying positives
and negatives, we propose a continuous distribution to push or pull instances
based on their semantic similarities. The target similarity distribution is
computed from weakly augmented instances and sharpened to eliminate irrelevant
relations. Each weakly augmented instance is paired with a strongly augmented
instance that is contrasted with its positive while maintaining the target similarity
distribution. Experimental results show that our proposed SCE outperforms its
baselines MoCov2 and ReSSL on various datasets and is competitive with
state-of-the-art algorithms on the ImageNet linear evaluation protocol.
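To make the idea concrete, below is a minimal PyTorch-style sketch of such a soft contrastive objective, written from the abstract alone rather than from the authors' released code. It assumes in-batch negatives only, and the function and hyper-parameter names (sce_style_loss, tau, tau_t, lambda_) are illustrative; the actual method also relies on details the abstract does not specify, such as a momentum encoder and a memory buffer.

    # Illustrative sketch of a soft contrastive (SCE-style) objective, inferred
    # from the abstract; not the authors' reference implementation.
    # Assumptions: in-batch negatives only; tau, tau_t, lambda_ are hypothetical names.
    import torch
    import torch.nn.functional as F

    def sce_style_loss(z_strong, z_weak, tau=0.1, tau_t=0.05, lambda_=0.5):
        """z_strong, z_weak: (N, D) embeddings of strongly / weakly augmented views."""
        z_strong = F.normalize(z_strong, dim=1)
        z_weak = F.normalize(z_weak, dim=1)
        n = z_strong.size(0)
        eye = torch.eye(n, device=z_strong.device, dtype=torch.bool)

        # Target distribution: similarities among weakly augmented views (self excluded),
        # sharpened by the temperature tau_t, then mixed with the one-hot positive.
        sim_weak = (z_weak @ z_weak.t()) / tau_t
        soft_rel = F.softmax(sim_weak.masked_fill(eye, float("-inf")), dim=1)
        target = lambda_ * eye.float() + (1.0 - lambda_) * soft_rel

        # Prediction: each strongly augmented view scored against all weak views;
        # the diagonal entry is its positive.
        log_p = F.log_softmax((z_strong @ z_weak.t()) / tau, dim=1)

        # Cross-entropy against the soft target instead of a hard positive/negative split.
        return -(target * log_p).sum(dim=1).mean()

With lambda_ = 1 the target collapses to the one-hot positive and the loss reduces to a standard InfoNCE-style objective, i.e. the hard positive/negative split the paper argues against; lambda_ < 1 keeps part of the target mass on semantically similar instances.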
Related papers
- DenoSent: A Denoising Objective for Self-Supervised Sentence Representation Learning [59.4644086610381]
We propose a novel denoising objective that inherits from another perspective, i.e., the intra-sentence perspective.
By introducing both discrete and continuous noise, we generate noisy sentences and then train our model to restore them to their original form.
Our empirical evaluations demonstrate that this approach delivers competitive results on both semantic textual similarity (STS) and a wide range of transfer tasks.
arXiv Detail & Related papers (2024-01-24T17:48:45Z)
- Soft Neighbors are Positive Supporters in Contrastive Visual Representation Learning [35.53729744330751]
Contrastive learning methods train visual encoders by comparing views from one instance to others.
This binary instance discrimination is studied extensively to improve feature representations in self-supervised learning.
In this paper, we rethink the instance discrimination framework and find the binary instance labeling insufficient to measure correlations between different samples.
arXiv Detail & Related papers (2023-03-30T04:22:07Z)
- Similarity Contrastive Estimation for Image and Video Soft Contrastive Self-Supervised Learning [0.22940141855172028]
We propose a novel formulation of contrastive learning using semantic similarity between instances.
Our training objective is a soft contrastive one that brings the positives closer and estimates a continuous distribution to push or pull negative instances.
We show that SCE reaches state-of-the-art results for pretraining video representation and that the learned representation can generalize to video downstream tasks.
arXiv Detail & Related papers (2022-12-21T16:56:55Z)
- Beyond Instance Discrimination: Relation-aware Contrastive Self-supervised Learning [75.46664770669949]
We present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations.
Our ReCo consistently gains remarkable performance improvements.
arXiv Detail & Related papers (2022-11-02T03:25:28Z)
- Non-contrastive representation learning for intervals from well logs [58.70164460091879]
The representation learning problem in the oil & gas industry aims to construct a model that provides a representation based on logging data for a well interval.
One possible approach is self-supervised learning (SSL).
We are the first to introduce non-contrastive SSL for well-logging data.
arXiv Detail & Related papers (2022-09-28T13:27:10Z)
- Unsupervised Voice-Face Representation Learning by Cross-Modal Prototype Contrast [34.58856143210749]
We present an approach to learn voice-face representations from the talking face videos, without any identity labels.
Previous works employ cross-modal instance discrimination tasks to establish the correlation of voice and face.
We propose cross-modal prototype contrastive learning (CMPC), which takes advantage of contrastive methods and resists the adverse effects of false negatives and deviated positives.
arXiv Detail & Related papers (2022-04-28T07:28:56Z)
- SNCSE: Contrastive Learning for Unsupervised Sentence Embedding with Soft Negative Samples [36.08601841321196]
We propose contrastive learning for unsupervised sentence embedding with soft negative samples.
We show that SNCSE can obtain state-of-the-art performance on semantic textual similarity task.
arXiv Detail & Related papers (2022-01-16T06:15:43Z)
- Robust Contrastive Learning against Noisy Views [79.71880076439297]
We propose a new contrastive loss function that is robust against noisy views.
We show that our approach provides consistent improvements over the state-of-the-art image, video, and graph contrastive learning benchmarks.
arXiv Detail & Related papers (2022-01-12T05:24:29Z)
- A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning [111.05365744744437]
Unsupervised contrastive learning labels crops of the same image as positives, and other image crops as negatives.
In this work, we first prove that for contrastive learning, inaccurate label assignment heavily impairs its generalization for semantic instance discrimination.
Inspired by this theory, we propose a novel self-labeling refinement approach for contrastive learning.
arXiv Detail & Related papers (2021-06-28T14:24:52Z)
- Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination [68.83098015578874]
We integrate between-instance similarity into contrastive learning, not directly by instance grouping, but by cross-level discrimination.
CLD effectively brings unsupervised learning closer to natural data and real-world applications.
It sets a new state of the art on self-supervision, semi-supervision, and transfer learning benchmarks, and beats MoCo v2 and SimCLR on every reported benchmark.
arXiv Detail & Related papers (2020-08-09T21:13:13Z)