Extending Momentum Contrast with Cross Similarity Consistency
Regularization
- URL: http://arxiv.org/abs/2206.04676v1
- Date: Tue, 7 Jun 2022 20:06:56 GMT
- Title: Extending Momentum Contrast with Cross Similarity Consistency
Regularization
- Authors: Mehdi Seyfi, Amin Banitalebi-Dehkordi, and Yong Zhang
- Abstract summary: We present Extended Momentum Contrast (XMoCo), a self-supervised representation learning method built on the momentum encoder introduced in the MoCo family of methods.
Under the cross consistency regularization rule, we argue that semantic representations associated with any pair of images (positive or negative) should preserve their cross-similarity.
We report competitive performance on the standard ImageNet-1K linear head classification benchmark.
- Score: 5.085461418671174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contrastive self-supervised representation learning methods maximize the
similarity between positive pairs while, at the same time, minimizing the
similarity between negative pairs. However, the interplay among the negative
pairs is generally ignored: these methods provide no mechanism for treating
negative pairs differently according to their specific differences and
similarities. In this paper, we present Extended Momentum Contrast (XMoCo), a
self-supervised representation learning method built on the momentum encoder
introduced in the MoCo family of methods. To this end, we introduce a cross
consistency regularization
loss, with which we extend the transformation consistency to dissimilar images
(negative pairs). Under the cross consistency regularization rule, we argue
that semantic representations associated with any pair of images (positive or
negative) should preserve their cross-similarity under pretext transformations.
Moreover, we further regularize the training loss by enforcing a uniform
distribution of similarity over the negative pairs across a batch. The proposed
regularization can easily be added to existing self-supervised learning
algorithms in a plug-and-play fashion. Empirically, we report competitive
performance on the standard ImageNet-1K linear head classification benchmark.
In addition, by transferring the learned representations to common downstream
tasks, we show that XMoCo with the commonly used augmentations can improve
performance on such tasks. We hope the findings of this paper motivate
researchers to take into account the important interplay among negative
examples in self-supervised learning.
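To make the two regularizers described in the abstract concrete, here is a minimal sketch of how they could be implemented on top of a MoCo-style query/momentum encoder pair. It is not the paper's reference implementation: the function name, the MSE and KL choices, and the parameters `lambda_xcon`, `lambda_unif`, and `temperature` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def cross_consistency_losses(q1, q2, k1, k2,
                             lambda_xcon=1.0, lambda_unif=1.0, temperature=0.2):
    """Sketch of the two regularizers described in the abstract (assumed form).

    q1, q2: L2-normalized query-encoder embeddings of two augmented views, (N, D)
    k1, k2: L2-normalized momentum-encoder embeddings of the same views, (N, D)
    """
    # Cross-similarity matrices between every pair of images in the batch,
    # computed independently for each pretext transformation (view).
    sim_view1 = q1 @ k1.t()          # (N, N) pairwise similarities under view 1
    sim_view2 = q2 @ k2.t()          # (N, N) pairwise similarities under view 2

    # Cross consistency: the similarity between any pair of images (positive
    # or negative) should be preserved across pretext transformations.
    xcon_loss = F.mse_loss(sim_view1, sim_view2)

    # Uniformity: push the distribution of similarities over the negatives
    # in the batch (off-diagonal entries) toward a uniform distribution.
    n = sim_view1.size(0)
    neg_mask = ~torch.eye(n, dtype=torch.bool, device=sim_view1.device)
    neg_logits = sim_view1[neg_mask].view(n, n - 1) / temperature
    log_probs = F.log_softmax(neg_logits, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / (n - 1))
    unif_loss = F.kl_div(log_probs, uniform, reduction="batchmean")

    return lambda_xcon * xcon_loss + lambda_unif * unif_loss
```

Consistent with the plug-and-play claim in the abstract, a term like this would be added to an existing InfoNCE-style contrastive objective rather than replacing it.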
Related papers
- Unsupervised Representation Learning by Balanced Self Attention Matching [2.3020018305241337]
We present BAM, a self-supervised method for embedding image features.
We obtain rich representations and avoid feature collapse by minimizing a loss that matches these distributions to their globally balanced and entropy regularized version.
We show competitive performance with leading methods on both semi-supervised and transfer-learning benchmarks.
arXiv Detail & Related papers (2024-08-04T12:52:44Z) - REBAR: Retrieval-Based Reconstruction for Time-series Contrastive Learning [64.08293076551601]
We propose a novel method of using a learned measure for identifying positive pairs.
Our Retrieval-Based Reconstruction (REBAR) measure quantifies the similarity between two sequences.
We show that the REBAR error is a predictor of mutual class membership.
arXiv Detail & Related papers (2023-11-01T13:44:45Z) - STRAPPER: Preference-based Reinforcement Learning via Self-training
Augmentation and Peer Regularization [18.811470043767713]
Preference-based reinforcement learning (PbRL) promises to learn a complex reward function with binary human preference.
We present a self-training method along with our proposed peer regularization, which penalizes the reward model for memorizing uninformative labels and encourages confident predictions.
arXiv Detail & Related papers (2023-07-19T00:31:58Z) - Learning by Sorting: Self-supervised Learning with Group Ordering
Constraints [75.89238437237445]
This paper proposes a new variation of the contrastive learning objective, Group Ordering Constraints (GroCo).
It exploits the idea of sorting the distances of positive and negative pairs and computing the loss based on how many positive pairs have a larger distance than the negative pairs and are thus not ordered correctly (a loose sketch of this ordering idea appears after this list).
We evaluate the proposed formulation on various self-supervised learning benchmarks and show that it not only improves over vanilla contrastive learning but is also competitive with comparable methods in linear probing and outperforms current methods in k-NN performance.
arXiv Detail & Related papers (2023-01-05T11:17:55Z) - Beyond Instance Discrimination: Relation-aware Contrastive
Self-supervised Learning [75.46664770669949]
We present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations.
ReCo consistently achieves notable performance improvements.
arXiv Detail & Related papers (2022-11-02T03:25:28Z) - Fairness and robustness in anti-causal prediction [73.693135253335]
Robustness to distribution shift and fairness have independently emerged as two important desiderata required of machine learning models.
While these two desiderata seem related, the connection between them is often unclear in practice.
By adopting an anti-causal perspective, we draw explicit connections between a common fairness criterion, separation, and a common notion of robustness.
arXiv Detail & Related papers (2022-09-20T02:41:17Z) - Contrasting quadratic assignments for set-based representation learning [5.142415132534397]
The standard approach to contrastive learning is to maximize the agreement between different views of the data.
In this work, we note that the approach of considering individual pairs cannot account for both intra-set and inter-set similarities.
We propose to go beyond contrasting individual pairs of objects by focusing on contrasting objects as sets.
arXiv Detail & Related papers (2022-05-31T14:14:36Z) - Unsupervised Voice-Face Representation Learning by Cross-Modal Prototype
Contrast [34.58856143210749]
We present an approach to learn voice-face representations from talking-face videos, without any identity labels.
Previous works employ cross-modal instance discrimination tasks to establish the correlation of voice and face.
We propose cross-modal prototype contrastive learning (CMPC), which takes advantage of contrastive methods and resists the adverse effects of false negatives and deviated positives.
arXiv Detail & Related papers (2022-04-28T07:28:56Z) - Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z) - Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection method for self-supervised contrastive learning.
We discuss two strategies for explicitly removing the detected false negatives during contrastive learning.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute budget.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
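As referenced in the GroCo entry above, the ordering idea, penalizing positive pairs whose distance exceeds that of negative pairs, can be illustrated with a soft count of ordering violations. This is a loose illustration only: the paper's actual formulation, including how sorting is made differentiable, is not described in the blurb, and the sigmoid relaxation and `temperature` parameter are assumptions.

```python
import torch


def soft_ordering_violations(pos_dist, neg_dist, temperature=0.1):
    """Soft count of positive/negative ordering violations for one anchor.

    pos_dist: (P,) distances from the anchor to its positives
    neg_dist: (N,) distances from the anchor to its negatives
    A pair (p, n) is mis-ordered when pos_dist[p] > neg_dist[n]; the hard
    count is relaxed with a sigmoid so the resulting loss is differentiable.
    """
    # Pairwise differences: positive distance minus negative distance, (P, N).
    diff = pos_dist.unsqueeze(1) - neg_dist.unsqueeze(0)
    # Sigmoid relaxation of the indicator [pos_dist > neg_dist].
    violations = torch.sigmoid(diff / temperature)
    return violations.mean()
```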
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.