Multi-Similarity Contrastive Learning
- URL: http://arxiv.org/abs/2307.02712v1
- Date: Thu, 6 Jul 2023 01:26:01 GMT
- Title: Multi-Similarity Contrastive Learning
- Authors: Emily Mu, John Guttag, Maggie Makar
- Abstract summary: We propose a novel multi-similarity contrastive loss (MSCon) that learns generalizable embeddings by jointly utilizing supervision from multiple metrics of similarity.
Our method automatically learns contrastive similarity weightings based on the uncertainty in the corresponding similarity.
We show empirically that networks trained with MSCon outperform state-of-the-art baselines on in-domain and out-of-domain settings.
- Score: 4.297070083645049
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given a similarity metric, contrastive methods learn a representation in
which examples that are similar are pushed together and examples that are
dissimilar are pulled apart. Contrastive learning techniques have been utilized
extensively to learn representations for tasks ranging from image
classification to caption generation. However, existing contrastive learning
approaches can fail to generalize because they do not take into account the
possibility of different similarity relations. In this paper, we propose a
novel multi-similarity contrastive loss (MSCon) that learns generalizable
embeddings by jointly utilizing supervision from multiple metrics of
similarity. Our method automatically learns contrastive similarity weightings
based on the uncertainty in the corresponding similarity, down-weighting
uncertain tasks and leading to better out-of-domain generalization to new
tasks. We show empirically that networks trained with MSCon outperform
state-of-the-art baselines on in-domain and out-of-domain settings.
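The abstract describes combining supervision from several similarity metrics and down-weighting uncertain ones. The paper's exact objective is not given here, so the following is a minimal numpy sketch assuming an InfoNCE-style contrastive term per similarity metric and Kendall-style uncertainty weighting (the names `info_nce` and `mscon_loss`, and the exp(-s)·L + s weighting form, are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def info_nce(emb, pos_idx, temperature=0.1):
    """InfoNCE loss: anchor i should be most similar to emb[pos_idx[i]]."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T / temperature
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    m = sim.max(axis=1, keepdims=True)                  # stabilise the softmax
    log_prob = sim - m - np.log(np.exp(sim - m).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(emb)), pos_idx].mean()

def mscon_loss(embeddings, pos_indices, log_vars):
    """Sum of per-metric contrastive losses with uncertainty weighting:
    each similarity metric k contributes exp(-log_vars[k]) * L_k + log_vars[k],
    so metrics whose similarity supervision is uncertain get low weight."""
    return sum(np.exp(-s) * info_nce(e, p) + s
               for e, p, s in zip(embeddings, pos_indices, log_vars))
```

In training, the `log_vars` would be learned jointly with the encoder; the additive `+ s` term keeps the model from trivially inflating every uncertainty to zero out the losses.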
Related papers
- Extending Momentum Contrast with Cross Similarity Consistency Regularization [5.085461418671174]
We present Extended Momentum Contrast, a self-supervised representation learning method built on the momentum encoder introduced by the MoCo family of methods.
Under the cross consistency regularization rule, we argue that semantic representations associated with any pair of images (positive or negative) should preserve their cross-similarity.
We report a competitive performance on the standard Imagenet-1K linear head classification benchmark.
arXiv Detail & Related papers (2022-06-07T20:06:56Z) - Unsupervised Voice-Face Representation Learning by Cross-Modal Prototype Contrast [34.58856143210749]
We present an approach to learn voice-face representations from talking face videos, without any identity labels.
Previous works employ cross-modal instance discrimination tasks to establish the correlation of voice and face.
We propose cross-modal prototype contrastive learning (CMPC), which takes advantage of contrastive methods and resists the adverse effects of false negatives and deviated positives.
arXiv Detail & Related papers (2022-04-28T07:28:56Z) - Attributable Visual Similarity Learning [90.69718495533144]
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Motivated by the human semantic similarity cognition, we propose a generalized similarity learning paradigm to represent the similarity between two images with a graph.
Experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods.
arXiv Detail & Related papers (2022-03-28T17:35:31Z) - Weak Augmentation Guided Relational Self-Supervised Learning [80.0680103295137]
We introduce a novel relational self-supervised learning (ReSSL) framework that learns representations by modeling the relationship between different instances.
Our proposed method employs a sharpened distribution of pairwise similarities among different instances as a relation metric.
Experimental results show that our proposed ReSSL substantially outperforms the state-of-the-art methods across different network architectures.
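The ReSSL summary above centers on one concrete idea: the relation metric is a softmax distribution over pairwise similarities, sharpened with a low temperature. A minimal numpy sketch of that idea follows; the weak/strong augmentation split and the specific temperatures are assumptions based on the abstract, not the paper's exact recipe:

```python
import numpy as np

def relation_distribution(emb, temperature):
    """Softmax over pairwise cosine similarities (self-pairs excluded)."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T / temperature
    np.fill_diagonal(sim, -np.inf)                      # ignore self-similarity
    sim = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    p = np.exp(sim)
    return p / p.sum(axis=1, keepdims=True)

def ressl_loss(weak_emb, strong_emb, t_weak=0.04, t_strong=0.1):
    """Align the strong view's relation distribution with a sharpened
    (lower-temperature) target computed from the weakly augmented view."""
    target = relation_distribution(weak_emb, t_weak)    # sharpened target
    pred = relation_distribution(strong_emb, t_strong)
    return -(target * np.log(pred + 1e-12)).sum(axis=1).mean()
```

The lower target temperature is what "sharpens" the distribution: it concentrates probability mass on the most similar instances, so the strong view is pulled toward the same neighborhood structure.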
arXiv Detail & Related papers (2022-03-16T16:14:19Z) - Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
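The summary above describes encouraging instances that share a class label to have similar representations. A minimal numpy sketch in the style of a supervised contrastive (SupCon-like) loss illustrates this; the function name, temperature, and averaging scheme are assumptions, not the paper's exact debiasing objective:

```python
import numpy as np

def supcon_loss(emb, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, every other instance
    sharing its class label is treated as a positive."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T / temperature
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - m - np.log(np.exp(sim - m).sum(axis=1, keepdims=True))
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)
    counts = pos.sum(axis=1)
    valid = counts > 0                                  # anchors with >=1 positive
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1)
    return (per_anchor[valid] / counts[valid]).mean()
```

Averaging the log-probabilities over all same-label positives is what pulls a class into a single compact cluster, rather than matching each instance to only one designated positive.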
arXiv Detail & Related papers (2021-09-22T10:47:51Z) - Instance Similarity Learning for Unsupervised Feature Representation [83.31011038813459]
We propose an instance similarity learning (ISL) method for unsupervised feature representation.
We employ Generative Adversarial Networks (GANs) to mine the underlying feature manifold.
Experiments on image classification demonstrate the superiority of our method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-08-05T16:42:06Z) - ReSSL: Relational Self-Supervised Learning with Weak Augmentation [68.47096022526927]
Self-supervised learning has achieved great success in learning visual representations without data annotations.
We introduce a novel relational SSL paradigm that learns representations by modeling the relationship between different instances.
Our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency.
arXiv Detail & Related papers (2021-07-20T06:53:07Z) - Cross-Domain Similarity Learning for Face Recognition in Unseen Domains [90.35908506994365]
We introduce a novel cross-domain metric learning loss, which we dub Cross-Domain Triplet (CDT) loss, to improve face recognition in unseen domains.
The CDT loss encourages learning semantically meaningful features by enforcing compact feature clusters of identities from one domain.
Our method does not require careful hard-pair sample mining and filtering strategy during training.
arXiv Detail & Related papers (2021-03-12T19:48:01Z) - Contrastive Behavioral Similarity Embeddings for Generalization in Reinforcement Learning [41.85795493411269]
We introduce a theoretically motivated policy similarity metric (PSM) for measuring behavioral similarity between states.
PSM assigns high similarity to states for which the optimal policies in those states as well as in future states are similar.
We present a contrastive representation learning procedure to embed any state similarity metric, which we instantiate with PSM to obtain policy similarity embeddings.
arXiv Detail & Related papers (2021-01-13T18:55:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.