ContraSim -- A Similarity Measure Based on Contrastive Learning
- URL: http://arxiv.org/abs/2303.16992v2
- Date: Tue, 30 May 2023 09:47:33 GMT
- Title: ContraSim -- A Similarity Measure Based on Contrastive Learning
- Authors: Adir Rahamim, Yonatan Belinkov
- Abstract summary: We develop a new similarity measure, dubbed ContraSim, based on contrastive learning.
ContraSim learns a parameterized measure by using both similar and dissimilar examples.
In all cases, ContraSim achieves much higher accuracy than previous similarity measures.
- Score: 28.949004915740776
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has compared neural network representations via similarity-based
analyses to improve model interpretation. The quality of a similarity measure
is typically evaluated by its success in assigning a high score to
representations that are expected to be matched. However, existing similarity
measures perform mediocrely on standard benchmarks. In this work, we develop a
new similarity measure, dubbed ContraSim, based on contrastive learning. In
contrast to common closed-form similarity measures, ContraSim learns a
parameterized measure by using both similar and dissimilar examples. We perform
an extensive experimental evaluation of our method, with both language and
vision models, on the standard layer prediction benchmark and two new
benchmarks that we introduce: the multilingual benchmark and the image-caption
benchmark. In all cases, ContraSim achieves much higher accuracy than previous
similarity measures, even when presented with challenging examples. Finally,
ContraSim is more suitable for the analysis of neural networks, revealing new
insights not captured by previous measures.
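The abstract describes ContraSim only at a high level. Below is a minimal sketch of the general idea it names, learning a parameterized similarity measure from similar and dissimilar examples with a contrastive objective; the projection head, InfoNCE-style loss, and temperature here are assumptions of this sketch, not the authors' exact formulation.

```python
# Hypothetical sketch of a contrastive similarity measure in the spirit of
# ContraSim: learn a projection so that matched representations score high and
# mismatched ones score low, then measure similarity in the learned space.
# Architecture, loss, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveSim(nn.Module):
    def __init__(self, dim_in: int, dim_out: int = 128):
        super().__init__()
        # Simple projection head; the actual encoder may differ.
        self.proj = nn.Sequential(nn.Linear(dim_in, dim_out), nn.ReLU(),
                                  nn.Linear(dim_out, dim_out))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(x), dim=-1)

def info_nce(za: torch.Tensor, zb: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # za[i] and zb[i] come from representations that should match (positives);
    # every other row in the batch serves as a negative.
    logits = za @ zb.t() / tau                          # (B, B) similarity matrix
    targets = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(logits, targets)

def similarity(model: ContrastiveSim, r1: torch.Tensor, r2: torch.Tensor) -> torch.Tensor:
    # After training, similarity is the cosine similarity in the learned space.
    return (model(r1) * model(r2)).sum(dim=-1)
```

Training would repeatedly apply `info_nce` to batches of matched representation pairs (e.g., the same inputs seen by two models or layers) and update the projection; the trained `similarity` is then used in place of a closed-form measure.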
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, explanation consistency, to adaptively reweight the training samples in model learning.
Our framework then promotes model learning by paying closer attention to those training samples with a large difference in explanations.
arXiv Detail & Related papers (2024-08-08T17:20:08Z) - Semantic similarity prediction is better than other semantic similarity measures [5.176134438571082]
We argue that when we are only interested in measuring semantic similarity, it is better to directly predict the similarity with a model fine-tuned for that task.
Using a fine-tuned model for the Semantic Textual Similarity Benchmark tasks (STS-B) from the GLUE benchmark, we define the STSScore approach and show that the resulting similarity is better aligned with our expectations on a robust semantic similarity measure than other approaches.
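A minimal sketch of that idea, scoring sentence pairs directly with a publicly available cross-encoder fine-tuned on STS-B, is shown below; the specific checkpoint and example sentences are assumptions of this sketch, not necessarily the exact STSScore setup.

```python
# Hypothetical sketch: predict semantic similarity directly with a model
# fine-tuned on STS-B instead of using a closed-form measure such as cosine.
# The checkpoint name is an assumption, not necessarily the paper's choice.
from sentence_transformers import CrossEncoder

model = CrossEncoder("cross-encoder/stsb-roberta-base")  # fine-tuned on STS-B

pairs = [
    ("A man is playing a guitar.", "Someone is playing an instrument."),
    ("A man is playing a guitar.", "A chef is cooking pasta."),
]
scores = model.predict(pairs)  # one regression score per pair; higher = more similar
print(scores)
```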
arXiv Detail & Related papers (2023-09-22T08:11:01Z) - Contrastive Principal Component Learning: Modeling Similarity by Augmentation Overlap [50.48888534815361]
We propose a novel Contrastive Principal Component Learning (CPCL) method composed of a contrastive-like loss and an on-the-fly projection loss.
By CPCL, the learned low-dimensional embeddings theoretically preserve the similarity of augmentation distribution between samples.
arXiv Detail & Related papers (2022-06-01T13:03:58Z) - Comparing in context: Improving cosine similarity measures with a metric tensor [0.0]
Cosine similarity is a widely used measure of the relatedness of pre-trained word embeddings, trained on a language modeling goal.
We propose instead the use of an extended cosine similarity measure to improve performance on that task, with gains in interpretability.
We learn contextualized metrics and compare the results with the baseline values obtained using the standard cosine similarity measure, consistently finding improvements.
We also train a contextualized similarity measure for both SimLex-999 and WordSim-353, comparing the results with the corresponding baselines, and using these datasets as independent test sets for the all-context similarity measure learned on
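The extended cosine similarity described in this entry replaces the implicit identity inner product with a metric tensor. A minimal sketch of that bilinear form is given below, with the metric parameterized as M = AᵀA so it stays positive semi-definite; that parameterization and the random initialization are assumptions of the sketch, not necessarily the paper's choices.

```python
# Hypothetical sketch of cosine similarity extended with a metric tensor M:
#   sim_M(u, v) = (u^T M v) / sqrt((u^T M u) * (v^T M v)).
# With M = I this reduces to the standard cosine similarity.
import torch

def metric_cosine(u: torch.Tensor, v: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    M = A.t() @ A                                   # positive semi-definite metric
    num = u @ M @ v
    den = torch.sqrt((u @ M @ u) * (v @ M @ v) + 1e-12)
    return num / den

d = 300                                             # embedding dimension (illustrative)
A = torch.randn(d, d, requires_grad=True)           # learnable; trained per context
u, v = torch.randn(d), torch.randn(d)               # stand-ins for word embeddings
print(metric_cosine(u, v, A).item())
```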
arXiv Detail & Related papers (2022-03-28T18:04:26Z) - Attributable Visual Similarity Learning [90.69718495533144]
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Motivated by the human semantic similarity cognition, we propose a generalized similarity learning paradigm to represent the similarity between two images with a graph.
Experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods.
arXiv Detail & Related papers (2022-03-28T17:35:31Z) - SimMatch: Semi-supervised Learning with Similarity Matching [43.61802702362675]
SimMatch is a new semi-supervised learning framework that considers semantic similarity and instance similarity.
With 400 epochs of training, SimMatch achieves 67.2% and 74.4% Top-1 accuracy with 1% and 10% labeled examples on ImageNet, respectively.
arXiv Detail & Related papers (2022-03-14T08:08:48Z) - MNet-Sim: A Multi-layered Semantic Similarity Network to Evaluate Sentence Similarity [0.0]
Similarity is a comparative-subjective measure that varies with the domain within which it is considered.
This paper presents a multi-layered semantic similarity network model built upon multiple similarity measures.
It is shown to achieve better performance scores in assessing sentence similarity.
arXiv Detail & Related papers (2021-11-09T20:43:18Z) - Instance-Level Relative Saliency Ranking with Graph Reasoning [126.09138829920627]
We present a novel unified model to segment salient instances and infer relative saliency rank order.
A novel loss function is also proposed to effectively train the saliency ranking branch.
Experimental results demonstrate that our proposed model is more effective than previous methods.
arXiv Detail & Related papers (2021-07-08T13:10:42Z) - Uncertainty-Aware Few-Shot Image Classification [118.72423376789062]
Few-shot image classification learns to recognize new categories from limited labelled data.
We propose Uncertainty-Aware Few-Shot framework for image classification.
arXiv Detail & Related papers (2020-10-09T12:26:27Z) - Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms the state-of-the-art method, with larger gains when the training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z) - Determining Image similarity with Quasi-Euclidean Metric [0.0]
We evaluate the Quasi-Euclidean metric as an image similarity measure and analyze how it fares against existing standard measures such as SSIM and the Euclidean metric (a minimal sketch of the metric itself follows this entry).
In some cases, our methodology shows remarkable performance, and our implementation proves to be a step ahead in recognizing similarity.
arXiv Detail & Related papers (2020-06-25T18:12:21Z)
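For reference, the quasi-Euclidean metric named in the last entry above is the classic grid distance that mixes axial and diagonal steps to approximate Euclidean distance. The sketch below shows only the point metric next to the ordinary Euclidean distance; how the paper aggregates it over whole images is not detailed in the summary above.

```python
# The quasi-Euclidean metric between two 2-D points, compared with the
# Euclidean distance. Image-level aggregation is not shown here because the
# summary above does not specify how the paper applies the metric to images.
import math

def quasi_euclidean(p: tuple[float, float], q: tuple[float, float]) -> float:
    dx, dy = abs(p[0] - q[0]), abs(p[1] - q[1])
    if dx > dy:
        return dx + (math.sqrt(2) - 1.0) * dy
    return (math.sqrt(2) - 1.0) * dx + dy

def euclidean(p: tuple[float, float], q: tuple[float, float]) -> float:
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (0.0, 0.0), (3.0, 4.0)
print(quasi_euclidean(p, q), euclidean(p, q))  # ~5.24 vs. 5.0: close but not equal
```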