Identifying Ambiguous Similarity Conditions via Semantic Matching
- URL: http://arxiv.org/abs/2204.04053v1
- Date: Fri, 8 Apr 2022 13:15:55 GMT
- Title: Identifying Ambiguous Similarity Conditions via Semantic Matching
- Authors: Han-Jia Ye, Yi Shi, De-Chuan Zhan
- Abstract summary: We introduce Weakly Supervised Conditional Similarity Learning (WS-CSL).
WS-CSL learns multiple embeddings to match semantic conditions without explicit condition labels such as "can fly".
We propose the Distance Induced Semantic COndition VERification Network (DiscoverNet), which characterizes the instance-instance and triplet-condition relations in a "decompose-and-fuse" manner.
- Score: 49.06931755266372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rich semantics inside an image result in its ambiguous relationship with
others, i.e., two images could be similar in one condition but dissimilar in
another. Given triplets like "aircraft" is more similar to "bird" than to "train",
Weakly Supervised Conditional Similarity Learning (WS-CSL) learns multiple
embeddings to match semantic conditions without explicit condition labels such
as "can fly". However, similarity relationships in a triplet are uncertain
except providing a condition. For example, the previous comparison becomes
invalid once the conditional label changes to "is vehicle". To this end, we
introduce a novel evaluation criterion by predicting the comparison's
correctness after assigning the learned embeddings to their optimal conditions,
which measures how much WS-CSL could cover latent semantics as the supervised
model. Furthermore, we propose the Distance Induced Semantic COndition
VERification Network (DiscoverNet), which characterizes the instance-instance
and triplets-condition relations in a "decompose-and-fuse" manner. To make the
learned embeddings cover all semantics, DiscoverNet utilizes a set module or an
additional regularizer over the correspondence between a triplet and a
condition. DiscoverNet achieves state-of-the-art performance on benchmarks like
UT-Zappos-50k and Celeb-A w.r.t. different criteria.
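To make the proposed evaluation criterion concrete, below is a minimal sketch under my own reading of the abstract: each learned embedding is scored on the triplets of each ground-truth condition, embeddings are matched to conditions one-to-one with a Hungarian assignment, and triplet accuracy is reported under that matching. The helper names (`triplet_accuracy`, `wscsl_eval`) and the use of `scipy.optimize.linear_sum_assignment` are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the evaluation criterion described above (an interpretation,
# not the authors' code): score every learned embedding on the triplets of every
# ground-truth condition, assign embeddings to conditions one-to-one so that the
# total accuracy is maximal, and report accuracy under that assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

def triplet_accuracy(emb, anchors, positives, negatives):
    """Fraction of triplets whose anchor is closer to the positive than to the negative."""
    d_pos = np.linalg.norm(emb[anchors] - emb[positives], axis=1)
    d_neg = np.linalg.norm(emb[anchors] - emb[negatives], axis=1)
    return float(np.mean(d_pos < d_neg))

def wscsl_eval(embeddings, triplets, conditions, num_conditions):
    """embeddings: list of (N, d_k) arrays, one per learned embedding space.
    triplets:   (T, 3) int array of (anchor, positive, negative) indices.
    conditions: (T,) int array with the ground-truth condition of each triplet."""
    acc = np.zeros((len(embeddings), num_conditions))
    for c in range(num_conditions):
        a, p, n = triplets[conditions == c].T
        for k, emb in enumerate(embeddings):
            acc[k, c] = triplet_accuracy(emb, a, p, n)
    rows, cols = linear_sum_assignment(-acc)  # maximize total matched accuracy
    weights = np.bincount(conditions, minlength=num_conditions)[cols]
    return float(np.sum(acc[rows, cols] * weights) / np.sum(weights))
```

Under this reading, a fully supervised conditional model provides the reference point that the criterion measures WS-CSL against.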
Related papers
- Semantic similarity prediction is better than other semantic similarity measures [5.176134438571082]
We argue that when we are only interested in measuring the semantic similarity, it is better to directly predict the similarity using a fine-tuned model for such a task.
Using a model fine-tuned on the Semantic Textual Similarity Benchmark (STS-B) task from the GLUE benchmark, we define the STSScore approach and show that the resulting similarity is better aligned with our expectations of a robust semantic similarity measure than other approaches (a minimal sketch of this direct-prediction setup appears after this list).
arXiv Detail & Related papers (2023-09-22T08:11:01Z)
- C-STS: Conditional Semantic Textual Similarity [70.09137422955506]
We propose a novel task called Conditional STS (C-STS).
It measures the similarity of two sentences conditioned on a feature described in natural language (hereon, the condition).
C-STS's advantages are two-fold: it reduces the subjectivity and ambiguity of STS and enables fine-grained language model evaluation through diverse natural language conditions.
arXiv Detail & Related papers (2023-05-24T12:18:50Z)
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm for further exploring the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- Attributable Visual Similarity Learning [90.69718495533144]
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Motivated by how humans assess semantic similarity, we propose a generalized similarity learning paradigm that represents the similarity between two images with a graph.
Experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods.
arXiv Detail & Related papers (2022-03-28T17:35:31Z)
- Duality-Induced Regularizer for Semantic Matching Knowledge Graph Embeddings [70.390286614242]
We propose a novel regularizer -- namely, DUality-induced RegulArizer (DURA) -- which effectively encourages the entities with similar semantics to have similar embeddings.
Experiments demonstrate that DURA consistently and significantly improves the performance of state-of-the-art semantic matching models.
arXiv Detail & Related papers (2022-03-24T09:24:39Z)
- SimMatch: Semi-supervised Learning with Similarity Matching [43.61802702362675]
SimMatch is a new semi-supervised learning framework that considers semantic similarity and instance similarity.
With 400 epochs of training, SimMatch achieves 67.2% and 74.4% Top-1 accuracy with 1% and 10% labeled examples on ImageNet, respectively.
arXiv Detail & Related papers (2022-03-14T08:08:48Z)
- Semantic Answer Similarity for Evaluating Question Answering Models [2.279676596857721]
SAS is a cross-encoder-based metric for the estimation of semantic answer similarity.
We show that semantic similarity metrics based on recent transformer models correlate much better with human judgment than traditional lexical similarity metrics.
arXiv Detail & Related papers (2021-08-13T09:12:27Z)
- A Theory-Driven Self-Labeling Refinement Method for Contrastive Representation Learning [111.05365744744437]
Unsupervised contrastive learning labels crops of the same image as positives, and other image crops as negatives.
In this work, we first prove that for contrastive learning, inaccurate label assignment heavily impairs its generalization for semantic instance discrimination.
Inspired by this theory, we propose a novel self-labeling refinement approach for contrastive learning.
arXiv Detail & Related papers (2021-06-28T14:24:52Z)
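As referenced in the STSScore entry above, here is a minimal sketch of directly predicting semantic similarity with a model fine-tuned on STS-B. The checkpoint name is an assumption about an available Hugging Face cross-encoder, and `sts_score` is a hypothetical helper rather than code released with any of the papers listed here.

```python
# Minimal sketch (assumptions noted in the text above): predict the similarity of a
# sentence pair directly with a regression cross-encoder fine-tuned on STS-B,
# instead of comparing independently computed sentence embeddings.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "cross-encoder/stsb-roberta-base"  # assumed STS-B fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def sts_score(sentence_a: str, sentence_b: str) -> float:
    """Return the model's predicted similarity for a sentence pair."""
    inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze().item()  # single regression output for an STS-B style head

print(sts_score("A plane is taking off.", "An aircraft is departing."))
```

The Semantic Answer Similarity (SAS) entry describes a cross-encoder metric of the same shape, with the pair being a predicted answer and a gold answer rather than two arbitrary sentences.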
This list is automatically generated from the titles and abstracts of the papers on this site.