Introspective Deep Metric Learning for Image Retrieval
- URL: http://arxiv.org/abs/2205.04449v2
- Date: Tue, 5 Sep 2023 11:42:07 GMT
- Title: Introspective Deep Metric Learning for Image Retrieval
- Authors: Wenzhao Zheng, Chengkun Wang, Jie Zhou, Jiwen Lu
- Abstract summary: We argue that a good similarity model should consider the semantic discrepancies with caution to better deal with ambiguous images for more robust training.
We propose to represent an image using not only a semantic embedding but also an accompanying uncertainty embedding, which describe the semantic characteristics and ambiguity of an image, respectively.
The proposed IDML framework improves the performance of deep metric learning through uncertainty modeling and attains state-of-the-art results on the widely used CUB-200-2011, Cars196, and Stanford Online Products datasets.
- Score: 80.29866561553483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes an introspective deep metric learning (IDML) framework for uncertainty-aware comparisons of images. Conventional deep metric learning methods produce confident semantic distances between images regardless of the uncertainty level. However, we argue that a good similarity model should consider semantic discrepancies with caution to better deal with ambiguous images for more robust training. To achieve this, we propose to represent an image using not only a semantic embedding but also an accompanying uncertainty embedding, which describe the semantic characteristics and ambiguity of an image, respectively. We further propose an introspective similarity metric that makes similarity judgments between images considering both their semantic differences and ambiguities. The proposed IDML framework improves the performance of deep metric learning through uncertainty modeling and attains state-of-the-art results on the widely used CUB-200-2011, Cars196, and Stanford Online Products datasets for image retrieval and clustering. We further provide an in-depth analysis of our framework to demonstrate the effectiveness and reliability of IDML. Code is available at: https://github.com/wzzheng/IDML.
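To make the abstract's idea concrete, below is a minimal sketch of an uncertainty-aware ("introspective") distance built from a semantic embedding and an uncertainty embedding. The specific combination rule (semantic discrepancy plus an additive ambiguity penalty) and all names are illustrative assumptions, not the paper's exact formulation; see the linked repository for the authors' implementation.

```python
import torch
import torch.nn.functional as F

def introspective_distance(z1: torch.Tensor, z2: torch.Tensor,
                           u1: torch.Tensor, u2: torch.Tensor) -> torch.Tensor:
    """Uncertainty-aware distance between two batches of image embeddings.

    z1, z2: semantic embeddings, shape (batch, dim)
    u1, u2: uncertainty embeddings, shape (batch, dim)

    Illustrative rule (an assumption, not the paper's definition): the
    squared Euclidean semantic distance is inflated by a penalty that
    grows with the combined ambiguity of the pair, so uncertain pairs
    are compared "with caution".
    """
    semantic = ((z1 - z2) ** 2).sum(dim=-1)   # semantic discrepancy
    ambiguity = ((u1 + u2) ** 2).sum(dim=-1)  # pairwise ambiguity penalty
    return semantic + ambiguity

# Toy usage: two batches of 4 images with 128-d embeddings.
z1, z2 = F.normalize(torch.randn(4, 128)), F.normalize(torch.randn(4, 128))
u1, u2 = 0.1 * torch.rand(4, 128), 0.1 * torch.rand(4, 128)
print(introspective_distance(z1, z2, u1, u2).shape)  # torch.Size([4])
```

Because the ambiguity term is additive, such a distance can replace the plain embedding distance inside standard metric-learning losses (e.g., triplet or pair-based losses) without other changes to training.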
Related papers
- Knowledge Fused Recognition: Fusing Hierarchical Knowledge for Image Recognition through Quantitative Relativity Modeling and Deep Metric Learning [18.534970504136254]
We propose a novel deep metric learning based method to fuse hierarchical prior knowledge about image classes.
Existing deep metric learning methods applied to image classification mainly exploit qualitative relativity between image classes.
A new triplet loss term that exploits quantitative relativity and aligns distances in the model's latent space with those in the knowledge space is also proposed and incorporated into the dual-modality fusion method; a hedged sketch of this loss appears after the related-papers list below.
arXiv Detail & Related papers (2024-07-30T07:24:33Z)
- Annotation Cost-Efficient Active Learning for Deep Metric Learning Driven Remote Sensing Image Retrieval [3.2109665109975696]
ANNEAL aims to create a small but informative training set made up of similar and dissimilar image pairs.
The informativeness of image pairs is evaluated by combining uncertainty and diversity criteria.
This way of annotating images significantly reduces the annotation cost compared to annotating images with land-use land-cover class labels.
arXiv Detail & Related papers (2024-06-14T15:08:04Z)
- Introspective Deep Metric Learning [91.47907685364036]
We propose an introspective deep metric learning framework for uncertainty-aware comparisons of images.
The proposed IDML framework improves the performance of deep metric learning through uncertainty modeling.
arXiv Detail & Related papers (2023-09-11T16:21:13Z)
- Attributable Visual Similarity Learning [90.69718495533144]
This paper proposes an attributable visual similarity learning (AVSL) framework for a more accurate and explainable similarity measure between images.
Motivated by the human semantic similarity cognition, we propose a generalized similarity learning paradigm to represent the similarity between two images with a graph.
Experiments on the CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate significant improvements over existing deep similarity learning methods.
arXiv Detail & Related papers (2022-03-28T17:35:31Z)
- Learning Contrastive Representation for Semantic Correspondence [150.29135856909477]
We propose a multi-level contrastive learning approach for semantic matching.
We show that image-level contrastive learning is a key component to encourage the convolutional features to find correspondence between similar objects.
arXiv Detail & Related papers (2021-09-22T18:34:14Z)
- Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z)
- Intrinsic Image Captioning Evaluation [53.51379676690971]
We propose a learning-based metric for image captioning, which we call Intrinsic Image Captioning Evaluation (I2CE).
Experimental results show that our proposed method maintains robust performance and gives more flexible scores to candidate captions when encountering semantically similar expressions or less-aligned semantics.
arXiv Detail & Related papers (2020-12-14T08:36:05Z)
- DeepSim: Semantic similarity metrics for learned image registration [6.789370732159177]
We propose a semantic similarity metric for image registration.
Our approach learns dataset-specific features that drive the optimization of a learning-based registration model.
arXiv Detail & Related papers (2020-11-11T12:35:07Z)
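As a concrete illustration of the knowledge-aligned triplet loss described under "Knowledge Fused Recognition" above, here is a hedged sketch. The function name, the squared-error alignment term, and the align_weight parameter are hypothetical reconstructions of the idea, not that paper's actual loss.

```python
import torch
import torch.nn.functional as F

def knowledge_aligned_triplet_loss(anchor, positive, negative,
                                   k_ap, k_an, margin=0.2, align_weight=1.0):
    """Triplet loss plus a quantitative-relativity alignment term.

    anchor/positive/negative: embeddings, shape (batch, dim)
    k_ap, k_an: class distances in a prior knowledge hierarchy, shape (batch,)
    The alignment term pulls latent-space distances toward the
    knowledge-space distances (a hypothetical formulation).
    """
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    triplet = F.relu(d_ap - d_an + margin).mean()
    align = ((d_ap - k_ap) ** 2 + (d_an - k_an) ** 2).mean()
    return triplet + align_weight * align

# Toy usage with random embeddings and made-up hierarchy distances.
a, p, n = torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64)
k_ap, k_an = torch.full((8,), 0.5), torch.full((8,), 2.0)
print(knowledge_aligned_triplet_loss(a, p, n, k_ap, k_an).item())
```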
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information (including all listed content) and is not responsible for any consequences of its use.