Deep Metric Learning Assisted by Intra-variance in A Semi-supervised View of Learning
- URL: http://arxiv.org/abs/2304.10941v1
- Date: Fri, 21 Apr 2023 13:30:32 GMT
- Title: Deep Metric Learning Assisted by Intra-variance in A Semi-supervised View of Learning
- Authors: Liu Pingping, Liu Zetong, Lang Yijun, Zhou Qiuzhan, Li Qingliang
- Abstract summary: Deep metric learning aims to construct an embedding space where samples of the same class are close to each other, while samples of different classes are far away from each other.
This paper designs a self-supervised generative assisted ranking framework that provides a semi-supervised view of intra-class variance learning for typical supervised deep metric learning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep metric learning aims to construct an embedding space where samples of the same class are close to each other, while samples of different classes are far apart. Most existing deep metric learning methods attempt to maximize the difference between inter-class features, obtaining semantically related information by increasing the distance between samples of different classes in the embedding space. However, compressing all positive samples together while creating large margins between different classes inadvertently destroys the local structure among similar samples. When the intra-class variance contained in this local structure is ignored, the embedding space learned during training generalizes poorly to unseen classes, leading the network to overfit the training set and fail on the test set. To address these issues, this paper designs a self-supervised generative assisted ranking framework that provides a semi-supervised view of intra-class variance learning for typical supervised deep metric learning. Specifically, the framework synthesizes samples with varying intensities and diversity from samples satisfying certain conditions, simulating the complex transformations of intra-class samples. An intra-class ranking loss, built on the idea of self-supervised learning, then constrains the network to maintain the intra-class distribution during training and thereby capture subtle intra-class variance. With this approach, a more realistic embedding space is obtained in which both the global and local structures of samples are well preserved, enhancing the effectiveness of downstream tasks. Extensive experiments on four benchmarks show that this approach surpasses state-of-the-art methods.
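To make the ranking idea concrete, here is a minimal PyTorch-style sketch of one plausible reading of it: synthetic variants of an anchor embedding are generated at increasing intensities, and a hinge term asks variants from weaker transformations to stay closer to the anchor than variants from stronger ones. The noise-based generator, the function names, and the margin value are illustrative assumptions, not the authors' released code (the paper uses a learned generative module).

```python
import torch
import torch.nn.functional as F

def synthesize_variants(anchor, intensities):
    """Stand-in generator: perturb anchor embeddings with Gaussian noise
    scaled by each intensity to mimic intra-class transformations of
    increasing strength."""
    return [anchor + t * torch.randn_like(anchor) for t in intensities]

def intra_class_ranking_loss(anchor, intensities=(0.1, 0.2, 0.4), margin=0.05):
    """Hinge-based ranking over synthesis intensities: a variant produced
    with a weaker transformation should lie closer to its anchor than a
    variant produced with a stronger one."""
    variants = synthesize_variants(anchor, intensities)
    dists = [F.pairwise_distance(anchor, v) for v in variants]  # each (B,)
    loss = anchor.new_zeros(anchor.size(0))
    for weak, strong in zip(dists, dists[1:]):
        # penalize orderings where the weaker variant drifts farther away
        loss = loss + F.relu(weak - strong + margin)
    return loss.mean()

# Usage: add this term to a standard supervised metric loss with a weight.
embeddings = F.normalize(torch.randn(32, 128), dim=1)  # batch of embeddings
ranking_term = intra_class_ranking_loss(embeddings)
```

In the paper's setting this term would be combined with a conventional supervised metric loss, so the intra-class ordering is preserved while inter-class margins are still enforced.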
Related papers
- Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining.
Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
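A rough sketch of the general recipe the IAA summary above describes: estimate a variation scale per class and let it control how far synthetic samples spread. The std-based estimate and Gaussian synthesis below are simplifications for illustration; IAA's actual estimation with neighbor correction is more involved.

```python
import torch

def per_class_scales(embeddings, labels):
    """Simplified variation estimate: per-dimension std of each class's
    embeddings (IAA refines this with neighbor correction)."""
    return {int(c): embeddings[labels == c].std(dim=0, unbiased=False)
            for c in labels.unique()}

def adaptive_synthesis(embeddings, labels, n_per_sample=2):
    """Spread synthetic same-class samples in proportion to each class's
    estimated variation, yielding harder positives for mining."""
    scales = per_class_scales(embeddings, labels)
    synth = [emb + scales[int(c)] * torch.randn_like(emb)
             for emb, c in zip(embeddings, labels)
             for _ in range(n_per_sample)]
    new_labels = labels.repeat_interleave(n_per_sample)
    return torch.stack(synth), new_labels
```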
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features and follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Is it all a cluster game? -- Exploring Out-of-Distribution Detection based on Clustering in the Embedding Space [7.856998585396422]
It is essential for safety-critical applications of deep neural networks to determine when new inputs are significantly different from the training distribution.
We study the structure and separation of clusters in the embedding space and find that supervised contrastive learning leads to well-separated clusters.
In our analysis of different training methods, clustering strategies, distance metrics, and thresholding approaches, we observe that there is no clear winner.
arXiv Detail & Related papers (2022-03-16T11:22:23Z)
- Leveraging Ensembles and Self-Supervised Learning for Fully-Unsupervised Person Re-Identification and Text Authorship Attribution [77.85461690214551]
Learning from fully-unlabeled data is challenging in Multimedia Forensics problems, such as Person Re-Identification and Text Authorship Attribution.
Recent self-supervised learning methods have been shown to be effective when dealing with fully-unlabeled data in cases where the underlying classes have significant semantic differences.
We propose a strategy to tackle Person Re-Identification and Text Authorship Attribution by enabling learning from unlabeled data even when samples from different classes are not prominently diverse.
arXiv Detail & Related papers (2022-02-07T13:08:11Z)
- Self-Supervised Learning by Estimating Twin Class Distributions [26.7828253129684]
We present TWIST, a novel self-supervised representation learning method by classifying large-scale unlabeled datasets in an end-to-end way.
We employ a siamese network terminated by a softmax operation to produce twin class distributions of two augmented images.
Specifically, we minimize the entropy of the distribution for each sample to make the class prediction for each sample and maximize the entropy of the mean distribution to make the predictions of different samples diverse.
arXiv Detail & Related papers (2021-10-14T14:39:39Z)
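The entropy terms in the TWIST summary lend themselves to a compact sketch. The following is an assumed reading of the objective; the symmetric consistency term and the epsilon handling are guesses for illustration, not the paper's exact formulation.

```python
import torch

def twist_loss(logits_a, logits_b, eps=1e-8):
    """Twin class distributions from two augmented views of the same image:
    agree with each other, be confident per sample (low entropy), and stay
    diverse across the batch (high entropy of the mean distribution)."""
    p_a, p_b = logits_a.softmax(dim=1), logits_b.softmax(dim=1)
    # symmetric cross-entropy pulls the twin distributions together
    consistency = -0.5 * ((p_a * (p_b + eps).log()).sum(1)
                          + (p_b * (p_a + eps).log()).sum(1)).mean()
    # per-sample entropy, minimized to sharpen each class prediction
    sharpness = -0.5 * ((p_a * (p_a + eps).log()).sum(1)
                        + (p_b * (p_b + eps).log()).sum(1)).mean()
    # entropy of the mean distribution, maximized to avoid collapse
    p_mean = 0.5 * (p_a.mean(0) + p_b.mean(0))
    diversity = -(p_mean * (p_mean + eps).log()).sum()
    return consistency + sharpness - diversity
```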
- Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize both seen and unseen samples, where unseen classes are not observable during training.
We propose a novel Generation Shifts Mitigating Flow (GSMFlow) framework for synthesizing unseen data efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z)
- Rethinking preventing class-collapsing in metric learning with margin-based losses [81.22825616879936]
Metric learning seeks embeddings where visually similar instances are close and dissimilar instances are apart.
However, margin-based losses tend to project all samples of a class onto a single point in the embedding space.
We propose a simple modification to the embedding losses such that each sample selects its nearest same-class counterpart in a batch.
arXiv Detail & Related papers (2020-06-09T09:59:25Z)
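The nearest-counterpart selection in that last entry is simple to sketch. Under the assumption that each class contributes at least two samples per batch, one plausible implementation replaces the usual "pull toward all positives" target with the nearest same-class embedding:

```python
import torch

def nearest_positive_targets(embeddings, labels):
    """For each sample, pick its nearest same-class counterpart in the batch
    as the attraction target, leaving room for intra-class spread.
    Assumes every class has at least two samples in the batch."""
    dists = torch.cdist(embeddings, embeddings)        # (B, B) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # same-class mask
    dists = dists.masked_fill(~same, float("inf"))     # exclude negatives
    dists.fill_diagonal_(float("inf"))                 # exclude self-matches
    nearest = dists.argmin(dim=1)                      # nearest positive index
    return embeddings[nearest]                         # targets for a margin loss
```

These per-sample targets then stand in for the single-point class target inside the existing margin-based loss, which is what prevents the class from collapsing to one point.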