Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning
- URL: http://arxiv.org/abs/2211.16264v1
- Date: Tue, 29 Nov 2022 14:52:38 GMT
- Title: Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning
- Authors: Zheren Fu, Zhendong Mao, Bo Hu, An-An Liu, Yongdong Zhang
- Abstract summary: We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining.
Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
- Score: 99.14132861655223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep metric learning aims to learn an embedding space where
semantically similar samples are close together and dissimilar ones are
pushed apart. To explore harder and more informative training signals for
augmentation and generalization, recent methods focus on generating
synthetic samples to boost metric learning losses. However, these methods
use only deterministic, class-independent generation (e.g., simple linear
interpolation), which can cover only a limited part of the distribution
space around the original samples. They overlook the widely varying
characteristics of different classes and cannot model rich intra-class
variations for generation. As a result, generated samples not only lack
rich semantics within their class, but may also be noisy signals that
disturb training. In this paper, we propose a novel intra-class adaptive
augmentation (IAA) framework for deep metric learning. We reasonably
estimate intra-class variations for every class and generate adaptive
synthetic samples to support hard sample mining and boost metric learning
losses. Further, since most datasets have only a few samples per class, we
propose a neighbor correction that revises inaccurate estimations, based on
our discovery that similar classes generally have similar variation
distributions. Extensive experiments on five benchmarks show that our
method significantly outperforms state-of-the-art methods, improving
retrieval performance by 3%-6%. Our code is available at
https://github.com/darkpromise98/IAA
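
To make the framework concrete, here is a minimal NumPy sketch of the pipeline as we read it from the abstract: estimate per-class variation, correct small-class estimates from similar (neighboring) classes, then draw adaptive synthetic samples. All function names, the neighbor count `k`, the `min_count` threshold, and the 50/50 blend are our illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def estimate_class_stats(embeddings, labels):
    """Per-class centroid, diagonal intra-class variance, and sample count."""
    classes = np.unique(labels)
    centroids = {c: embeddings[labels == c].mean(axis=0) for c in classes}
    variances = {c: embeddings[labels == c].var(axis=0) for c in classes}
    counts = {c: int((labels == c).sum()) for c in classes}
    return centroids, variances, counts

def neighbor_correct(centroids, variances, counts, k=3, min_count=10):
    """Revise variance estimates of small classes using their k nearest
    classes, exploiting the observation that similar classes tend to have
    similar variation distributions."""
    classes = list(centroids)
    C = np.stack([centroids[c] for c in classes])
    corrected = {}
    for i, c in enumerate(classes):
        if counts[c] >= min_count:
            corrected[c] = variances[c]
            continue
        dist = np.linalg.norm(C - C[i], axis=1)
        neighbors = [classes[j] for j in np.argsort(dist)[1:k + 1]]
        neighbor_var = np.mean([variances[n] for n in neighbors], axis=0)
        corrected[c] = 0.5 * variances[c] + 0.5 * neighbor_var  # blend (ours)
    return corrected

def synthesize(embeddings, labels, variances, n_per_sample=2):
    """Draw adaptive synthetic samples around each embedding from a Gaussian
    with its class's (corrected) intra-class variance."""
    synth, synth_labels = [], []
    for x, y in zip(embeddings, labels):
        std = np.sqrt(variances[y])
        for _ in range(n_per_sample):
            synth.append(x + np.random.randn(x.size) * std)
            synth_labels.append(y)
    return np.stack(synth), np.array(synth_labels)
```

The synthetic embeddings can then be mixed into the batch as extra candidates for hard sample mining by any standard metric learning loss.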
Related papers
- Adaptive Margin Global Classifier for Exemplar-Free Class-Incremental Learning [3.4069627091757178]
Existing methods mainly focus on handling biased learning.
We introduce a Distribution-Based Global Classifier (DBGC) to avoid bias factors in existing methods, such as data imbalance and sampling.
More importantly, the compromised distributions of old classes are simulated via a simple operation, variance enlarging (VE).
The resulting loss is proven equivalent to an Adaptive Margin Softmax Cross Entropy (AMarX).
arXiv Detail & Related papers (2024-09-20T07:07:23Z)
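
For intuition about what an Adaptive Margin Softmax Cross Entropy might look like, here is a generic per-class additive-margin softmax cross entropy in NumPy. The `scale` value and the idea of passing precomputed per-class margins are our assumptions; the paper derives its margins from the simulated old-class distributions rather than taking them as inputs.

```python
import numpy as np

def adaptive_margin_softmax_ce(logits, labels, margins, scale=16.0):
    """Cross entropy where each target logit is reduced by a per-class
    additive margin before the (scaled) softmax, so classes with larger
    margins are pushed to be more separable."""
    n = len(labels)
    z = scale * logits.astype(float)
    z[np.arange(n), labels] -= scale * margins[labels]  # penalize targets
    z -= z.max(axis=1, keepdims=True)                   # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(n), labels].mean()
```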
- Adaptive Intra-Class Variation Contrastive Learning for Unsupervised Person Re-Identification [10.180143197144803]
We propose an adaptive intra-class variation contrastive learning algorithm for unsupervised Re-ID, called AdaInCV.
The algorithm quantitatively evaluates the learning ability of the model for each class by considering the intra-class variations after clustering.
To be more specific, two new strategies are proposed: Adaptive Sample Mining (AdaSaM) and Adaptive Outlier Filter (AdaOF).
arXiv Detail & Related papers (2024-04-06T15:48:14Z)
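
As a rough sketch of the per-class "learning ability" signal described above, one can score each cluster by how tightly its features concentrate around the centroid; the scoring function below is our guess at the general shape, not AdaInCV's exact formula.

```python
import numpy as np

def intra_class_variation(features, pseudo_labels):
    """Mean distance of each cluster's features to its centroid after
    clustering; tighter clusters suggest pseudo-classes the model has
    already learned well, looser ones call for gentler mining/filtering."""
    scores = {}
    for c in np.unique(pseudo_labels):
        f = features[pseudo_labels == c]
        scores[c] = np.linalg.norm(f - f.mean(axis=0), axis=1).mean()
    return scores
```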
- Deep Metric Learning Assisted by Intra-variance in A Semi-supervised View of Learning [0.0]
Deep metric learning aims to construct an embedding space where samples of the same class are close to each other, while samples of different classes are far away from each other.
This paper designs a self-supervised generative assisted ranking framework that provides a semi-supervised view of the intra-class variance learning scheme for typical supervised deep metric learning.
arXiv Detail & Related papers (2023-04-21T13:30:32Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
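
To illustrate what a reweighted score function means in this setting, here is a per-sample weighted least-squares score for a linear SEM, the typical fit term in differentiable causal discovery. How ReScore actually learns the weights during training is not shown, and the mean-one normalization is our choice.

```python
import numpy as np

def reweighted_ls_score(X, W, weights):
    """Least-squares fit of the linear SEM X ≈ X @ W with adaptive
    per-sample weights, normalized so the average weight is one."""
    n = X.shape[0]
    w = n * weights / weights.sum()
    residuals = X - X @ W
    return float((w[:, None] * residuals ** 2).sum() / (2 * n))
```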
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
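
For context, the basic SMOTE-style interpolation that such over-samplers build on looks like the sketch below. AutoSMOTE's contribution is to optimize the surrounding decisions (how many samples, which neighbors, etc.) with hierarchical reinforcement learning, which is not shown here; parameter names are ours.

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=5, rng=None):
    """Create n_new synthetic minority samples by interpolating between a
    random minority sample and one of its k nearest minority neighbors
    (requires len(X_min) > k)."""
    rng = rng or np.random.default_rng()
    dists = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    nn = np.argsort(dists, axis=1)[:, 1:k + 1]   # skip self at column 0
    synth = []
    for _ in range(n_new):
        i = int(rng.integers(len(X_min)))
        j = nn[i, int(rng.integers(k))]
        lam = rng.random()                        # interpolation factor
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.stack(synth)
```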
- Lightweight Conditional Model Extrapolation for Streaming Data under Class-Prior Shift [27.806085423595334]
We introduce LIMES, a new method for learning with non-stationary streaming data.
We learn a single set of model parameters from which a specific classifier for any specific data distribution is derived.
Experiments on a set of exemplary tasks using Twitter data show that LIMES achieves higher accuracy than alternative approaches.
arXiv Detail & Related papers (2022-06-10T15:19:52Z)
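
The "derive a classifier for any specific distribution" idea can be illustrated with the standard class-prior correction: shared scores are re-biased by the log ratio of the target prior to the training prior. This is only the textbook adjustment, not LIMES's conditional extrapolation itself.

```python
import numpy as np

def adapt_logits_to_prior(logits, train_prior, target_prior):
    """Re-bias shared per-class scores for a new class prior; a softmax over
    the returned logits approximates p(y|x) under the target prior."""
    return logits + np.log(target_prior) - np.log(train_prior)
```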
- Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize seen and unseen samples, where unseen classes are not observable during training.
We propose a novel Generation Shifts Mitigating Flow framework for learning unseen data synthesis efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z)
- Attentional-Biased Stochastic Gradient Descent [74.49926199036481]
We present a provable method (named ABSGD) for addressing the data imbalance or label noise problem in deep learning.
Our method is a simple modification to momentum SGD where we assign an individual importance weight to each sample in the mini-batch.
ABSGD is flexible enough to combine with other robust losses without any additional cost.
arXiv Detail & Related papers (2020-12-13T03:41:52Z)
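
A minimal sketch of the per-sample weighting described above, assuming (as in typical attentional-bias schemes) softmax-style weights over the mini-batch losses; the temperature name `lam` and the mean-one normalization are our choices.

```python
import numpy as np

def absgd_style_weights(losses, lam=1.0):
    """Self-normalized importance weights over a mini-batch. lam > 0
    emphasizes hard (high-loss) samples, useful under class imbalance;
    lam < 0 would instead downweight them, useful under label noise."""
    z = losses / lam
    z -= z.max()                        # numerical stability
    w = np.exp(z)
    return w * len(losses) / w.sum()    # normalize to mean weight 1
```

These weights would then multiply the per-sample losses (and hence gradients) in an otherwise ordinary momentum SGD step.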
- Rethinking preventing class-collapsing in metric learning with margin-based losses [81.22825616879936]
Metric learning seeks embeddings where visually similar instances are close and dissimilar instances are apart.
Margin-based losses tend to project all samples of a class onto a single point in the embedding space.
We propose a simple modification to the embedding losses such that each sample selects its nearest same-class counterpart in a batch.
arXiv Detail & Related papers (2020-06-09T09:59:25Z)
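
The proposed modification is simple enough to sketch directly: for every sample, pick the nearest same-class embedding in the batch as its positive, so the loss stops pulling every member of a class to one point. The helper below is our illustration of that selection step.

```python
import numpy as np

def nearest_same_class_positive(embeddings, labels):
    """Index of each sample's nearest same-class counterpart in the batch
    (-1 if the sample has no same-class partner)."""
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    positives = np.full(len(labels), -1)
    for i in range(len(labels)):
        same = labels == labels[i]
        same[i] = False                 # exclude the sample itself
        if same.any():
            candidates = np.flatnonzero(same)
            positives[i] = candidates[np.argmin(dists[i, candidates])]
    return positives
```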
- Minority Class Oversampling for Tabular Data with Deep Generative Models [4.976007156860967]
We study the ability of deep generative models to provide realistic samples that improve performance on imbalanced classification tasks via oversampling.
Our experiments show that the sampling method does not affect quality, but runtime varies widely.
We also observe that the improvements in performance metrics, while shown to be significant, are often minor in absolute terms.
arXiv Detail & Related papers (2020-05-07T21:35:57Z)