Adaptive Hierarchical Similarity Metric Learning with Noisy Labels
- URL: http://arxiv.org/abs/2111.00006v1
- Date: Fri, 29 Oct 2021 02:12:18 GMT
- Title: Adaptive Hierarchical Similarity Metric Learning with Noisy Labels
- Authors: Jiexi Yan, Lei Luo, Cheng Deng and Heng Huang
- Abstract summary: We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
- Score: 138.41576366096137
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Metric Learning (DML) plays a critical role in various machine learning
tasks. However, most existing deep metric learning methods with binary
similarity are sensitive to noisy labels, which are widely present in
real-world data. Since these noisy labels often cause severe performance
degradation, it is crucial to enhance the robustness and generalization ability
of DML. In this paper, we propose an Adaptive Hierarchical Similarity Metric
Learning method. It considers two types of noise-insensitive information, i.e.,
class-wise divergence and sample-wise consistency. Specifically, class-wise
divergence can effectively mine similarity information richer than binary labels
by taking advantage of hyperbolic metric learning, while
sample-wise consistency can further improve the generalization ability of the
model using contrastive augmentation. More importantly, we design an adaptive
strategy to integrate this information in a unified view. It is noteworthy that
the new method can be extended to any pair-based metric loss. Extensive
experimental results on benchmark datasets demonstrate that our method achieves
state-of-the-art performance compared with current deep metric learning
approaches.
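The abstract does not spell out the formulation, so the following is a minimal, hypothetical sketch of the hyperbolic ingredient behind class-wise divergence: the geodesic distance on the Poincaré ball, which yields graded (rather than binary) similarities that suit hierarchical structure. The function name and the eps safeguard are illustrative, not the authors' code.

```python
import torch

def poincare_distance(u: torch.Tensor, v: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Geodesic distance on the Poincare ball, a standard model of hyperbolic space.

    u, v: (..., d) points with norm < 1. The distance grows without bound near the
    boundary, giving a graded, hierarchy-aware notion of divergence instead of a
    hard same/different label.
    """
    sq_u = u.pow(2).sum(dim=-1).clamp(max=1.0 - eps)  # keep points strictly inside the ball
    sq_v = v.pow(2).sum(dim=-1).clamp(max=1.0 - eps)
    sq_diff = (u - v).pow(2).sum(dim=-1)
    x = 1.0 + 2.0 * sq_diff / ((1.0 - sq_u) * (1.0 - sq_v))
    return torch.acosh(x.clamp(min=1.0 + eps))        # acosh is monotone in x
```

Because this distance is continuous, it can be substituted for a hard 0/1 similarity inside any pair-based metric loss, which is consistent with the abstract's claim that the method extends to pair-based losses.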
Related papers
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike in semi-supervised learning, one cannot simply select the most probable label as the pseudo-label in SSMLL, because an instance may carry multiple semantics.
We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z)
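For the SSMLL entry above, the reason argmax fails is that an instance may hold several labels, so pseudo-labeling needs one decision per class. A minimal sketch under that assumption (per-class thresholds stand in for the paper's "metric-adaptive" thresholding, whose actual rule is more elaborate; names are hypothetical):

```python
import torch

def multilabel_pseudo_labels(probs: torch.Tensor, thresholds: torch.Tensor) -> torch.Tensor:
    """Per-class thresholding for multi-label pseudo-labels.

    probs: (N, C) predicted label probabilities on unlabeled data.
    thresholds: (C,) one cutoff per class, e.g. tuned on labeled data.
    Returns an (N, C) binary pseudo-label matrix (no argmax involved).
    """
    return (probs >= thresholds).long()
```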
- Relation Modeling and Distillation for Learning with Noisy Labels [4.556974104115929]
This paper proposes a relation modeling and distillation framework that models inter-sample relationships via self-supervised learning.
The proposed framework can learn discriminative representations for noisy data, which results in superior performance compared with existing methods.
arXiv Detail & Related papers (2024-05-30T01:47:27Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We estimate intra-class variations for every class and generate adaptive synthetic samples to support hard sample mining.
Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
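A rough sketch of the intra-class augmentation idea from the IAA entry above: estimate each class's spread from its embeddings and sample synthetic embeddings around the class mean. The neighbor-correction step of IAA is omitted, and the function name and scale parameter are illustrative:

```python
import torch

def synthesize_for_class(class_embeddings: torch.Tensor, n_new: int = 4,
                         scale: float = 0.5) -> torch.Tensor:
    """Generate synthetic embeddings from estimated intra-class variation.

    class_embeddings: (n, d) embeddings of one class (assumes n >= 2 so the
    per-dimension standard deviation is defined).
    Returns (n_new, d) synthetic points to enrich hard-sample mining.
    """
    mean = class_embeddings.mean(dim=0)
    std = class_embeddings.std(dim=0)                  # per-dimension spread
    noise = torch.randn(n_new, class_embeddings.size(1))
    return mean + scale * std * noise                  # samples around the class mean
```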
- Metric Learning as a Service with Covariance Embedding [7.5989847759545155]
Metric learning aims to maximize intra-class similarities and minimize inter-class similarities.
Existing models mainly rely on distance measures to obtain a separable embedding space.
We argue that to enable metric learning as a service for high-performance deep learning applications, we should also wisely deal with inter-class relationships.
arXiv Detail & Related papers (2022-11-28T10:10:59Z)
- Dynamic Loss For Robust Learning [17.33444812274523]
This work presents a novel meta-learning based dynamic loss that automatically adjusts the objective function as training proceeds, in order to robustly learn a classifier from long-tailed noisy data.
Our method achieves state-of-the-art accuracy on multiple real-world and synthetic datasets with various types of data biases, including CIFAR-10/100, Animal-10N, ImageNet-LT, and WebVision.
arXiv Detail & Related papers (2022-11-22T01:48:25Z)
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on datasets with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
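The neighbor-consistency idea above can be sketched as a regularizer that pulls each example's prediction toward the average prediction of its nearest neighbors in feature space; the paper's exact objective differs, and the names below are illustrative:

```python
import torch
import torch.nn.functional as F

def neighbor_consistency_loss(logits: torch.Tensor, features: torch.Tensor,
                              k: int = 5) -> torch.Tensor:
    """KL divergence between each example's prediction and the mean
    prediction of its k nearest neighbors in (cosine) feature space."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t()                        # pairwise cosine similarity
    sim.fill_diagonal_(float('-inf'))              # a point is not its own neighbor
    _, nn_idx = sim.topk(k, dim=1)                 # indices of k nearest neighbors
    log_probs = F.log_softmax(logits, dim=1)
    neighbor_probs = log_probs.exp()[nn_idx].mean(dim=1)  # (N, C) neighbor average
    return F.kl_div(log_probs, neighbor_probs.detach(), reduction='batchmean')
```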
- Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both inter-class and intra-class distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z)
- It Takes Two to Tango: Mixup for Deep Metric Learning [16.60855728302127]
State-of-the-art methods focus mostly on sophisticated loss functions or mining strategies.
Mixup is a powerful data augmentation approach interpolating two or more examples and corresponding target labels at a time.
We show that mixing inputs, intermediate representations or embeddings along with target labels significantly improves representations and outperforms state-of-the-art metric learning methods on four benchmark datasets.
arXiv Detail & Related papers (2021-06-09T11:20:03Z)
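Standard mixup, as used by the entry above, is easy to state in code; the paper's contribution is applying it to inputs, intermediate representations, or embeddings together with the targets. A generic sketch (the Beta concentration alpha = 1.0 is a common default, not the paper's setting):

```python
import torch

def mixup(x: torch.Tensor, y_onehot: torch.Tensor, alpha: float = 1.0):
    """Convexly combine a batch with a shuffled copy of itself.

    x: (N, ...) inputs, intermediate features, or embeddings.
    y_onehot: (N, C) one-hot (or soft) targets, mixed with the same weight.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()   # mixing weight in (0, 1)
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_mix
```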
- Meta Transition Adaptation for Robust Deep Learning with Noisy Labels [61.8970957519509]
This study proposes a new meta transition adaptation strategy for learning with noisy labels.
Specifically, guided by a small set of meta data with clean labels, the noise transition matrix and the classifier parameters can be mutually improved.
Our method extracts the transition matrix more accurately, which naturally leads to more robust performance than prior art.
arXiv Detail & Related papers (2020-06-10T07:27:25Z)
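For context, the standard "forward correction" that a learned transition matrix enables looks roughly like the sketch below; the paper's contribution is estimating T jointly with the classifier via the clean meta set, which is not shown here, and the names are illustrative:

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits: torch.Tensor, noisy_targets: torch.Tensor,
                           T: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against the predicted *noisy*-label distribution.

    T: (C, C) transition matrix with T[i, j] = P(observed label j | true label i).
    Pushing clean-label predictions through T makes the observed noisy labels
    a statistically consistent training target.
    """
    clean_probs = F.softmax(logits, dim=1)        # model's belief over true labels
    noisy_probs = clean_probs @ T                 # implied distribution over noisy labels
    return F.nll_loss(noisy_probs.clamp_min(1e-8).log(), noisy_targets)
```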