Batch Decorrelation for Active Metric Learning
- URL: http://arxiv.org/abs/2005.10008v2
- Date: Sat, 23 May 2020 12:52:04 GMT
- Title: Batch Decorrelation for Active Metric Learning
- Authors: Priyadarshini K, Ritesh Goru, Siddhartha Chaudhuri and Subhasis
Chaudhuri
- Abstract summary: We present an active learning strategy for training parametric models of distance metrics, given triplet-based similarity assessments.
In contrast to prior work on class-based learning, we focus on perceptual metrics that express the degree of (dis)similarity between objects.
- Score: 21.99577268213412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an active learning strategy for training parametric models of
distance metrics, given triplet-based similarity assessments: object $x_i$ is
more similar to object $x_j$ than to $x_k$. In contrast to prior work on
class-based learning, where the fundamental goal is classification and any
implicit or explicit metric is binary, we focus on {\em perceptual} metrics
that express the {\em degree} of (dis)similarity between objects. We find that
standard active learning approaches degrade when annotations are requested for
{\em batches} of triplets at a time: our studies suggest that correlation among
triplets is responsible. In this work, we propose a novel method to {\em
decorrelate} batches of triplets, that jointly balances informativeness and
diversity while decoupling the choice of heuristic for each criterion.
Experiments indicate our method is general, adaptable, and outperforms the
state-of-the-art.
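As a rough illustration of the triplet supervision described in the abstract ("object $x_i$ is more similar to $x_j$ than to $x_k$"), the following is a minimal sketch with an assumed Euclidean embedding distance and hinge margin; it is not the paper's actual parametric model or active-learning strategy:

```python
import numpy as np

def pairwise_dist(emb, a, b):
    # Euclidean distance between the embeddings of points a and b
    return np.linalg.norm(emb[a] - emb[b], axis=-1)

def triplet_loss(emb, triplets, margin=1.0):
    """Mean hinge loss over (i, j, k) triplets: each triplet is
    penalized unless d(x_i, x_j) + margin <= d(x_i, x_k)."""
    i, j, k = triplets.T
    d_ij = pairwise_dist(emb, i, j)
    d_ik = pairwise_dist(emb, i, k)
    return np.maximum(0.0, d_ij - d_ik + margin).mean()

# Toy example: x0 is much closer to x1 than to x2, so the
# triplet (0, 1, 2) is already satisfied and incurs zero loss.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 0.0]])
print(triplet_loss(emb, np.array([[0, 1, 2]])))  # → 0.0
```

A parametric metric would replace the fixed embedding with a learned mapping, with the loss driving its parameters.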
Related papers
- Human-in-the-loop: Towards Label Embeddings for Measuring Classification Difficulty [14.452983136429967]
In supervised learning, uncertainty can already occur in the first stage of the training process, the annotation phase.
The main idea of this work is to drop the assumption of a ground truth label and instead embed the annotations into a multidimensional space.
The methods developed in this paper readily extend to various situations where multiple annotators independently label instances.
arXiv Detail & Related papers (2023-11-15T11:23:15Z)
- DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation [16.038667928358763]
Few-shot learning aims to adapt models trained on the base dataset to novel tasks where the categories were not seen by the model before.
This often leads to a relatively uniform distribution of feature values across channels on novel classes.
We show that the importance ranking of feature channels is a more reliable indicator for few-shot learning than geometric similarity metrics.
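A common way to make a rank correlation like Kendall's tau differentiable is to smooth the sign of each pairwise difference with a saturating surrogate. This is a hypothetical sketch of that general idea; the `tanh` surrogate and the `alpha` temperature are assumptions, not necessarily DiffKendall's exact formulation:

```python
import numpy as np

def soft_kendall(x, y, alpha=10.0):
    """Smooth surrogate of Kendall's tau: replace sign(x_i - x_j)
    with tanh(alpha * (x_i - x_j)) so the score is differentiable
    in x and y, then average over all pairs."""
    n = len(x)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            total += np.tanh(alpha * (x[i] - x[j])) * np.tanh(alpha * (y[i] - y[j]))
    return total / (n * (n - 1) / 2)
```

For well-separated values the surrogate approaches the exact tau: identically ordered sequences score near +1, reversed ones near -1.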
arXiv Detail & Related papers (2023-07-28T05:32:56Z)
- Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z)
- Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z)
- Maximizing Conditional Entropy for Batch-Mode Active Learning of Perceptual Metrics [14.777274711706653]
We present a novel approach for batch mode active metric learning using the Maximum Entropy Principle.
We take advantage of the monotonically increasing submodular entropy function to construct an efficient greedy algorithm.
Our approach is the first batch-mode active metric learning method to define a unified score that balances informativeness and diversity for an entire batch of triplets.
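The informativeness/diversity trade-off behind such greedy batch selection can be sketched generically. The scoring below is hypothetical: the paper's actual criterion is a conditional-entropy score over triplets, which this toy informativeness-minus-redundancy objective only approximates:

```python
import numpy as np

def greedy_batch(info, sim, k, lam=1.0):
    """Greedily select k candidates, trading off each candidate's
    informativeness (info[i]) against its redundancy with the
    already-chosen items (maximum pairwise similarity sim[i][j])."""
    n = len(info)
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            redundancy = max((sim[i][j] for j in chosen), default=0.0)
            score = info[i] - lam * redundancy
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

# Candidates 0 and 1 are highly correlated, so after picking 0 the
# greedy step prefers the less informative but decorrelated item 2.
sim = [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(greedy_batch([1.0, 0.9, 0.2], sim, 2))  # → [0, 2]
```

With a monotone submodular score, this kind of greedy selection carries the usual (1 - 1/e) approximation guarantee.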
arXiv Detail & Related papers (2021-02-15T06:55:17Z)
- Dynamic Semantic Matching and Aggregation Network for Few-shot Intent Detection [69.2370349274216]
Few-shot Intent Detection is challenging due to the scarcity of available annotated utterances.
Semantic components are distilled from utterances via multi-head self-attention.
Our method provides a comprehensive matching measure to enhance representations of both labeled and unlabeled instances.
arXiv Detail & Related papers (2020-10-06T05:16:38Z)
- Unsupervised Deep Metric Learning via Orthogonality based Probabilistic Loss [27.955068939695042]
Existing state-of-the-art metric learning approaches require class labels to learn a metric.
We propose an unsupervised approach that learns a metric without making use of class labels.
The pseudo-labels are used to form triplets of examples, which guide the metric learning.
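Forming triplets from pseudo-labels can be sketched as follows (an illustrative sketch only: the clustering that produces the pseudo-labels is assumed to have already run, and real implementations sample rather than enumerate):

```python
def triplets_from_pseudo_labels(labels):
    """Form (anchor, positive, negative) triplets from pseudo-labels:
    anchor and positive share a label, the negative's label differs."""
    trips = []
    n = len(labels)
    for a in range(n):
        for p in range(n):
            if p == a or labels[p] != labels[a]:
                continue  # positive must be a distinct same-label example
            for neg in range(n):
                if labels[neg] != labels[a]:
                    trips.append((a, p, neg))
    return trips

# Examples 0 and 1 share pseudo-label 0; example 2 is the negative.
print(triplets_from_pseudo_labels([0, 0, 1]))  # → [(0, 1, 2), (1, 0, 2)]
```

The resulting triplets can then feed a standard triplet loss without ever touching ground-truth class labels.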
arXiv Detail & Related papers (2020-08-22T17:13:33Z)
- Metric Learning vs Classification for Disentangled Music Representation Learning [36.74680586571013]
We present a single representation learning framework that elucidates the relationship between metric learning, classification, and disentanglement in a holistic manner.
We find that classification-based models are generally advantageous for training time, similarity retrieval, and auto-tagging, while deep metric learning exhibits better performance for triplet-prediction.
arXiv Detail & Related papers (2020-08-09T13:53:12Z)
- Learning from Aggregate Observations [82.44304647051243]
We study the problem of learning from aggregate observations where supervision signals are given to sets of instances.
We present a general probabilistic framework that accommodates a variety of aggregate observations.
Simple maximum likelihood solutions can be applied to various differentiable models.
arXiv Detail & Related papers (2020-04-14T06:18:50Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
- Learning to Compare Relation: Semantic Alignment for Few-Shot Learning [48.463122399494175]
We present a novel semantic alignment model to compare relations, which is robust to content misalignment.
We conduct extensive experiments on several few-shot learning datasets.
arXiv Detail & Related papers (2020-02-29T08:37:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.