Supervised Categorical Metric Learning with Schatten p-Norms
- URL: http://arxiv.org/abs/2002.11246v1
- Date: Wed, 26 Feb 2020 01:17:12 GMT
- Title: Supervised Categorical Metric Learning with Schatten p-Norms
- Authors: Xuhui Fan, Eric Gaussier
- Abstract summary: We propose a method, called CPML for categorical projected metric learning, to address the problem of metric learning on categorical data.
We make use of the Value Distance Metric to represent our data and propose new distances based on this representation.
We then show how to efficiently learn new metrics.
- Score: 10.995886294197412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Metric learning has been successful in learning new metrics adapted to
numerical datasets. However, its development on categorical data still needs
further exploration. In this paper, we propose a method, called CPML for
\emph{categorical projected metric learning}, that tries to efficiently (i.e.,
with less computational time and better prediction accuracy) address the problem of
metric learning in categorical data. We make use of the Value Distance Metric
to represent our data and propose new distances based on this representation.
We then show how to efficiently learn new metrics. We also generalize several
previous regularizers through the Schatten $p$-norm and provide a
generalization bound for it that complements the standard generalization bound
for metric learning. Experimental results show that our method provides better
prediction accuracy at a lower computational cost than existing approaches.
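As a rough illustration of the two ingredients named in the abstract, the sketch below encodes each categorical value by its vector of class-conditional probabilities, in the spirit of the Value Distance Metric, and evaluates a Schatten $p$-norm $\|M\|_{S_p} = (\sum_i \sigma_i(M)^p)^{1/p}$ (the $\ell_p$ norm of the singular values) as a regularizer. This is a minimal NumPy sketch with hypothetical helper names, not the authors' CPML implementation.

```python
# Minimal sketch only -- hypothetical names, not the authors' CPML code.
import numpy as np

def vdm_encode(X, y, n_classes):
    """VDM-style encoding: represent each categorical value v of attribute j
    by the vector of conditional class probabilities P(c | v)."""
    n, d = X.shape
    Z = np.zeros((n, d * n_classes))
    for j in range(d):
        for v in np.unique(X[:, j]):
            mask = X[:, j] == v
            # P(c | attribute j = v), with add-one smoothing
            probs = np.array([(np.sum(y[mask] == c) + 1.0) / (mask.sum() + n_classes)
                              for c in range(n_classes)])
            Z[mask, j * n_classes:(j + 1) * n_classes] = probs
    return Z

def schatten_p_norm(M, p):
    """Schatten p-norm of M: the l_p norm of its singular values."""
    sigma = np.linalg.svd(M, compute_uv=False)
    return np.sum(sigma ** p) ** (1.0 / p)

# Toy usage: 4 samples, 2 categorical attributes, binary labels.
X = np.array([[0, 1], [1, 0], [0, 0], [1, 1]])
y = np.array([0, 1, 0, 1])
Z = vdm_encode(X, y, n_classes=2)
L = np.random.default_rng(0).normal(size=(Z.shape[1], 2))  # projection matrix
print(schatten_p_norm(L, p=1.5))  # regularizer value for the projection
```

Note that $p=1$ recovers the trace (nuclear) norm and $p=2$ the Frobenius norm, which is the sense in which the Schatten $p$-norm generalizes several previous regularizers.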
Related papers
- Piecewise-Linear Manifolds for Deep Metric Learning [8.670873561640903]
Unsupervised deep metric learning focuses on learning a semantic representation space using only unlabeled data.
We propose to model the high-dimensional data manifold using a piecewise-linear approximation, with each low-dimensional linear piece approximating the data manifold in a small neighborhood of a point.
We empirically show that this similarity estimate correlates better with the ground truth than the similarity estimates of current state-of-the-art techniques.
arXiv Detail & Related papers (2024-03-22T06:22:20Z) - Rapid Adaptation in Online Continual Learning: Are We Evaluating It Right? [135.71855998537347]
We revisit the common practice of evaluating adaptation of Online Continual Learning (OCL) algorithms through the metric of online accuracy.
We show that this metric is unreliable, as even vacuous blind classifiers can achieve unrealistically high online accuracy.
Existing OCL algorithms can also achieve high online accuracy, but perform poorly in retaining useful information.
arXiv Detail & Related papers (2023-05-16T08:29:33Z) - Hyperbolic Vision Transformers: Combining Improvements in Metric Learning [116.13290702262248]
We propose a new hyperbolic-based model for metric learning.
At the core of our method is a vision transformer with output embeddings mapped to hyperbolic space.
We evaluate the proposed model with six different formulations on four datasets.
arXiv Detail & Related papers (2022-03-21T09:48:23Z) - Adaptive neighborhood Metric learning [184.95321334661898]
We propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML).
ANML can be used to learn both the linear and deep embeddings.
The log-exp mean function proposed in our method gives a new perspective from which to review deep metric learning methods.
arXiv Detail & Related papers (2022-01-20T17:26:37Z) - Efficient Nearest Neighbor Language Models [114.40866461741795]
Non-parametric neural language models (NLMs) learn predictive distributions of text utilizing an external datastore.
We show how to achieve up to a 6x speed-up in inference while retaining comparable performance.
arXiv Detail & Related papers (2021-09-09T12:32:28Z) - Meta-Generating Deep Attentive Metric for Few-shot Classification [53.07108067253006]
We present a novel deep metric meta-generation method to generate a specific metric for a new few-shot learning task.
In this study, we structure the metric using a three-layer deep attentive network that is flexible enough to produce a discriminative metric for each task.
We obtain clear performance improvements over state-of-the-art competitors, especially in the challenging cases.
arXiv Detail & Related papers (2020-12-03T02:07:43Z) - MLAS: Metric Learning on Attributed Sequences [13.689383530299502]
Conventional approaches to metric learning mainly focus on learning the Mahalanobis distance metric on data attributes (a minimal sketch of this baseline follows the list below).
We propose a deep learning framework, called MLAS, to learn a distance metric that effectively measures dissimilarities between attributed sequences.
arXiv Detail & Related papers (2020-11-08T19:35:42Z) - Interpretable Locally Adaptive Nearest Neighbors [8.052709336750821]
We develop a method that allows learning locally adaptive metrics.
These local metrics not only improve performance but are naturally interpretable.
We conduct a number of experiments on synthetic data sets and show the method's usefulness on real-world benchmark data sets.
arXiv Detail & Related papers (2020-11-08T05:27:50Z) - Provably Robust Metric Learning [98.50580215125142]
We show that existing metric learning algorithms can result in metrics that are less robust than the Euclidean distance.
We propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations.
Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors.
arXiv Detail & Related papers (2020-06-12T09:17:08Z) - Metric Learning for Ordered Labeled Trees with pq-grams [11.284638114256712]
We propose a new metric learning approach for tree-structured data with pq-grams.
The pq-gram distance is a distance for ordered labeled trees, and has much lower computation cost than the tree edit distance.
We empirically show that the proposed approach achieves competitive results with the state-of-the-art edit distance-based methods.
arXiv Detail & Related papers (2020-03-09T08:04:47Z) - Variational Metric Scaling for Metric-Based Meta-Learning [37.392840869320686]
We recast metric-based meta-learning from a prototypical perspective and develop a variational metric scaling framework.
Our method is end-to-end without any pre-training and can be used as a simple plug-and-play module for existing metric-based meta-algorithms.
arXiv Detail & Related papers (2019-12-26T09:00:36Z)
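Several of the entries above (MLAS, Provably Robust Metric Learning) are framed around the Mahalanobis distance $d_M(x, y) = \sqrt{(x - y)^\top M (x - y)}$ with $M$ positive semidefinite. As a point of reference only, and not any of the listed papers' implementations, here is a minimal sketch:

```python
# Minimal reference sketch of the Mahalanobis distance baseline -- hypothetical
# names, not taken from any of the papers listed above.
import numpy as np

def mahalanobis(x, y, M):
    """d_M(x, y) = sqrt((x - y)^T M (x - y)) for symmetric PSD M."""
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

L = np.array([[1.0, 0.5],
              [0.0, 2.0]])   # a learned linear map
M = L.T @ L                  # M = L^T L is automatically PSD
x, y = np.array([1.0, 2.0]), np.array([0.0, 1.0])
print(mahalanobis(x, y, M))            # distance under the learned metric
print(np.linalg.norm(L @ x - L @ y))   # same value: d_M equals the Euclidean
                                       # distance after projecting by L
```

Writing $M = L^\top L$ makes the connection to projection-based methods explicit: learning $M$ is equivalent to learning a linear map $L$ and using the Euclidean distance in the projected space.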
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.