Metric Learning as a Service with Covariance Embedding
- URL: http://arxiv.org/abs/2211.15197v1
- Date: Mon, 28 Nov 2022 10:10:59 GMT
- Title: Metric Learning as a Service with Covariance Embedding
- Authors: Imam Mustafa Kamal, Hyerim Bae, Ling Liu
- Abstract summary: Metric learning aims to minimize inter-class similarity and maximize intra-class similarity.
Existing models mainly rely on distance measures to obtain a separable embedding space.
We argue that to enable metric learning as a service for high-performance deep learning applications, we should also wisely deal with inter-class relationships.
- Score: 7.5989847759545155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the emergence of deep learning, metric learning has gained significant
popularity in numerous machine learning tasks dealing with complex and
large-scale datasets, such as information retrieval, object recognition and
recommendation systems. Metric learning aims to minimize inter-class similarity
and maximize intra-class similarity. However, existing models mainly rely on distance
measures to obtain a separable embedding space and implicitly maximize the
intra-class similarity while neglecting the inter-class relationship. We argue
that to enable metric learning as a service for high-performance deep learning
applications, we should also wisely deal with inter-class relationships to
obtain a more advanced and meaningful embedding space representation. In this
paper, a novel metric-learning-as-a-service methodology is presented that
incorporates covariance to signify the direction of the linear relationship
between data points in an embedding space. Unlike conventional metric learning,
our covariance-embedding-enhanced approach enables metric learning as a service
to be more expressive for computing similar or dissimilar measures and can
capture positive, negative, or neutral relationships. Extensive experiments
conducted using various benchmark datasets, including natural, biomedical, and
facial images, demonstrate that the proposed model as a service with
covariance-embedding optimizations can obtain higher-quality, more separable,
and more expressive embedding representations than existing models.
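As a rough illustration of the covariance idea (a minimal sketch under assumed definitions, not the paper's actual objective), the snippet below contrasts a conventional distance with a covariance-style measure between two embedding vectors; the signed value can express positive, negative, or near-neutral relationships, which a non-negative distance cannot.

```python
# Minimal illustrative sketch (not the paper's exact formulation): a plain
# Euclidean distance versus a covariance-style measure between two embeddings.
# The sign of the covariance distinguishes positive, negative, and near-neutral
# relationships, which a non-negative distance cannot express.
import numpy as np

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Conventional measure: always non-negative and direction-agnostic."""
    return float(np.linalg.norm(a - b))

def covariance_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Signed measure: > 0 positive, < 0 negative, ~ 0 neutral relationship."""
    a_c, b_c = a - a.mean(), b - b.mean()           # center each embedding
    return float(np.dot(a_c, b_c) / (a.size - 1))   # sample covariance

if __name__ == "__main__":
    x = np.array([0.9, 0.1, 0.8, 0.2])
    y = np.array([0.8, 0.2, 0.9, 0.1])  # varies with x    -> positive covariance
    z = np.array([0.1, 0.9, 0.2, 0.8])  # varies against x -> negative covariance
    print(euclidean_distance(x, y), euclidean_distance(x, z))        # both non-negative
    print(covariance_similarity(x, y), covariance_similarity(x, z))  # signed
```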
Related papers
- Semantic-Enhanced Relational Metric Learning for Recommender Systems [27.330164862413184]
Recently, metric learning methods have received great attention in the recommendation community, inspired by the translation mechanism in knowledge graphs.
We propose a joint Semantic-Enhanced Metric Learning framework to tackle the problem in recommender systems.
Specifically, the semantic signal is first extracted from the target reviews, which contain abundant features and personalized user preferences.
A novel regression model is then designed that leverages the extracted semantic signal to improve the discriminative ability of the original relation-based training process.
arXiv Detail & Related papers (2024-06-07T11:54:50Z) - DiffKendall: A Novel Approach for Few-Shot Learning with Differentiable Kendall's Rank Correlation [16.038667928358763]
Few-shot learning aims to adapt models trained on the base dataset to novel tasks where the categories were not seen by the model before.
Because these categories are unseen during training, feature values tend to be distributed relatively uniformly across channels for novel classes.
We show that the importance ranking of feature channels is a more reliable indicator for few-shot learning than geometric similarity metrics.
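A hedged sketch of the ranking-based comparison described above (DiffKendall itself uses a differentiable approximation of Kendall's rank correlation; the standard non-differentiable statistic from SciPy is used here purely for illustration):

```python
# Compare two feature vectors by the rank agreement of their channels
# (Kendall's tau) rather than a geometric metric such as cosine similarity.
# Illustrative only: the paper's method uses a differentiable approximation.
import numpy as np
from scipy.stats import kendalltau

def channel_rank_similarity(f1: np.ndarray, f2: np.ndarray) -> float:
    """Kendall's tau over channel values: 1 = identical ranking, -1 = reversed."""
    tau, _ = kendalltau(f1, f2)
    return float(tau)

feat_a = np.array([0.9, 0.1, 0.5, 0.3])
feat_b = np.array([0.8, 0.2, 0.6, 0.4])  # same channel ordering as feat_a
print(channel_rank_similarity(feat_a, feat_b))  # close to 1.0
```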
arXiv Detail & Related papers (2023-07-28T05:32:56Z) - Self-Taught Metric Learning without Labels [47.832107446521626]
We present a novel self-taught framework for unsupervised metric learning.
It alternates between predicting class-equivalence relations between data through a moving average of an embedding model and learning the model with the predicted relations as pseudo labels.
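A minimal sketch of this alternating scheme, assuming an exponential-moving-average (EMA) teacher and a cosine-similarity threshold for the pseudo class-equivalence labels (both are illustrative choices, not the authors' exact implementation):

```python
# Illustrative sketch: a momentum (moving-average) copy of the embedding model
# predicts class-equivalence relations that supervise the trainable model.
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.999):
    """Teacher weights follow a moving average of the student weights."""
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def pseudo_relation_loss(student_emb, teacher_emb, threshold=0.8):
    """Teacher similarities above a threshold act as pseudo 'same-class' labels."""
    t_sim = F.cosine_similarity(teacher_emb.unsqueeze(1), teacher_emb.unsqueeze(0), dim=-1)
    pseudo = (t_sim > threshold).float()   # predicted class-equivalence relations
    s_sim = F.cosine_similarity(student_emb.unsqueeze(1), student_emb.unsqueeze(0), dim=-1)
    return F.binary_cross_entropy_with_logits(s_sim, pseudo)
```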
arXiv Detail & Related papers (2022-05-04T05:48:40Z) - Weak Augmentation Guided Relational Self-Supervised Learning [80.0680103295137]
We introduce a novel relational self-supervised learning (ReSSL) framework that learns representations by modeling the relationship between different instances.
Our proposed method employs a sharpened distribution of pairwise similarities among different instances as the relation metric.
Experimental results show that our proposed ReSSL substantially outperforms the state-of-the-art methods across different network architectures.
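A hedged sketch of the relation metric (temperatures and names are illustrative, not the paper's exact values): the similarity distribution of the weakly augmented view, sharpened with a lower temperature, serves as the target for the strongly augmented view.

```python
# Illustrative relation metric: cross-entropy between two instance-relation
# distributions; the weak-view distribution is sharpened with a lower temperature.
import torch
import torch.nn.functional as F

def relational_loss(strong_emb, weak_emb, anchors, t_strong=0.1, t_weak=0.04):
    """strong_emb, weak_emb: (B, D) views of the batch; anchors: (K, D) other instances."""
    p_weak = F.softmax(weak_emb @ anchors.T / t_weak, dim=1).detach()       # sharpened target
    log_p_strong = F.log_softmax(strong_emb @ anchors.T / t_strong, dim=1)  # prediction
    return -(p_weak * log_p_strong).sum(dim=1).mean()
```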
arXiv Detail & Related papers (2022-03-16T16:14:19Z) - Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z) - Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z) - ReSSL: Relational Self-Supervised Learning with Weak Augmentation [68.47096022526927]
Self-supervised learning has achieved great success in learning visual representations without data annotations.
We introduce a novel relational SSL paradigm that learns representations by modeling the relationship between different instances.
Our proposed ReSSL significantly outperforms the previous state-of-the-art algorithms in terms of both performance and training efficiency.
arXiv Detail & Related papers (2021-07-20T06:53:07Z) - Embedding Transfer with Label Relaxation for Improved Metric Learning [43.94511888670419]
We present a novel method for embedding transfer, a task of transferring knowledge of a learned embedding model to another.
Our method exploits pairwise similarities between samples in the source embedding space as the knowledge, and transfers them through a loss used for learning target embedding models.
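A minimal sketch of this kind of relation-based transfer, assuming teacher (source) pairwise similarities act as soft labels in a contrastive-style loss over student (target) distances (the paper's exact label-relaxation objective may differ):

```python
# Illustrative embedding-transfer loss: source-space pairwise similarities
# softly decide whether each student pair is pulled together or pushed apart.
import torch
import torch.nn.functional as F

def embedding_transfer_loss(student_emb, teacher_emb, margin=1.0):
    """student_emb, teacher_emb: (B, D) embeddings of the same batch."""
    t = F.normalize(teacher_emb, dim=1)
    w = ((t @ t.T + 1.0) / 2.0).detach()             # soft relational labels in [0, 1]
    s = F.normalize(student_emb, dim=1)
    d = torch.cdist(s, s)                            # pairwise student distances
    attract = w * d.pow(2)                           # similar pairs -> pulled together
    repel = (1.0 - w) * F.relu(margin - d).pow(2)    # dissimilar pairs -> pushed apart
    return (attract + repel).mean()
```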
arXiv Detail & Related papers (2021-03-27T13:35:03Z) - Memory-Augmented Relation Network for Few-Shot Learning [114.47866281436829]
In this work, we investigate a new metric-learning method, the Memory-Augmented Relation Network (MRN).
In MRN, we choose samples that are visually similar from the working context and perform weighted information propagation to attentively aggregate helpful information from the chosen samples and enhance the query's representation.
We empirically demonstrate that MRN yields significant improvement over its ancestor and achieves competitive or even better performance when compared with other few-shot learning approaches.
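A hedged sketch of the attention-weighted aggregation step, assuming cosine similarity over a stored context of embeddings (the actual MRN memory, sample selection, and gating are more involved):

```python
# Illustrative aggregation: a query embedding is refined by a similarity-weighted
# sum over context embeddings kept in memory (weighted information propagation).
import torch
import torch.nn.functional as F

def memory_augment(query, memory, temperature=0.1):
    """query: (D,), memory: (N, D) -> refined query of shape (D,)."""
    sims = F.cosine_similarity(query.unsqueeze(0), memory, dim=1)   # (N,)
    weights = F.softmax(sims / temperature, dim=0)                  # attention over memory
    aggregated = (weights.unsqueeze(1) * memory).sum(dim=0)         # weighted propagation
    return query + aggregated                                       # enhanced representation
```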
arXiv Detail & Related papers (2020-05-09T10:09:13Z)