Self-Supervised Metric Learning in Multi-View Data: A Downstream Task Perspective
- URL: http://arxiv.org/abs/2106.07138v1
- Date: Mon, 14 Jun 2021 02:34:33 GMT
- Title: Self-Supervised Metric Learning in Multi-View Data: A Downstream Task Perspective
- Authors: Shulei Wang
- Abstract summary: We study how self-supervised metric learning can benefit downstream tasks in the context of multi-view data.
We show that the target distance of metric learning satisfies several desired properties for the downstream tasks.
Our analysis characterizes the improvement by self-supervised metric learning on four commonly used downstream tasks.
- Score: 2.01243755755303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised metric learning has been a successful approach for learning a
distance from an unlabeled dataset. The resulting distance is broadly useful
for improving various distance-based downstream tasks, even when no information
from downstream tasks is utilized in the metric learning stage. To gain
insights into this approach, we develop a statistical framework to
theoretically study how self-supervised metric learning can benefit downstream
tasks in the context of multi-view data. Under this framework, we show that the
target distance of metric learning satisfies several desired properties for the
downstream tasks. On the other hand, our investigation suggests the target
distance can be further improved by moderating each direction's weights. In
addition, our analysis precisely characterizes the improvement by
self-supervised metric learning on four commonly used downstream tasks: sample
identification, two-sample testing, $k$-means clustering, and $k$-nearest
neighbor classification. As a by-product, we propose a simple spectral method
for self-supervised metric learning, which is computationally efficient and
minimax optimal for estimating the target distance. Finally, numerical experiments
are presented to support the theoretical results in the paper.
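The abstract names the ingredients of the proposed estimator (multi-view data, a spectral method, per-direction weight moderation) without spelling out the construction. The NumPy sketch below is one plausible reading, not the paper's estimator: it symmetrizes the cross-covariance between two views, eigendecomposes it to obtain a Mahalanobis-type metric, and exposes a hypothetical `weight_fn` hook for moderating each direction's weight; the function names and the toy data model are assumptions.

```python
import numpy as np

def spectral_metric(X1, X2, weight_fn=None):
    """Sketch of a spectral Mahalanobis-metric estimate from two views
    X1, X2 (each n x d; row i of both views comes from the same sample).
    Illustrative only: not the paper's estimator, and weight_fn
    (per-direction weight moderation) is a hypothetical knob."""
    X1 = X1 - X1.mean(axis=0)
    X2 = X2 - X2.mean(axis=0)
    n = X1.shape[0]
    # Symmetrized cross-covariance: directions along which the two views
    # co-vary carry the shared (signal) structure; view-specific noise is
    # independent across views and averages out.
    C = (X1.T @ X2 + X2.T @ X1) / (2 * n)
    evals, evecs = np.linalg.eigh(C)
    w = np.clip(evals, 0.0, None)              # drop negative (noise) directions
    if weight_fn is not None:
        w = weight_fn(w)                       # moderate each direction's weight
    M = evecs @ np.diag(w) @ evecs.T           # PSD metric matrix
    L = evecs @ np.diag(np.sqrt(w)) @ evecs.T  # square root of M
    return M, L

# Toy multi-view data: a shared latent signal observed twice with noise.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 5))
A = rng.normal(size=(5, 5))
X1 = Z @ A + 0.5 * rng.normal(size=(200, 5))
X2 = Z @ A + 0.5 * rng.normal(size=(200, 5))
M, L = spectral_metric(X1, X2)
```

Since the Euclidean distance after the linear map x -> L @ x equals the learned Mahalanobis distance, the four downstream tasks listed in the abstract (sample identification, two-sample testing, $k$-means clustering, $k$-nearest neighbor classification) can all be run with standard tooling on the transformed data.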
Related papers
- Anchor-aware Deep Metric Learning for Audio-visual Retrieval [11.675472891647255] (arXiv 2024-04-21)
Metric learning aims at capturing the underlying data structure and enhancing the performance of tasks like audio-visual cross-modal retrieval (AV-CMR).
Recent works employ sampling methods to select impactful data points from the embedding space during training.
However, the model training fails to fully explore the space due to the scarcity of training data points.
We propose an innovative Anchor-aware Deep Metric Learning (AADML) method to address this challenge.
- Piecewise-Linear Manifolds for Deep Metric Learning [8.670873561640903] (arXiv 2024-03-22)
Unsupervised deep metric learning focuses on learning a semantic representation space using only unlabeled data.
We propose to model the high-dimensional data manifold using a piecewise-linear approximation, with each low-dimensional linear piece approximating the data manifold in a small neighborhood of a point.
We empirically show that this similarity estimate correlates better with the ground truth than the similarity estimates of current state-of-the-art techniques.
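A minimal sketch of the local-linear idea in this summary, assuming a plain local-PCA construction (the paper's actual method, and the choices of k, r, and the exponential score, are not from the source):

```python
import numpy as np

def local_linear_similarity(X, i, j, k=10, r=2):
    """Fit a rank-r linear piece (local PCA) around point X[i] and score
    X[j] by its residual off that piece: high similarity when X[j] lies
    close to the local approximation of the manifold."""
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(dists)[1:k + 1]]   # small neighborhood of X[i]
    mu = nbrs.mean(axis=0)
    _, _, Vt = np.linalg.svd(nbrs - mu, full_matrices=False)
    B = Vt[:r]                             # basis of the local linear piece
    v = X[j] - mu
    resid = v - B.T @ (B @ v)              # component off the piece
    return float(np.exp(-np.linalg.norm(resid)))
```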
- Few-shot Metric Learning: Online Adaptation of Embedding for Retrieval [37.601607544184915] (arXiv 2022-11-14)
Metric learning aims to build a distance metric, typically by learning an effective embedding function that maps similar objects to nearby points.
Despite recent advances in deep metric learning, it remains challenging for the learned metric to generalize to unseen classes with a substantial domain gap.
We propose a new problem of few-shot metric learning that aims to adapt the embedding function to the target domain with only a few annotated examples.
- Composite Learning for Robust and Effective Dense Predictions [81.2055761433725] (arXiv 2022-10-13)
Multi-task learning promises better model generalization on a target task by jointly optimizing it with an auxiliary task.
We find that jointly training a dense prediction (target) task with a self-supervised (auxiliary) task can consistently improve the performance of the target task, while eliminating the need for labeling auxiliary tasks.
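A hedged PyTorch-style sketch of the training recipe this summary describes: one shared trunk, a supervised dense-prediction head, and a label-free auxiliary head optimized jointly (the two-head model interface, the L1 losses, and the weight `lam` are assumptions, not the paper's setup).

```python
import torch
import torch.nn.functional as F

def composite_step(model, x, y_dense, aux_target, opt, lam=0.1):
    """One joint optimization step: target (dense prediction) loss plus
    a self-supervised auxiliary loss that needs no human labels."""
    pred, aux_pred = model(x)                   # assumed: shared trunk, two heads
    target_loss = F.l1_loss(pred, y_dense)
    aux_loss = F.l1_loss(aux_pred, aux_target)  # e.g. a reconstruction target
    loss = target_loss + lam * aux_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return float(loss)
```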
- Finding Significant Features for Few-Shot Learning using Dimensionality Reduction [0.0] (arXiv 2021-07-06)
This module improves accuracy by allowing the similarity function, given by the metric learning method, to use more discriminative features for classification.
Our method outperforms the metric learning baselines on the miniImageNet dataset by around 2% in accuracy.
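A toy rendering of the idea, assuming the dimensionality-reduction module is plain PCA (the paper's module and the choice of r are not specified in this summary): reduce the support embeddings to their most significant directions, then classify queries by nearest class prototype in the reduced space.

```python
import numpy as np

def reduced_prototype_classify(support, support_y, query, r=16):
    """PCA-reduce support embeddings, then nearest-prototype
    classification of queries in the reduced space."""
    mu = support.mean(axis=0)
    _, _, Vt = np.linalg.svd(support - mu, full_matrices=False)
    P = Vt[:min(r, Vt.shape[0])]                 # top directions
    S = (support - mu) @ P.T
    Q = (query - mu) @ P.T
    classes = np.unique(support_y)
    protos = np.stack([S[support_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(Q[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```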
- ReMP: Rectified Metric Propagation for Few-Shot Learning [67.96021109377809] (arXiv 2020-12-02)
A rectified metric space is learned to maintain the metric consistency from training to testing.
Numerous analyses indicate that a simple modification of the objective can yield substantial performance gains.
The proposed ReMP is effective and efficient, and outperforms the state of the art on various standard few-shot learning datasets.
- Proxy Network for Few Shot Learning [9.529264466445236] (arXiv 2020-09-09)
We propose a few-shot learning algorithm, called the proxy network, under a meta-learning architecture.
We conduct experiments on the CUB and mini-ImageNet datasets in 1-shot-5-way and 5-shot-5-way scenarios.
- Provably Robust Metric Learning [98.50580215125142] (arXiv 2020-06-12)
We show that existing metric learning algorithms can result in metrics that are less robust than the Euclidean distance.
We propose a novel metric learning algorithm to find a Mahalanobis distance that is robust against adversarial perturbations.
Experimental results show that the proposed metric learning algorithm improves both certified robust errors and empirical robust errors.
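The robustness concern has a simple quantitative core: for a Mahalanobis distance, an l2 perturbation of size eps can move the distance by at most sqrt(lambda_max(M)) * eps, so a poorly conditioned M is fragile. The sketch below computes that bound; it is not the paper's certification procedure.

```python
import numpy as np

def mahalanobis(M, x, y):
    d = x - y
    return float(np.sqrt(d @ M @ d))

def worst_case_shift(M, eps):
    """Upper bound on |d_M(x + delta, y) - d_M(x, y)| over ||delta||_2 <= eps,
    by the triangle inequality for the M-norm:
    ||delta||_M <= sqrt(lambda_max(M)) * ||delta||_2."""
    lam_max = np.linalg.eigvalsh(M)[-1]   # largest eigenvalue of M
    return float(np.sqrt(lam_max) * eps)
```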
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838] (arXiv 2020-04-28)
Multi-task learning (MTL) techniques have shown promising results with respect to performance, computation, and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402] (arXiv 2020-02-27)
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
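A minimal sketch of the prototype-update mechanism this summary describes: soft-assign unlabeled queries to class prototypes, then fold them back in, scaled by per-query confidence (here `conf_weights` stands in for the meta-learned confidence; the soft-assignment form and temperature T are assumptions).

```python
import numpy as np

def refine_prototypes(protos, queries, conf_weights=None, T=1.0):
    """Transductive refinement: each query contributes to each class
    prototype in proportion to its (confidence-weighted) soft assignment."""
    d = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=2)
    soft = np.exp(-d / T)
    soft /= soft.sum(axis=1, keepdims=True)   # per-query class posteriors
    if conf_weights is None:
        conf_weights = np.ones(len(queries))  # uniform = plain mean update
    w = soft * conf_weights[:, None]          # confidence-scaled assignments
    num = protos + w.T @ queries              # original proto + weighted queries
    den = 1.0 + w.sum(axis=0)[:, None]
    return num / den                          # updated prototypes
```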
- CONSAC: Robust Multi-Model Fitting by Conditional Sample Consensus [62.86856923633923] (arXiv 2020-01-08)
We present a robust estimator for fitting multiple parametric models of the same form to noisy measurements.
In contrast to previous works, which resorted to hand-crafted search strategies for multiple model detection, we learn the search strategy from data.
The search strategy is learned in a self-supervised manner; we evaluate the proposed algorithm on multi-homography estimation and demonstrate accuracy superior to state-of-the-art methods.