Positive semidefinite support vector regression metric learning
- URL: http://arxiv.org/abs/2008.07739v1
- Date: Tue, 18 Aug 2020 04:45:59 GMT
- Title: Positive semidefinite support vector regression metric learning
- Authors: Lifeng Gu
- Abstract summary: The RAML framework is proposed to handle the metric learning problem in these scenarios.
However, it cannot learn a positive semidefinite distance metric, which is necessary in metric learning.
We propose two methods to overcome this weakness.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing metric learning methods focus on learning a similarity or
distance measure relying on similar and dissimilar relations between sample
pairs. However, pairs of samples cannot be simply identified as similar or
dissimilar in many real-world applications, e.g., multi-label learning, label
distribution learning. To this end, relation alignment metric learning (RAML)
framework is proposed to handle the metric learning problem in those scenarios.
However, the RAML framework uses SVR solvers for optimization and cannot learn a positive
semidefinite distance metric, which is necessary in metric learning. In this paper, we propose
two methods to overcome this weakness. Further, we carry out several experiments on
single-label classification, multi-label classification, and label distribution learning to
demonstrate that the new methods achieve favorable performance compared with the RAML
framework.
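For context, metric learning of this kind typically parameterizes a squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y), which is a valid non-negative measure only when M is positive semidefinite. The sketch below is not the paper's two proposed methods (they are not detailed in this summary); it is only a minimal illustration of one standard remedy, projecting a matrix learned by an unconstrained solver onto the PSD cone by clipping negative eigenvalues. Function names and the toy data are assumptions made here for illustration.

```python
import numpy as np

def project_to_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping
    negative eigenvalues at zero (Frobenius-norm projection)."""
    M = (M + M.T) / 2.0                      # symmetrize first
    eigvals, eigvecs = np.linalg.eigh(M)
    eigvals = np.clip(eigvals, 0.0, None)    # drop the negative spectrum
    return eigvecs @ np.diag(eigvals) @ eigvecs.T

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

# Usage: repair an indefinite metric matrix produced by an unconstrained solver.
rng = np.random.default_rng(0)
M_raw = rng.normal(size=(5, 5))              # hypothetical learned matrix
M_psd = project_to_psd(M_raw)
x, y = rng.normal(size=5), rng.normal(size=5)
print(mahalanobis_sq(x, y, M_psd) >= 0)      # True: distances stay non-negative
```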
Related papers
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike semi-supervised learning, one cannot select the most probable label as the pseudo-label in SSMLL due to multiple semantics contained in an instance.
We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z) - Multiple Instance Learning via Iterative Self-Paced Supervised Contrastive Learning [22.07044031105496]
Learning representations for individual instances when only bag-level labels are available is a challenge in multiple instance learning (MIL)
We propose a novel framework, Iterative Self-paced Supervised Contrastive Learning for MIL Representations (ItS2CLR)
It improves the learned representation by exploiting instance-level pseudo labels derived from the bag-level labels.
arXiv Detail & Related papers (2022-10-17T21:43:32Z) - Adaptive neighborhood Metric learning [184.95321334661898]
We propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML)
ANML can be used to learn both the linear and deep embeddings.
The log-exp mean function proposed in our method gives a new perspective from which to review deep metric learning methods.
arXiv Detail & Related papers (2022-01-20T17:26:37Z) - Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two types of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z) - Deep Relational Metric Learning [84.95793654872399]
This paper presents a deep relational metric learning framework for image clustering and retrieval.
We learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions.
Experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results.
arXiv Detail & Related papers (2021-08-23T09:31:18Z) - Semi-Supervised Metric Learning: A Deep Resurrection [22.918651280720855]
Semi-Supervised DML (SSDML) tries to learn a metric using a few labeled examples and abundantly available unlabeled examples.
We propose a graph-based approach that first propagates the affinities between pairs of examples.
We impose a metricity constraint on the metric parameters, as it leads to better performance.
arXiv Detail & Related papers (2021-05-10T12:28:45Z) - Hierarchical Relationship Alignment Metric Learning [0.0]
We propose a hierarchical relationship alignment metric learning model, HRAML, which uses the concept of relationship alignment to model metric learning problems.
We organize several experiments by learning task and verify that HRAML performs better than many popular methods and the RAML framework.
arXiv Detail & Related papers (2021-03-28T11:10:24Z) - Unsupervised Deep Metric Learning via Orthogonality based Probabilistic Loss [27.955068939695042]
Existing state-of-the-art metric learning approaches require class labels to learn a metric.
We propose an unsupervised approach that learns a metric without making use of class labels.
The pseudo-labels are used to form triplets of examples, which guide the metric learning (a generic triplet-formation sketch appears after this list).
arXiv Detail & Related papers (2020-08-22T17:13:33Z) - Online Metric Learning for Multi-Label Classification [22.484707213499714]
We propose a novel online metric learning paradigm for multi-label classification.
We first propose a new metric for multi-label classification based on $k$-Nearest Neighbour ($k$NN)
arXiv Detail & Related papers (2020-06-12T11:33:04Z) - Boosting Few-Shot Learning With Adaptive Margin Loss [109.03665126222619]
This paper proposes an adaptive margin principle to improve the generalization ability of metric-based meta-learning approaches for few-shot learning problems.
Extensive experiments demonstrate that the proposed method can boost the performance of current metric-based meta-learning approaches.
arXiv Detail & Related papers (2020-05-28T07:58:41Z) - Memory-Augmented Relation Network for Few-Shot Learning [114.47866281436829]
In this work, we investigate a new metric-learning method, Memory-Augmented Relation Network (MRN)
In MRN, we choose samples that are visually similar from the working context, and perform weighted information propagation to attentively aggregate helpful information from the chosen samples to enhance the representation.
We empirically demonstrate that MRN yields significant improvement over its ancestor and achieves competitive or even better performance when compared with other few-shot learning approaches.
arXiv Detail & Related papers (2020-05-09T10:09:13Z)
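The entry on unsupervised deep metric learning above mentions forming triplets from pseudo-labels. The sketch below is only a generic illustration of that idea, a standard triplet margin loss over pseudo-labelled data, not the orthogonality-based probabilistic loss that the paper itself proposes; the helper names and the clustering-derived pseudo-labels are assumptions made here for illustration.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull anchor toward positive,
    push it away from negative by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def triplets_from_pseudo_labels(X, pseudo_labels, rng):
    """Form (anchor, positive, negative) index triplets from pseudo-labels:
    the positive shares the anchor's pseudo-label, the negative does not."""
    labels = np.asarray(pseudo_labels)
    triplets = []
    for a in range(len(X)):
        same = np.where(labels == labels[a])[0]
        same = same[same != a]
        diff = np.where(labels != labels[a])[0]
        if len(same) == 0 or len(diff) == 0:
            continue
        triplets.append((a, rng.choice(same), rng.choice(diff)))
    return triplets

# Usage on toy data with hypothetical pseudo-labels (e.g. from clustering).
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
pseudo = np.array([0, 0, 1, 1, 0, 1, 0, 1])
for a, p, n in triplets_from_pseudo_labels(X, pseudo, rng):
    loss = triplet_margin_loss(X[a], X[p], X[n])  # would drive a metric update
```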