Minimal Learning Machine for Multi-Label Learning
- URL: http://arxiv.org/abs/2305.05518v1
- Date: Tue, 9 May 2023 15:16:50 GMT
- Title: Minimal Learning Machine for Multi-Label Learning
- Authors: Joonas Hämäläinen, Amauri Souza, César L. C. Mattos, João P. P. Gomes, Tommi Kärkkäinen
- Abstract summary: The minimal learning machine, a distance-based supervised method, constructs a predictive model from data.
In this paper, we evaluate how this technique and its core component, the distance mapping, can be adapted to multi-label learning.
The proposed approach is based on combining the distance mapping with an inverse distance weighting.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The minimal learning machine, a distance-based supervised method,
constructs a predictive model from data by learning a mapping between input and
output distance matrices. In this paper, we propose methods and evaluate how this
technique and its core component, the distance mapping, can be adapted to
multi-label learning. The proposed approach is based on combining the distance
mapping with an inverse distance weighting. Although the proposal is one of the
simplest methods in the multi-label learning literature, it achieves
state-of-the-art performance for small to moderate-sized multi-label learning
problems. Besides its simplicity, the proposed method is fully deterministic
and its hyper-parameter can be selected via a ranking loss-based statistic that
has a closed form, thus avoiding conventional cross-validation-based
hyper-parameter tuning. In addition, due to its simple linear distance
mapping-based construction, we demonstrate that the proposed method can assess
predictions' uncertainty for multi-label classification, which is a valuable
capability for data-centric machine learning pipelines.
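The core construction described in the abstract can be sketched in a few lines. The sketch below is a minimal illustration under assumed conventions, not the paper's exact formulation: Euclidean distances, reference points taken as a subset of the training data, a least-squares linear mapping between distance matrices, and squared-inverse-distance weights for the multi-label prediction step. Function names (`fit_mlm`, `predict_mlm_idw`) are hypothetical.

```python
import numpy as np

def fit_mlm(X, Y, ref_idx):
    """Fit the linear distance mapping of a minimal learning machine.

    X: (n, d) input matrix; Y: (n, L) binary label matrix;
    ref_idx: indices of the reference points (assumed to be training points).
    """
    R = X[ref_idx]                                          # input reference points
    T = Y[ref_idx]                                          # output reference points
    Dx = np.linalg.norm(X[:, None] - R[None, :], axis=2)    # (n, k) input distances
    Dy = np.linalg.norm(Y[:, None] - T[None, :], axis=2)    # (n, k) output distances
    B, *_ = np.linalg.lstsq(Dx, Dy, rcond=None)             # linear mapping Dx @ B ≈ Dy
    return R, T, B

def predict_mlm_idw(x, R, T, B, eps=1e-8):
    """Score labels for one input via inverse distance weighting
    over the estimated output-space distances."""
    dx = np.linalg.norm(x - R, axis=1)    # distances from x to input references
    dy_hat = dx @ B                       # estimated distances in output space
    w = 1.0 / (dy_hat**2 + eps)           # squared-inverse-distance weights
    return (w @ T) / w.sum()              # weighted average of reference label vectors
```

Because the mapping is a single linear least-squares fit, the method is deterministic, and the returned scores are convex combinations of the binary reference labels, so they lie in [0, 1] and can be thresholded per label.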
Related papers
- Dual-Decoupling Learning and Metric-Adaptive Thresholding for Semi-Supervised Multi-Label Learning [81.83013974171364]
Semi-supervised multi-label learning (SSMLL) is a powerful framework for leveraging unlabeled data to reduce the expensive cost of collecting precise multi-label annotations.
Unlike in semi-supervised learning, one cannot simply select the most probable label as the pseudo-label in SSMLL, because an instance may contain multiple semantics.
We propose a dual-perspective method to generate high-quality pseudo-labels.
arXiv Detail & Related papers (2024-07-26T09:33:53Z)
- Querying Easily Flip-flopped Samples for Deep Active Learning [63.62397322172216]
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
One effective selection strategy is to base it on the model's predictive uncertainty, which can be interpreted as a measure of how informative a sample is.
This paper proposes the least disagree metric (LDM), defined as the smallest probability of disagreement with the predicted label.
arXiv Detail & Related papers (2024-01-18T08:12:23Z)
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data, and then a minimal number of available labeled data points are assigned to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
- Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment [31.00769817116771]
We propose MIRA, a pseudo-labeling algorithm for unsupervised representation learning inspired by mutual information.
MIRA achieves state-of-the-art performance on various downstream tasks, including the linear/k-NN evaluation and transfer learning.
arXiv Detail & Related papers (2022-11-04T06:49:42Z)
- An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning [58.59343434538218]
We propose a simple but quite effective approach to predict accurate negative pseudo-labels of unlabeled data from an indirect learning perspective.
Our approach can be implemented in just a few lines of code, using only off-the-shelf operations.
arXiv Detail & Related papers (2022-09-28T02:11:34Z)
- Bayesian Evidential Learning for Few-Shot Classification [22.46281648187903]
Few-Shot Classification aims to generalize from base classes to novel classes given very limited labeled samples.
State-of-the-art solutions involve learning to find a good metric and representation space to compute the distance between samples.
Despite the promising accuracy, effectively modeling uncertainty for metric-based FSC methods is still a challenge.
arXiv Detail & Related papers (2022-07-19T03:58:00Z) - Stabilizing Q-learning with Linear Architectures for Provably Efficient
Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z) - Proxy Network for Few Shot Learning [9.529264466445236]
We propose a few-shot learning algorithm, called proxy network, built on the meta-learning framework.
We conduct experiments on CUB and mini-ImageNet datasets in 1-shot-5-way and 5-shot-5-way scenarios.
arXiv Detail & Related papers (2020-09-09T13:28:07Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.