GPU-based Self-Organizing Maps for Post-Labeled Few-Shot Unsupervised Learning
- URL: http://arxiv.org/abs/2009.03665v1
- Date: Fri, 4 Sep 2020 13:22:28 GMT
- Title: GPU-based Self-Organizing Maps for Post-Labeled Few-Shot Unsupervised Learning
- Authors: Lyes Khacef, Vincent Gripon, Benoit Miramond
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Few-shot classification is a challenge in machine learning where the goal is
to train a classifier using a very limited number of labeled examples. This
scenario is likely to occur frequently in real life, for example when data
acquisition or labeling is expensive. In this work, we consider the problem of
post-labeled few-shot unsupervised learning, a classification task where
representations are learned in an unsupervised fashion, to be later labeled
using very few annotated examples. We argue that this problem is very likely to
occur on the edge, when the embedded device directly acquires the data, and the
expert needed to perform labeling cannot be prompted often. To address this
problem, we consider an algorithm consisting of the concatenation of transfer
learning with clustering using Self-Organizing Maps (SOMs). We introduce a
TensorFlow-based implementation to speed-up the process in multi-core CPUs and
GPUs. Finally, we demonstrate the effectiveness of the method using standard
off-the-shelf few-shot classification benchmarks.
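The pipeline the abstract describes (unsupervised SOM clustering of pre-extracted features, then post-labeling the map's neurons with a handful of annotated examples) can be sketched as follows. This is a minimal NumPy illustration of the general technique, not the authors' TensorFlow implementation; the grid size, learning-rate schedule, and neighborhood width are illustrative assumptions.

```python
import numpy as np

def train_som(features, grid=(8, 8), epochs=10, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a Self-Organizing Map to unlabeled feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    d = features.shape[1]
    weights = rng.normal(size=(h * w, d))
    # 2-D coordinates of each neuron on the grid, used for the neighborhood
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    n_steps = epochs * len(features)
    t = 0
    for _ in range(epochs):
        for x in features[rng.permutation(len(features))]:
            lr = lr0 * (1 - t / n_steps)               # decaying learning rate
            sigma = sigma0 * (1 - t / n_steps) + 1e-3  # shrinking neighborhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            nbh = np.exp(-dist2 / (2 * sigma ** 2))    # Gaussian neighborhood
            weights += lr * nbh[:, None] * (x - weights)
            t += 1
    return weights

def label_neurons(weights, few_x, few_y):
    """Post-labeling: give each neuron the label of its nearest annotated example."""
    d2 = ((weights[:, None, :] - few_x[None, :, :]) ** 2).sum(axis=2)
    return few_y[np.argmin(d2, axis=1)]

def classify(weights, neuron_labels, x):
    """Predict with the label of the best-matching neuron."""
    return neuron_labels[np.argmin(((weights - x) ** 2).sum(axis=1))]
```

In the paper's setting, `features` would come from a pre-trained backbone (the transfer-learning step), and `few_x`, `few_y` would be the very few annotated examples available after unsupervised training.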
Related papers
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Rethinking Multiple Instance Learning for Whole Slide Image Classification: A Good Instance Classifier is All You Need [18.832471712088353]
We propose the first instance-level weakly supervised contrastive learning algorithm under the MIL setting.
We also propose an accurate pseudo label generation method through prototype learning.
arXiv Detail & Related papers (2023-07-05T12:44:52Z)
- Semi-Supervised Cascaded Clustering for Classification of Noisy Label Data [0.3441021278275805]
The performance of supervised classification techniques often deteriorates when the data has noisy labels.
Most of the approaches addressing the noisy label data rely on deep neural networks (DNN) that require huge datasets for classification tasks.
We propose a semi-supervised cascaded clustering algorithm to extract patterns and generate a cascaded tree of classes in such datasets.
arXiv Detail & Related papers (2022-05-04T17:42:22Z)
- Weakly Supervised Semantic Segmentation using Out-of-Distribution Data [50.45689349004041]
Weakly supervised semantic segmentation (WSSS) methods are often built on pixel-level localization maps.
We propose a novel source of information to distinguish foreground from background: Out-of-Distribution (OoD) data.
arXiv Detail & Related papers (2022-03-08T05:33:35Z)
- Learning to Detect Instance-level Salient Objects Using Complementary Image Labels [55.049347205603304]
We present the first weakly-supervised approach to the salient instance detection problem.
We propose a novel weakly-supervised network with three branches: a Saliency Detection Branch leveraging class consistency information to locate candidate objects; a Boundary Detection Branch exploiting class discrepancy information to delineate object boundaries; and a Centroid Detection Branch using subitizing information to detect salient instance centroids.
arXiv Detail & Related papers (2021-11-19T10:15:22Z)
- A Closer Look at Self-training for Zero-Label Semantic Segmentation [53.4488444382874]
Being able to segment unseen classes not observed during training is an important technical challenge in deep learning.
Prior zero-label semantic segmentation works approach this task by learning visual-semantic embeddings or generative models.
We propose a consistency regularizer to filter out noisy pseudo-labels by taking the intersections of the pseudo-labels generated from different augmentations of the same image.
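The intersection idea in this summary can be illustrated with a small sketch (names and array shapes are hypothetical): per-pixel pseudo-labels predicted from two augmented views of the same image are kept only where they agree, and conflicting pixels are marked with an ignore index so they do not contribute to the loss.

```python
import numpy as np

IGNORE = 255  # conventional "ignore" index in segmentation losses

def intersect_pseudo_labels(pred_a, pred_b):
    """Keep per-pixel pseudo-labels only where two augmented views agree."""
    agree = pred_a == pred_b
    return np.where(agree, pred_a, IGNORE)

# toy example: 2x2 "images" of class indices from two augmentations
a = np.array([[0, 1], [2, 2]])
b = np.array([[0, 1], [1, 2]])
print(intersect_pseudo_labels(a, b))  # the disagreeing pixel becomes 255
```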
arXiv Detail & Related papers (2021-04-21T14:34:33Z)
- Boosting the Performance of Semi-Supervised Learning with Unsupervised Clustering [10.033658645311188]
We show that ignoring labels altogether for whole epochs intermittently during training can significantly improve performance in the small sample regime.
We demonstrate our method's efficacy in boosting several state-of-the-art SSL algorithms.
arXiv Detail & Related papers (2020-12-01T14:19:14Z)
- Deep Categorization with Semi-Supervised Self-Organizing Maps [0.0]
This article presents a semi-supervised model called the Batch Semi-Supervised Self-Organizing Map (Batch SS-SOM).
The results show that Batch SS-SOM is a good option for semi-supervised classification and clustering.
It performs well in terms of accuracy and clustering error, even with a small number of labeled samples.
arXiv Detail & Related papers (2020-06-17T22:00:04Z)
- Unsupervised Person Re-identification via Softened Similarity Learning [122.70472387837542]
Person re-identification (re-ID) is an important topic in computer vision.
This paper studies the unsupervised setting of re-ID, which does not require any labeled information.
Experiments on two image-based and video-based datasets demonstrate state-of-the-art performance.
arXiv Detail & Related papers (2020-04-07T17:16:41Z)
- Structured Prediction with Partial Labelling through the Infimum Loss [85.4940853372503]
The goal of weak supervision is to enable models to learn using only forms of labelling which are cheaper to collect.
This is a type of incomplete annotation where, for each datapoint, supervision is cast as a set of labels containing the real one.
This paper provides a unified framework based on structured prediction and on the concept of infimum loss to deal with partial labelling.
arXiv Detail & Related papers (2020-03-02T13:59:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.