Semi-supervised dictionary learning with graph regularization and active points
- URL: http://arxiv.org/abs/2009.05964v1
- Date: Sun, 13 Sep 2020 09:24:51 GMT
- Title: Semi-supervised dictionary learning with graph regularization and active points
- Authors: Khanh-Hung Tran, Fred-Maurice Ngole-Mboula, Jean-Luc Starck and Vincent Prost
- Abstract summary: We propose a new semi-supervised dictionary learning method based on two pillars.
On the one hand, we enforce preservation of the manifold structure of the original data in the sparse code space using Locally Linear Embedding.
On the other hand, we train a semi-supervised classifier in the sparse code space.
- Score: 0.19947949439280027
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised Dictionary Learning has gained much interest over the last
decade and has shown significant performance improvements in image classification.
However, supervised learning generally needs a large number of labelled samples
per class to achieve acceptable results. To deal with databases that have only a
few labelled samples per class, semi-supervised learning, which also exploits
unlabelled samples during the training phase, is used. Indeed, unlabelled samples
can help to regularize the learning model, yielding an improvement in
classification accuracy. In this paper, we propose a new semi-supervised
dictionary learning method based on two pillars: on the one hand, we enforce
preservation of the manifold structure of the original data in the sparse code
space using Locally Linear Embedding, which can be considered a regularization of
the sparse codes; on the other hand, we train a semi-supervised classifier in the
sparse code space. We show that our approach provides an improvement over
state-of-the-art semi-supervised dictionary learning methods.
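To make the first pillar concrete, here is a minimal numpy sketch of an LLE-style graph regularizer on sparse codes: reconstruction weights are learned from the original data, and the penalty tr(A M A^T) discourages sparse codes that break the local linear geometry. All function names and the choice of k are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lle_weights(X, k=5, reg=1e-3):
    """LLE reconstruction weights: each row of W reconstructs one sample
    from its k nearest neighbours and sums to one. Illustrative helper."""
    n = X.shape[0]
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    W = np.zeros((n, n))
    for i in range(n):
        nb = idx[i, 1:]                      # drop the point itself
        Z = X[nb] - X[i]                     # neighbours centred on x_i
        G = Z @ Z.T                          # local Gram matrix
        G += reg * np.trace(G) * np.eye(k)   # stabilise the solve
        w = np.linalg.solve(G, np.ones(k))
        W[i, nb] = w / w.sum()               # enforce sum-to-one constraint
    return W

def lle_regularizer(A, W):
    """Graph regularizer tr(A M A^T), M = (I - W)^T (I - W): small when the
    sparse codes A (atoms x samples) reproduce the local geometry of X."""
    I = np.eye(W.shape[0])
    M = (I - W).T @ (I - W)
    return np.trace(A @ M @ A.T)
```

In a full pipeline this term would be added, with a trade-off weight, to the dictionary reconstruction loss and the semi-supervised classification loss; the sketch only shows how the graph regularizer itself is computed.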
Related papers
- Graph-Based Semi-Supervised Segregated Lipschitz Learning [0.21847754147782888]
This paper presents an approach to semi-supervised learning for data classification using Lipschitz learning on graphs.
We develop a graph-based semi-supervised learning framework that leverages the properties of the infinity Laplacian to propagate labels in a dataset where only a few samples are labeled.
arXiv Detail & Related papers (2024-11-05T17:16:56Z)
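As a rough illustration of the infinity-Laplacian label propagation described in the entry above, the sketch below iterates the discrete infinity-harmonic condition u_i = 0.5 * (max_j u_j + min_j u_j) on the unlabeled nodes of a graph; the binary-label setup and all names are assumptions made for the sketch.

```python
import numpy as np

def inf_laplacian_propagate(adj, labels, n_iter=500):
    """Lipschitz-style label propagation: iterate the discrete
    infinity-Laplace condition on unlabeled nodes until (approximate)
    convergence. labels: +1 / -1 for labeled nodes, 0 for unlabeled;
    adj[i] is an array of neighbour indices. Illustrative sketch only."""
    u = labels.astype(float).copy()
    unlabeled = np.where(labels == 0)[0]
    for _ in range(n_iter):
        for i in unlabeled:
            nb = u[adj[i]]
            u[i] = 0.5 * (nb.max() + nb.min())   # infinity-harmonic update
    return np.sign(u)                            # 0 only on unresolved ties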
- Co-training for Low Resource Scientific Natural Language Inference [65.37685198688538]
We propose a novel co-training method that assigns weights to the distantly supervised labels based on the training dynamics of the classifiers.
By assigning importance weights instead of filtering out examples based on an arbitrary threshold on the predicted confidence, we maximize the usage of automatically labeled data.
The proposed method obtains an improvement of 1.5% in Macro F1 over the distant supervision baseline, and substantial improvements over several other strong SSL baselines.
arXiv Detail & Related papers (2024-06-20T18:35:47Z)
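As a loose illustration of weighting (rather than threshold-filtering) distantly supervised labels, the sketch below uses out-of-fold confidence as an importance weight. The actual paper derives weights from the training dynamics of co-trained classifiers, so treat the weighting rule, the 0..C-1 label encoding, and the scikit-learn estimator purely as stand-in assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def confidence_weights(X, y_distant):
    """Importance weight for each distantly supervised label: its
    out-of-fold predicted probability. Assumes integer labels 0..C-1 and
    that every class appears in each stratified fold; a stand-in for the
    paper's training-dynamics-based weights."""
    proba = cross_val_predict(LogisticRegression(max_iter=1000),
                              X, y_distant, cv=5, method="predict_proba")
    return proba[np.arange(len(y_distant)), y_distant]

# Usage sketch: weight samples instead of filtering them by a threshold.
# w = confidence_weights(X, y_distant)
# model = LogisticRegression(max_iter=1000).fit(X, y_distant, sample_weight=w)
```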
- Self-Training for Sample-Efficient Active Learning for Text Classification with Pre-Trained Language Models [3.546617486894182]
We introduce HAST, a new and effective self-training strategy, which is evaluated on four text classification benchmarks.
Results show that it outperforms the reproduced self-training approaches and reaches classification results comparable to previous experiments for three out of four datasets.
arXiv Detail & Related papers (2024-06-13T15:06:11Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- One-bit Supervision for Image Classification: Problem, Solution, and Beyond [114.95815360508395]
This paper presents one-bit supervision, a novel setting of learning with fewer labels, for image classification.
We propose a multi-stage training paradigm and incorporate negative label suppression into an off-the-shelf semi-supervised learning algorithm.
In multiple benchmarks, the learning efficiency of the proposed approach surpasses that of full-bit, semi-supervised supervision.
arXiv Detail & Related papers (2023-11-26T07:39:00Z)
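A tiny sketch of the one-bit interaction described above: the learner guesses a class, the annotator answers only yes/no, and a "no" is kept as a negative label that suppresses that class in later rounds. The function and the way the annotator is simulated are illustrative assumptions.

```python
import numpy as np

def one_bit_round(probs, annotator_label, ruled_out):
    """One round of one-bit supervision: guess the most likely class not
    yet ruled out; the annotator answers only yes/no. A 'no' becomes a
    negative label suppressing the guessed class. Illustrative sketch."""
    masked = np.where(ruled_out, -np.inf, probs)
    guess = int(np.argmax(masked))
    if guess == annotator_label:          # 'yes': full label recovered
        return guess, ruled_out
    ruled_out = ruled_out.copy()
    ruled_out[guess] = True               # 'no': negative label suppression
    return None, ruled_out
```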
- Self-Training: A Survey [5.772546394254112]
Semi-supervised algorithms aim to learn prediction functions from a small set of labeled observations and a large set of unlabeled observations.
Among the existing techniques, self-training methods have undoubtedly attracted greater attention in recent years.
We present self-training methods for binary and multi-class classification, as well as their variants and two related approaches.
arXiv Detail & Related papers (2022-02-24T11:40:44Z)
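For readers new to the topic, here is a generic pseudo-labelling self-training loop of the kind the survey covers, assuming a scikit-learn-style classifier with fit/predict_proba; the confidence threshold and loop structure are illustrative choices, not taken from the survey.

```python
import numpy as np

def self_train(clf, X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    """Generic pseudo-labelling loop: train, label the unlabeled pool,
    keep only confident predictions as new training data, repeat.
    Assumes integer labels 0..C-1 so predict_proba columns align with
    labels; all names are illustrative."""
    X_train, y_train, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(max_rounds):
        clf.fit(X_train, y_train)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= threshold   # trust confident samples only
        if not keep.any():
            break
        X_train = np.vstack([X_train, pool[keep]])
        y_train = np.concatenate([y_train, proba[keep].argmax(axis=1)])
        pool = pool[~keep]                      # shrink the unlabeled pool
    return clf
```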
- Learning with Neighbor Consistency for Noisy Labels [69.83857578836769]
We present a method for learning from noisy labels that leverages similarities between training examples in feature space.
We evaluate our method on datasets with both synthetic (CIFAR-10, CIFAR-100) and realistic (mini-WebVision, Clothing1M, mini-ImageNet-Red) noise.
arXiv Detail & Related papers (2022-02-04T15:46:27Z)
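A minimal numpy sketch of a neighbour-consistency penalty in the spirit of the entry above: each example's prediction is pulled toward a similarity-weighted average of its feature-space neighbours' predictions. The cosine-similarity graph, the KL form of the penalty, and all names are assumptions, not the paper's exact loss.

```python
import numpy as np

def neighbor_consistency_loss(probs, feats, k=10, eps=1e-12):
    """Consistency penalty: KL between each sample's softmax prediction
    and the similarity-weighted average prediction of its k nearest
    neighbours in feature space. Illustrative sketch."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)           # a point is not its own neighbour
    loss = 0.0
    for i in range(len(probs)):
        nb = np.argsort(sim[i])[-k:]         # k most similar samples
        w = np.clip(sim[i, nb], 0, None)
        w /= w.sum() + eps
        target = w @ probs[nb]               # neighbour consensus prediction
        loss += np.sum(target * (np.log(target + eps) - np.log(probs[i] + eps)))
    return loss / len(probs)
```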
- Prototypical Classifier for Robust Class-Imbalanced Learning [64.96088324684683]
We propose Prototypical, which does not require fitting additional parameters given the embedding network.
Prototypical produces balanced and comparable predictions for all classes even though the training set is class-imbalanced.
We test our method on CIFAR-10LT, CIFAR-100LT and Webvision datasets, observing that Prototypical obtains substantial improvements compared with the state of the art.
arXiv Detail & Related papers (2021-10-22T01:55:01Z)
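Since the classifier adds no parameters beyond the embedding network, a nearest-class-mean sketch captures the idea: class prototypes are mean embeddings, and prediction is nearest-prototype assignment. This is a generic prototypical classifier, not necessarily the paper's exact formulation.

```python
import numpy as np

def class_prototypes(emb, y, n_classes):
    """One prototype per class: the mean embedding of its training
    samples. No parameters are fitted beyond the embedding network."""
    return np.stack([emb[y == c].mean(axis=0) for c in range(n_classes)])

def prototypical_predict(emb, prototypes):
    """Nearest-prototype assignment in Euclidean distance, giving
    comparable scores across classes even on imbalanced training sets."""
    d = ((emb[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)
```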
"Model-change" active learning quantifies the resulting change incurred in the classifier by introducing the additional label(s)
We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution.
arXiv Detail & Related papers (2021-10-14T21:47:10Z)
- Semi-Supervised Learning using Siamese Networks [3.492636597449942]
This work explores a new training method for semi-supervised learning that is based on similarity function learning using a Siamese network.
Confident predictions of unlabeled instances are used as true labels for retraining the Siamese network.
For improving unlabeled predictions, local learning with global consistency is also evaluated.
arXiv Detail & Related papers (2021-09-02T09:06:35Z) - Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.