Semi-supervised learning method based on predefined evenly-distributed class centroids
- URL: http://arxiv.org/abs/2001.04092v1
- Date: Mon, 13 Jan 2020 08:03:32 GMT
- Title: Semi-supervised learning method based on predefined evenly-distributed class centroids
- Authors: Qiuyu Zhu and Tiantian Li
- Abstract summary: We use a small number of labeled samples and perform data augmentation on unlabeled samples to achieve image classification.
Our semi-supervised learning method achieves state-of-the-art results: 95.10% accuracy on CIFAR10 with 4000 labeled samples and 97.58% on SVHN with 1000 labeled samples.
- Score: 7.499563097360385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compared to supervised learning, semi-supervised learning reduces the
dependence of deep learning on a large number of labeled samples. In this work,
we use a small number of labeled samples and perform data augmentation on
unlabeled samples to achieve image classification. Our method constrains all
samples toward predefined evenly-distributed class centroids (PEDCC) through the
corresponding loss functions. Specifically, the PEDCC-Loss for labeled samples
and the maximum mean discrepancy (MMD) loss for unlabeled samples are used to
pull the feature distribution closer to the distribution of the PEDCC. Our
method ensures that the inter-class distance is large and the intra-class
distance is small, which makes the classification boundaries between different
classes clearer. Meanwhile, for unlabeled samples, we also use a KL divergence
to constrain the consistency of the network predictions between unlabeled and
augmented samples. Our semi-supervised learning method achieves state-of-the-art
results: 95.10% accuracy on CIFAR10 with 4000 labeled samples and 97.58% on SVHN
with 1000 labeled samples.
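As a reading aid, the snippet below is a minimal, illustrative PyTorch sketch of the three loss terms described in the abstract. It is not the authors' released code: the centroid construction is a simple repulsion heuristic rather than the exact PEDCC generation procedure, the labeled-sample term uses plain cross-entropy over cosine similarities as a stand-in for the PEDCC-Loss, and the kernel width, temperature, and function names are assumptions.

```python
# Illustrative sketch only; hyperparameters and helper names are assumptions.
import torch
import torch.nn.functional as F


def pedcc_centroids(num_classes: int, dim: int, steps: int = 500) -> torch.Tensor:
    """Heuristic approximation of evenly-distributed class centroids on the unit
    hypersphere: push random points apart by gradient descent on a pairwise
    similarity energy. (The paper derives the centroids analytically.)"""
    z = torch.randn(num_classes, dim, requires_grad=True)
    opt = torch.optim.SGD([z], lr=0.5)
    mask = 1.0 - torch.eye(num_classes)
    for _ in range(steps):
        zn = F.normalize(z, dim=1)
        energy = ((zn @ zn.t()) * mask).exp().sum()  # large when centroids are close
        opt.zero_grad()
        energy.backward()
        opt.step()
    return F.normalize(z.detach(), dim=1)


def labeled_loss(features: torch.Tensor, labels: torch.Tensor,
                 centroids: torch.Tensor) -> torch.Tensor:
    """Stand-in for the PEDCC-Loss: cross-entropy over cosine similarities
    to the fixed centroids (the actual PEDCC-Loss adds further constraints)."""
    logits = F.normalize(features, dim=1) @ centroids.t()
    return F.cross_entropy(logits / 0.1, labels)  # 0.1 is an assumed temperature


def mmd_loss(unlabeled_features: torch.Tensor, centroids: torch.Tensor,
             sigma: float = 1.0) -> torch.Tensor:
    """Biased RBF-kernel MMD estimate between the unlabeled feature
    distribution and the fixed PEDCC centroids."""
    def rbf(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        d2 = (a.unsqueeze(1) - b.unsqueeze(0)).pow(2).sum(-1)  # pairwise squared distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    x = F.normalize(unlabeled_features, dim=1)
    return rbf(x, x).mean() + rbf(centroids, centroids).mean() - 2.0 * rbf(x, centroids).mean()


def consistency_loss(logits_unlabeled: torch.Tensor,
                     logits_augmented: torch.Tensor) -> torch.Tensor:
    """KL divergence keeping predictions on an unlabeled sample and on its
    augmented view consistent (the clean view serves as the target)."""
    target = F.softmax(logits_unlabeled.detach(), dim=1)
    log_pred = F.log_softmax(logits_augmented, dim=1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```

A full training step would then combine the terms, e.g. loss = labeled_loss(...) + lambda_mmd * mmd_loss(...) + lambda_kl * consistency_loss(...), with the weights lambda_mmd and lambda_kl treated as hyperparameters.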
Related papers
- Pairwise Similarity Distribution Clustering for Noisy Label Learning [0.0]
Noisy label learning aims to train deep neural networks using a large number of samples with noisy labels.
We propose a simple yet effective sample selection algorithm to divide the training samples into a clean set and a noisy set.
Experimental results on various benchmark datasets, such as CIFAR-10, CIFAR-100 and Clothing1M, demonstrate significant improvements over state-of-the-art methods.
arXiv Detail & Related papers (2024-04-02T11:30:22Z)
- VLM-CPL: Consensus Pseudo Labels from Vision-Language Models for Human Annotation-Free Pathological Image Classification [23.08368823707528]
We present a novel human annotation-free method for pathology image classification by leveraging pre-trained Vision-Language Models (VLMs).
We introduce VLM-CPL, a novel approach based on consensus pseudo labels that integrates two noisy label filtering techniques with a semi-supervised learning strategy.
Experimental results showed that our method obtained an accuracy of 87.1% and 95.1% on the HPH and LC25K datasets, respectively.
arXiv Detail & Related papers (2024-03-23T13:24:30Z)
- Twice Class Bias Correction for Imbalanced Semi-Supervised Learning [59.90429949214134]
We introduce a novel approach called Twice Class Bias Correction (TCBC).
We estimate the class bias of the model parameters during the training process.
We apply a secondary correction to the model's pseudo-labels for unlabeled samples.
arXiv Detail & Related papers (2023-12-27T15:06:36Z)
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Shrinking Class Space for Enhanced Certainty in Semi-Supervised Learning [59.44422468242455]
We propose a novel method dubbed ShrinkMatch to learn from uncertain samples.
For each uncertain sample, it adaptively seeks a shrunk class space, which merely contains the original top-1 class.
We then impose a consistency regularization between a pair of strongly and weakly augmented samples in the shrunk space to strive for discriminative representations.
arXiv Detail & Related papers (2023-08-13T14:05:24Z)
- Deep Metric Learning Assisted by Intra-variance in A Semi-supervised View of Learning [0.0]
Deep metric learning aims to construct an embedding space where samples of the same class are close to each other, while samples of different classes are far away from each other.
This paper designs a self-supervised generative assisted ranking framework that provides a semi-supervised view of an intra-class variance learning scheme for typical supervised deep metric learning.
arXiv Detail & Related papers (2023-04-21T13:30:32Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard samples mining.
Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
- Multi-Class Data Description for Out-of-distribution Detection [25.853322158250435]
Deep-MCDD is effective at detecting out-of-distribution (OOD) samples as well as classifying in-distribution (ID) samples.
By integrating the concept of Gaussian discriminant analysis into deep neural networks, we propose a deep learning objective to learn class-conditional distributions.
arXiv Detail & Related papers (2021-04-02T08:41:51Z)
- Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z)
- Binary classification with ambiguous training data [69.50862982117127]
In supervised learning, we often face ambiguous (A) samples that are difficult to label even by domain experts.
This problem is substantially different from semi-supervised learning since unlabeled samples are not necessarily difficult samples.
arXiv Detail & Related papers (2020-11-05T00:53:58Z)
- Curriculum Labeling: Revisiting Pseudo-Labeling for Semi-Supervised Learning [27.258077365554474]
We revisit the idea of pseudo-labeling in the context of semi-supervised learning.
Pseudo-labeling works by applying pseudo-labels to samples in the unlabeled set.
We obtain 94.91% accuracy on CIFAR-10 using only 4,000 labeled samples, and 68.87% top-1 accuracy on Imagenet-ILSVRC using only 10% of the labeled samples.
arXiv Detail & Related papers (2020-01-16T03:24:27Z)
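For the Curriculum Labeling entry above, the snippet below is a minimal sketch of one pseudo-labeling selection round in PyTorch-style Python. The percentile-based confidence threshold and helper names are assumptions, not the paper's exact recipe.

```python
# Illustrative pseudo-labeling sketch; threshold scheme and names are assumptions.
import torch
import torch.nn.functional as F


@torch.no_grad()
def select_pseudo_labels(model, unlabeled_loader, percentile: float = 0.8,
                         device: str = "cpu"):
    """Predict on the unlabeled set and keep only the most confident predictions.

    Returns (inputs, pseudo_labels) for samples whose maximum softmax score lies
    above the given percentile of all scores; relaxing the percentile over
    successive rounds yields a curriculum from easy to hard samples.
    """
    model.eval()
    all_inputs, all_preds, all_scores = [], [], []
    for x, _ in unlabeled_loader:                    # ground-truth labels are ignored
        probs = F.softmax(model(x.to(device)), dim=1)
        conf, pred = probs.max(dim=1)
        all_inputs.append(x.cpu())
        all_preds.append(pred.cpu())
        all_scores.append(conf.cpu())
    scores = torch.cat(all_scores)
    threshold = torch.quantile(scores, percentile)   # keep the top (1 - percentile) fraction
    keep = scores >= threshold
    return torch.cat(all_inputs)[keep], torch.cat(all_preds)[keep]
```

After each selection round, the classifier would be retrained on the union of the labeled set and the selected pseudo-labeled samples before the next, more permissive round.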
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.