Co-supervised learning paradigm with conditional generative adversarial
networks for sample-efficient classification
- URL: http://arxiv.org/abs/2212.13589v1
- Date: Tue, 27 Dec 2022 19:24:31 GMT
- Title: Co-supervised learning paradigm with conditional generative adversarial
networks for sample-efficient classification
- Authors: Hao Zhen, Yucheng Shi, Jidong J. Yang, and Javad Mohammadpour Velni
- Abstract summary: This paper introduces a sample-efficient co-supervised learning paradigm (SEC-CGAN).
SEC-CGAN is trained alongside the classifier and supplements semantics-conditioned, confidence-aware synthesized examples to the annotated data during the training process.
Experiments demonstrate that the proposed SEC-CGAN outperforms the external classifier GAN (EC-GAN) and a baseline ResNet-18 classifier.
- Score: 8.27719348049333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classification using supervised learning requires annotating a large
amount of class-balanced data for model training and testing. This requirement
has practically limited the scope of applications of supervised learning, in
particular deep learning. To address the issues associated with limited and
imbalanced data,
this paper introduces a sample-efficient co-supervised learning paradigm
(SEC-CGAN), in which a conditional generative adversarial network (CGAN) is
trained alongside the classifier and supplements semantics-conditioned,
confidence-aware synthesized examples to the annotated data during the training
process. In this setting, the CGAN not only serves as a co-supervisor but also
provides complementary quality examples to aid the classifier training in an
end-to-end fashion. Experiments demonstrate that the proposed SEC-CGAN
outperforms the external classifier GAN (EC-GAN) and a baseline ResNet-18
classifier. For the comparison, the classifiers in all of the above methods adopt
the ResNet-18 architecture as the backbone. In particular, on the Street View
House Numbers dataset, using 5% of the training data, SEC-CGAN achieves a test
accuracy of 90.26%, versus 88.59% for EC-GAN and 87.17% for the baseline
classifier; on the highway image dataset, using 10% of the training data,
SEC-CGAN achieves a test accuracy of 98.27%, versus 97.84% for EC-GAN and
95.52% for the baseline classifier.
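To make the paradigm concrete, below is a minimal PyTorch-style sketch of one co-supervised training step as we read it from the abstract: a conditional generator G and discriminator D are updated adversarially, and synthesized examples are admitted into the classifier's batch only when the classifier's prediction agrees with the conditioning label at high confidence. The model interfaces, the agreement-based filter, and the 0.9 threshold are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def sec_cgan_step(G, D, C, opt_G, opt_D, opt_C, x_real, y_real,
                  n_classes=10, z_dim=100, conf_threshold=0.9):
    """One co-supervised step (hypothetical interfaces): update the CGAN's
    D and G, then update classifier C on annotated data supplemented with
    confidence-filtered synthesized examples."""
    bs = x_real.size(0)
    device = x_real.device
    ones = torch.ones(bs, 1, device=device)
    zeros = torch.zeros(bs, 1, device=device)

    # 1) Conditional discriminator: real (image, label) pairs vs. synthesized pairs.
    z = torch.randn(bs, z_dim, device=device)
    y_syn = torch.randint(0, n_classes, (bs,), device=device)  # conditioning labels
    x_syn = G(z, y_syn)
    d_loss = (F.binary_cross_entropy_with_logits(D(x_real, y_real), ones) +
              F.binary_cross_entropy_with_logits(D(x_syn.detach(), y_syn), zeros))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Conditional generator: fool the discriminator on synthesized pairs.
    g_loss = F.binary_cross_entropy_with_logits(D(x_syn, y_syn), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

    # 3) Confidence-aware filter (assumed form): keep synthesized examples whose
    #    classifier prediction matches the conditioning label with high confidence.
    with torch.no_grad():
        probs = F.softmax(C(x_syn), dim=1)
        conf, pred = probs.max(dim=1)
        keep = (pred == y_syn) & (conf > conf_threshold)

    # 4) Classifier: annotated data plus the accepted synthesized examples,
    #    labeled by their conditioning classes.
    c_loss = F.cross_entropy(C(x_real), y_real)
    if keep.any():
        c_loss = c_loss + F.cross_entropy(C(x_syn[keep].detach()), y_syn[keep])
    opt_C.zero_grad(); c_loss.backward(); opt_C.step()
    return d_loss.item(), g_loss.item(), c_loss.item()
```

Because the admitted examples inherit labels from the CGAN's conditioning classes, the classifier can be fed additional, effectively class-balanced data even when the annotated set is small or imbalanced, which is the sample-efficiency argument the abstract makes.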
Related papers
- Self-Supervised Learning Based Handwriting Verification [23.983430206133793]
We show that a ResNet-based Variational Auto-Encoder (VAE) outperforms other generative approaches, achieving 76.3% accuracy.
Using a pre-trained VAE and VICReg for the downstream task of writer verification, we observe relative accuracy improvements of 6.7% and 9%, respectively, over a supervised ResNet-18 baseline with 10% of writer labels.
arXiv Detail & Related papers (2024-05-28T16:11:11Z) - Semi-Supervised SAR ATR Framework with Transductive Auxiliary
Segmentation [16.65792542181861]
We propose a Semi-supervised SAR ATR Framework with transductive Auxiliary Segmentation (SFAS).
SFAS focuses on exploiting the transductive generalization on available unlabeled samples with an auxiliary loss serving as a regularizer.
A recognition performance of 94.18% is achieved with 20 training samples per class, together with accurate segmentation results.
arXiv Detail & Related papers (2023-08-31T11:00:05Z) - Deep Clustering with Features from Self-Supervised Pretraining [16.023354174462774]
A deep clustering model conceptually consists of a feature extractor that maps data points to a latent space, and a clustering head that groups data points into clusters in the latent space.
In the first of the model's two training stages, the feature extractor is trained via self-supervised learning, which helps preserve the cluster structures among the data points.
We propose to replace the first stage with another model that is pretrained on a much larger dataset via self-supervised learning.
arXiv Detail & Related papers (2022-07-27T08:38:45Z) - Class-Aware Contrastive Semi-Supervised Learning [51.205844705156046]
We propose a general method named Class-aware Contrastive Semi-Supervised Learning (CCSSL) to improve pseudo-label quality and enhance the model's robustness in the real-world setting.
Our proposed CCSSL yields significant performance improvements over state-of-the-art SSL methods on the standard datasets CIFAR100 and STL10.
arXiv Detail & Related papers (2022-03-04T12:18:23Z) - LGD: Label-guided Self-distillation for Object Detection [59.9972914042281]
We propose the first self-distillation framework for general object detection, termed LGD (Label-Guided self-Distillation).
Our framework involves sparse label-appearance encoding, inter-object relation adaptation and intra-object knowledge mapping to obtain the instructive knowledge.
Compared with the classical teacher-based method FGFI, LGD not only performs better without requiring a pretrained teacher but also incurs a 51% lower training cost beyond the inherent student learning.
arXiv Detail & Related papers (2021-09-23T16:55:01Z) - Semantic Perturbations with Normalizing Flows for Improved
Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z) - SCARF: Self-Supervised Contrastive Learning using Random Feature
Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning where views are formed by corrupting a random subset of features (see the sketch after this list).
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
arXiv Detail & Related papers (2021-06-29T08:08:33Z) - Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions by achieving better performance than existing approaches across multiple datasets.
arXiv Detail & Related papers (2021-06-17T11:41:30Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Common Spatial Generative Adversarial Networks based EEG Data
Augmentation for Cross-Subject Brain-Computer Interface [4.8276709243429]
Cross-subject application of EEG-based brain-computer interfaces (BCIs) has always been limited by large individual differences and complex characteristics that are difficult to perceive.
We propose a cross-subject EEG classification framework built on a generative adversarial network (GAN) based method named common spatial GAN (CS-GAN).
Our framework provides a promising way to deal with the cross-subject problem and promote the practical application of BCI.
arXiv Detail & Related papers (2021-02-08T10:37:03Z)
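As a concrete illustration of the SCARF entry above, here is a minimal PyTorch-style sketch of its random-feature-corruption view construction; the function name, the 0.6 corruption rate, and the marginal-resampling choice are illustrative assumptions rather than the paper's exact recipe.

```python
import torch

def scarf_views(x: torch.Tensor, corruption_rate: float = 0.6):
    """Build two views of a tabular batch x of shape (batch, features):
    the anchor is the clean row; the positive view corrupts a random
    feature subset of each row by resampling those entries from the
    empirical marginal of the batch."""
    bs, d = x.shape
    mask = torch.rand(bs, d, device=x.device) < corruption_rate  # features to corrupt
    row_idx = torch.randint(0, bs, (bs, d), device=x.device)     # random donor row per cell
    x_marginal = x[row_idx, torch.arange(d, device=x.device)]    # marginal resamples
    x_corrupted = torch.where(mask, x_marginal, x)
    return x, x_corrupted
```

The two views would then pass through a shared encoder and a standard InfoNCE contrastive loss, the part SCARF shares with other contrastive methods.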