GANs for learning from very high class conditional noisy labels
- URL: http://arxiv.org/abs/2010.09577v1
- Date: Mon, 19 Oct 2020 15:01:11 GMT
- Title: GANs for learning from very high class conditional noisy labels
- Authors: Sandhya Tripathi and N Hemachandra
- Abstract summary: We use Generative Adversarial Networks (GANs) to design a class conditional label noise (CCN) robust scheme for binary classification.
It first generates a set of correctly labelled data points from noisy labelled data and 0.1% or 1% clean labels.
- Score: 1.6516902135723865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We use Generative Adversarial Networks (GANs) to design a class conditional
label noise (CCN) robust scheme for binary classification. It first generates a
set of correctly labelled data points from noisy labelled data and 0.1% or 1%
clean labels, such that the generated and true (clean) labelled data
distributions are close; the generated labelled data is then used to learn a good
classifier. The mode collapse problem while generating correct feature-label
pairs and the problem of a skewed feature-to-label dimension ratio ($\sim$ 784:1)
are avoided by using a Wasserstein GAN (WGAN) and a simple change of data
representation. Another WGAN with an information-theoretic flavour on top of the
new representation is also proposed. The major advantage of both schemes is their
significant improvement over existing methods in the presence of very high CCN
rates, without either estimating or cross-validating over the noise rates. We
prove that the KL divergence between the clean and noisy distributions increases
with the noise rate in the symmetric label noise model; the result can be extended
to high CCN rates. This implies that our schemes perform well due to the
adversarial nature of GANs. Further, the use of a generative approach (learning
the clean joint distribution) while handling noise enables our schemes to perform
better than discriminative approaches such as GLC, LDMI and GCE, even when the
classes are highly imbalanced. Using the Friedman F test and the Nemenyi post hoc
test, we show that on high dimensional binary class synthetic, MNIST and Fashion
MNIST datasets, our schemes outperform existing methods and demonstrate
consistent performance across noise rates.
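The monotonicity claim above can be checked numerically in a minimal sketch. Under symmetric label noise with flip rate $\rho$, a clean positive-class probability $p$ becomes $\tilde{p} = p(1-\rho) + (1-p)\rho$, and the KL divergence between the clean and noisy Bernoulli label distributions grows with $\rho$. The choice $p = 0.8$ and the specific noise rates below are illustrative assumptions, not values from the paper.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence KL(Bern(p) || Bern(q)) in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def noisy_prob(p, rho):
    """Positive-class probability after symmetric label noise with flip rate rho."""
    return p * (1 - rho) + (1 - p) * rho

p = 0.8  # assumed clean P(Y=1); any p != 0.5 shows the effect
kls = [kl_bernoulli(p, noisy_prob(p, rho)) for rho in (0.1, 0.2, 0.3, 0.4)]

# As rho approaches 0.5 the noisy distribution is pushed toward uniform,
# so the divergence from the clean distribution strictly increases.
assert all(a < b for a, b in zip(kls, kls[1:]))
```

This is only the label-marginal view of the statement; the paper's result concerns the clean and noisy data distributions, for which the same flip model applies conditionally at each feature point.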