Vanishing Twin GAN: How training a weak Generative Adversarial Network
can improve semi-supervised image classification
- URL: http://arxiv.org/abs/2103.02496v1
- Date: Wed, 3 Mar 2021 16:08:27 GMT
- Authors: Saman Motamed and Farzad Khalvati
- Abstract summary: Generative Adversarial Networks can learn the mapping from random noise to realistic images in a semi-supervised framework.
If an unknown class shares characteristics with the known class(es), GANs can learn to generalize and generate images that resemble both classes.
By training a weak GAN and using its generated images in parallel with the regular GAN, Vanishing Twin training improves semi-supervised image classification in settings where image similarity hurts classification.
- Score: 0.17404865362620794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks can learn the mapping from random
noise to realistic images in a semi-supervised framework. This mapping ability
can be used for semi-supervised image classification to detect images of an
unknown class for which no training data is available for supervised
classification. However, if the unknown class shares characteristics with the
known class(es), GANs can learn to generalize and generate images that look
like both classes. This generalization ability can hinder classification
performance. In this work, we propose the Vanishing Twin GAN. By training a
weak GAN and using its generated images in parallel with the regular GAN,
Vanishing Twin training improves semi-supervised image classification in
settings where image similarity hurts classification.
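The abstract describes the mechanism only at a high level. A toy sketch (pure Python; the generator functions, the `strength` parameter, and the pixel statistics are all hypothetical illustrations, not the authors' architecture) of the core pairing: a deliberately weak twin generator supplies clearly unrealistic samples that stand in for the unknown class when training the discriminator/classifier.

```python
import random

random.seed(0)

# Toy stand-in: an "image" is a list of D pixel values in [0, 1].
D = 4

def make_generator(strength):
    """Hypothetical toy generator: pulls noise toward the real-class mean (0.8).
    `strength` in [0, 1] controls how realistic the output gets; the *weak*
    twin uses a low strength, so its samples stay visibly unrealistic."""
    def g():
        return [strength * 0.8 + (1 - strength) * random.random()
                for _ in range(D)]
    return g

regular_g = make_generator(strength=0.9)  # the usual GAN generator
weak_g = make_generator(strength=0.2)     # the "vanishing twin"

# Vanishing Twin idea (sketch): the discriminator/classifier is trained on
#   real known-class images -> label 1
#   weak-twin fakes         -> label 0 (proxy for the unknown class)
# so the decision boundary is anchored by clearly fake samples rather than
# the regular GAN's too-realistic ones, which blur similar classes.
real = [[0.8 + random.uniform(-0.05, 0.05) for _ in range(D)]
        for _ in range(32)]
fakes = [weak_g() for _ in range(32)]
dataset = [(x, 1) for x in real] + [(x, 0) for x in fakes]
```

The point of the sketch is the data pairing, not the toy generators: because the unknown-class label is anchored to the weak twin's obviously fake output, a well-trained regular generator cannot pull the classifier's boundary toward the known class.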
Related papers
- Image-free Classifier Injection for Zero-Shot Classification [72.66409483088995]
Zero-shot learning models achieve remarkable results on image classification for samples from classes that were not seen during training.
We aim to equip pre-trained models with zero-shot classification capabilities without the use of image data.
We achieve this with our proposed Image-free Injection with Semantics (ICIS)
arXiv Detail & Related papers (2023-08-21T09:56:48Z)
- Using a Conditional Generative Adversarial Network to Control the Statistical Characteristics of Generated Images for IACT Data Analysis [55.41644538483948]
We divide images into several classes according to the value of some property of the image, and then specify the required class when generating new images.
In the case of images from Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the total brightness of all image pixels (image size)
We used a cGAN technique to generate images similar to those obtained in the TAIGA-IACT experiment.
arXiv Detail & Related papers (2022-11-28T22:30:33Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the gradients semantics-aware, enabling the synthesis of plausible images.
We show that our method also applies to text-to-image generation by incorporating image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- Attribute Group Editing for Reliable Few-shot Image Generation [85.52840521454411]
We propose a new editing-based method, i.e., Attribute Group Editing (AGE), for few-shot image generation.
AGE examines the internal representation learned in GANs and identifies semantically meaningful directions.
arXiv Detail & Related papers (2022-03-16T06:54:09Z)
- Match What Matters: Generative Implicit Feature Replay for Continual Learning [0.0]
We propose GenIFeR (Generative Implicit Feature Replay) for class-incremental learning.
The main idea is to train a generative adversarial network (GAN) to generate images that contain realistic features.
We empirically show that GenIFeR is superior to both conventional generative image and feature replay.
arXiv Detail & Related papers (2021-06-09T19:29:41Z)
- GAN for Vision, KG for Relation: a Two-stage Deep Network for Zero-shot Action Recognition [33.23662792742078]
We propose a two-stage deep neural network for zero-shot action recognition.
In the sampling stage, we utilize a generative adversarial network (GAN) trained on action features and word vectors of seen classes.
In the classification stage, we construct a knowledge graph based on the relationship between word vectors of action classes and related objects.
arXiv Detail & Related papers (2021-05-25T09:34:42Z)
- Multi-class Generative Adversarial Nets for Semi-supervised Image Classification [0.17404865362620794]
We show how similar images cause the GAN to generalize, leading to poor image classification.
We propose a modification to the traditional training of GANs that allows for improved multi-class classification in similar classes of images in a semi-supervised learning framework.
arXiv Detail & Related papers (2021-02-13T15:26:17Z)
- Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with only a marginal drop in performance on the original classification task.
arXiv Detail & Related papers (2021-01-15T10:23:12Z)
- Improving Explainability of Image Classification in Scenarios with Class Overlap: Application to COVID-19 and Pneumonia [7.372797734096181]
Trust in predictions made by machine learning models is increased if the model generalizes well on previously unseen samples.
We propose a method that enhances the explainability of image classifications through better localization by mitigating the model uncertainty induced by class overlap.
Our method is particularly promising in real-world class overlap scenarios, such as COVID-19 and pneumonia, where expertly labeled data for localization is not readily available.
arXiv Detail & Related papers (2020-08-06T20:47:36Z)
- Fine-Grained Visual Classification with Efficient End-to-end Localization [49.9887676289364]
We present an efficient localization module that can be fused with a classification network in an end-to-end setup.
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft.
arXiv Detail & Related papers (2020-05-11T14:07:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.