Multi-class Generative Adversarial Nets for Semi-supervised Image
Classification
- URL: http://arxiv.org/abs/2102.06944v1
- Date: Sat, 13 Feb 2021 15:26:17 GMT
- Title: Multi-class Generative Adversarial Nets for Semi-supervised Image
Classification
- Authors: Saman Motamed and Farzad Khalvati
- Abstract summary: We show how similar images cause the GAN to generalize, leading to poor classification of those images.
We propose a modification to the traditional training of GANs that allows for improved multi-class classification in similar classes of images in a semi-supervised learning framework.
- Score: 0.17404865362620794
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: From generating never-before-seen images to domain adaptation, applications
of Generative Adversarial Networks (GANs) spread wide in the domain of vision
and graphics problems. With the remarkable ability of GANs in learning the
distribution and generating images of a particular class, they can be used for
semi-supervised classification tasks. However, the problem is that if two
classes of images share similar characteristics, the GAN might learn to
generalize and hinder the classification of the two classes. In this paper, we
use various images from MNIST and Fashion-MNIST datasets to illustrate how
similar images cause the GAN to generalize, leading to the poor classification
of images. We propose a modification to the traditional training of GANs that
allows for improved multi-class classification in similar classes of images in
a semi-supervised learning framework.
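The abstract describes using a GAN for semi-supervised classification. A common generic formulation of this idea (Salimans et al., 2016, not necessarily the modification this paper proposes) gives the discriminator K+1 outputs: K real classes plus one "fake" class. The sketch below illustrates that generic loss in NumPy; all function and variable names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def k_plus_one_losses(logits_real, labels, logits_fake, num_classes):
    # Discriminator emits num_classes + 1 logits: K real classes plus an
    # extra "fake" class at index num_classes (standard semi-supervised
    # GAN formulation; illustrative, not this paper's exact objective).
    p_real = softmax(logits_real)
    p_fake = softmax(logits_fake)
    eps = 1e-12
    # Supervised term: cross-entropy of labeled real images over K classes.
    supervised = -np.log(p_real[np.arange(len(labels)), labels] + eps).mean()
    # Unsupervised terms: real images should avoid the fake class;
    # generated images should be assigned to it.
    unsup_real = -np.log(1.0 - p_real[:, num_classes] + eps).mean()
    unsup_fake = -np.log(p_fake[:, num_classes] + eps).mean()
    return supervised + unsup_real + unsup_fake
```

When two real classes look alike, the class logits become ambiguous, which is one way to see how similarity degrades the classification the paper analyzes.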
Related papers
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy to bolster image classification performance is through augmenting the training set with synthetic images generated by text-to-image (T2I) models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
arXiv Detail & Related papers (2024-03-28T17:23:45Z)
- Using a Conditional Generative Adversarial Network to Control the Statistical Characteristics of Generated Images for IACT Data Analysis [55.41644538483948]
We divide images into several classes according to the value of some property of the image, and then specify the required class when generating new images.
In the case of images from Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the total brightness of all image pixels (image size)
We used a cGAN technique to generate images similar to those obtained in the TAIGA-IACT experiment.
arXiv Detail & Related papers (2022-11-28T22:30:33Z)
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes gradients semantic-aware in order to synthesize plausible images.
We show that our method is also applicable to text-to-image generation by leveraging image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- Polymorphic-GAN: Generating Aligned Samples across Multiple Domains with Learned Morph Maps [94.10535575563092]
We introduce a generative adversarial network that can simultaneously generate aligned image samples from multiple related domains.
We propose Polymorphic-GAN which learns shared features across all domains and a per-domain morph layer to morph shared features according to each domain.
arXiv Detail & Related papers (2022-06-06T21:03:02Z)
- Attribute Group Editing for Reliable Few-shot Image Generation [85.52840521454411]
We propose a new editing-based method, i.e., Attribute Group Editing (AGE), for few-shot image generation.
AGE examines the internal representation learned in GANs and identifies semantically meaningful directions.
arXiv Detail & Related papers (2022-03-16T06:54:09Z)
- Local and Global GANs with Semantic-Aware Upsampling for Image Generation [201.39323496042527]
We consider generating images using local context.
We propose a class-specific generative network using semantic maps as guidance.
Lastly, we propose a novel semantic-aware upsampling method.
arXiv Detail & Related papers (2022-02-28T19:24:25Z)
- Vanishing Twin GAN: How training a weak Generative Adversarial Network can improve semi-supervised image classification [0.17404865362620794]
Generative Adversarial Networks can learn the mapping of random noise to realistic images in a semi-supervised framework.
If an unknown class shares similar characteristics to the known class(es), GANs can learn to generalize and generate images that look like both classes.
By training a weak GAN and using its generated output images in parallel with the regular GAN, Vanishing Twin training improves semi-supervised image classification in cases where image similarity would otherwise hurt the classification task.
arXiv Detail & Related papers (2021-03-03T16:08:27Z)
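The Vanishing Twin summary above describes running a deliberately weak GAN in parallel with the regular one. One plausible reading (an assumption, not the paper's stated objective) is that the weak twin's low-quality, over-generalized samples serve as extra negatives for the discriminator. The sketch below illustrates that reading with a simple binary discriminator loss; all names are hypothetical.

```python
import numpy as np

def discriminator_loss(d_real, d_fake_main, d_fake_twin):
    # Binary discriminator loss with fakes from two sources: the main
    # generator and a deliberately weak "vanishing twin" generator.
    # Assumption (illustrative only): twin samples resemble the
    # over-generalized look-alike class and act as additional negatives,
    # pushing the discriminator to reject merely-similar images.
    eps = 1e-12
    real_term = -np.log(d_real + eps).mean()
    fake_term = -np.log(1.0 - d_fake_main + eps).mean()
    twin_term = -np.log(1.0 - d_fake_twin + eps).mean()
    return real_term + fake_term + twin_term
```

The exact Vanishing Twin objective is given in the cited paper; this block only shows how a second fake stream slots into a standard GAN discriminator loss.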
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.