Multiclass non-Adversarial Image Synthesis, with Application to
Classification from Very Small Sample
- URL: http://arxiv.org/abs/2011.12942v2
- Date: Tue, 1 Dec 2020 10:29:21 GMT
- Title: Multiclass non-Adversarial Image Synthesis, with Application to
Classification from Very Small Sample
- Authors: Itamar Winter, Daphna Weinshall
- Abstract summary: We present a novel non-adversarial generative method - Clustered Optimization of LAtent space (COLA).
In the full data regime, our method is capable of generating diverse multi-class images with no supervision.
In the small-data regime, where only a small sample of labeled images is available for training with no access to additional unlabeled data, our results surpass state-of-the-art GAN models trained on the same amount of data.
- Score: 6.243995448840211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The generation of synthetic images is currently being dominated by Generative
Adversarial Networks (GANs). Despite their outstanding success in generating
realistic-looking images, they still suffer from major drawbacks, including an
unstable and highly sensitive training procedure, mode-collapse and
mode-mixture, and dependency on large training sets. In this work we present a
novel non-adversarial generative method - Clustered Optimization of LAtent
space (COLA), which overcomes some of the limitations of GANs, and outperforms
GANs when training data is scarce. In the full data regime, our method is
capable of generating diverse multi-class images with no supervision,
surpassing previous non-adversarial methods in terms of image quality and
diversity. In the small-data regime, where only a small sample of labeled
images is available for training with no access to additional unlabeled data,
our results surpass state-of-the-art GAN models trained on the same amount of
data. Finally, when utilizing our model to augment small datasets, we surpass
the state-of-the-art performance in small-sample classification tasks on
challenging datasets, including CIFAR-10, CIFAR-100, STL-10 and Tiny-ImageNet.
A theoretical analysis supporting the essence of the method is presented.
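No implementation details are given beyond the method's name, but COLA belongs to the family of non-adversarial, reconstruction-driven generators (GLO, IMLE and relatives). As a hedged illustration of that family, the sketch below jointly optimizes per-image latent codes and a small decoder, then clusters the learned latent space to expose multi-class structure; every architecture choice, size, and loss here is an assumption, not the authors' algorithm.

```python
# Minimal GLO-style sketch of non-adversarial image synthesis with latent
# clustering. Hypothetical throughout; not the authors' COLA implementation.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

n_images, z_dim = 1000, 64
images = torch.rand(n_images, 3, 32, 32)               # stand-in for real data

generator = nn.Sequential(                             # toy decoder
    nn.Linear(z_dim, 256), nn.ReLU(),
    nn.Linear(256, 3 * 32 * 32), nn.Sigmoid(),
)
latents = nn.Parameter(torch.randn(n_images, z_dim))   # one free code per image

opt = torch.optim.Adam([*generator.parameters(), latents], lr=1e-3)
for step in range(200):
    idx = torch.randint(0, n_images, (128,))
    recon = generator(latents[idx]).view(-1, 3, 32, 32)
    loss = nn.functional.mse_loss(recon, images[idx])  # no discriminator at all
    opt.zero_grad()
    loss.backward()
    opt.step()

# Cluster the learned codes to expose multi-class structure without labels,
# then sample near a cluster center to synthesize a new image of that class.
centers = KMeans(n_clusters=10, n_init=10).fit(latents.detach().numpy()).cluster_centers_
z_new = torch.tensor(centers[0], dtype=torch.float32) + 0.1 * torch.randn(z_dim)
sample = generator(z_new).view(3, 32, 32)
```

Sampling around per-cluster centers, rather than from an arbitrary prior, is what lets a small labeled sample go further: new training images can be drawn class by class.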
Related papers
- Rejection Sampling IMLE: Designing Priors for Better Few-Shot Image
Synthesis [7.234618871984921]
An emerging area of research aims to learn deep generative models with limited training data.
We propose RS-IMLE, a novel approach that changes the prior distribution used for training.
This leads to substantially higher quality image generation compared to existing GAN and IMLE-based methods.
arXiv Detail & Related papers (2024-09-26T00:19:42Z)
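As a hedged sketch of what changing the prior by rejection sampling can look like in IMLE-style training: candidate latents whose outputs already fall close to the training set are discarded, so optimization focuses on poorly covered regions. The acceptance threshold and distance criterion below are illustrative assumptions, not RS-IMLE's exact rule.

```python
# Rejection-sampled prior for IMLE-style training (illustrative criterion).
import torch

def rejected_prior_samples(generator, data, n_candidates=512, eps=0.5, z_dim=64):
    # data: (n_data, D) flattened images; generator maps (m, z_dim) -> (m, D)
    z = torch.randn(n_candidates, z_dim)               # base prior N(0, I)
    with torch.no_grad():
        d = torch.cdist(generator(z), data).min(dim=1).values
    return z[d > eps]                                  # drop near-duplicates

def imle_step(generator, data, opt, z):
    # IMLE objective: each data point pulls its nearest generated sample closer.
    with torch.no_grad():
        nearest = torch.cdist(data, generator(z)).argmin(dim=1)
    loss = ((generator(z[nearest]) - data) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```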
- DataDream: Few-shot Guided Dataset Generation [90.09164461462365]
We propose a framework for synthesizing classification datasets that more faithfully represent the real data distribution.
DataDream fine-tunes LoRA weights for the image generation model on the few real images before generating the training data using the adapted model.
We then fine-tune LoRA weights for CLIP using the synthetic data to improve downstream image classification over previous approaches on a large variety of datasets.
arXiv Detail & Related papers (2024-07-15T17:10:31Z)
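DataDream's adaptation step relies on LoRA; as a generic illustration of that mechanic (not the paper's code), a low-rank adapter adds a trainable update B @ A to a frozen pretrained weight, so only a few parameters are fit on the handful of real images:

```python
# From-scratch LoRA layer: frozen base weight plus trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():               # freeze pretrained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts at 0
        self.scale = alpha / rank                      # standard LoRA scaling

    def forward(self, x):
        # frozen path plus low-rank update: x W^T + scale * x A^T B^T
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
out = layer(torch.randn(2, 512))                       # only A and B receive grads
```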
- Additional Look into GAN-based Augmentation for Deep Learning COVID-19 Image Classification [57.1795052451257]
We study the dependence of GAN-based augmentation performance on dataset size, with a focus on small samples.
We train StyleGAN2-ADA on both datasets and, after validating the quality of the generated images, use the trained GANs as one of the augmentation approaches in multi-class classification problems.
The GAN-based augmentation approach is found to be comparable with classical augmentation on medium and large datasets, but underperforms on smaller ones.
arXiv Detail & Related papers (2024-01-26T08:28:13Z)
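The augmentation step can be pictured as follows: with some probability, a real training image is replaced by one drawn from the trained generator for the same class. This is a generic sketch with an assumed class-conditional gen(z, y) signature, not the StyleGAN2-ADA pipeline from the paper.

```python
# Generic GAN-as-augmentation dataset: mixes real and generated samples.
import torch
from torch.utils.data import Dataset

class GanAugmentedDataset(Dataset):
    def __init__(self, real_images, labels, gen, p_synth=0.3, z_dim=64):
        self.real, self.labels = real_images, labels
        self.gen, self.p, self.z_dim = gen, p_synth, z_dim  # gen(z, y) assumed

    def __len__(self):
        return len(self.real)

    def __getitem__(self, i):
        y = self.labels[i]
        if torch.rand(()) < self.p:                    # swap in a synthetic image
            with torch.no_grad():
                z = torch.randn(1, self.z_dim)
                return self.gen(z, y).squeeze(0), y
        return self.real[i], y                         # otherwise keep the real one
```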
- Scaling Laws of Synthetic Images for Model Training ... for Now [54.43596959598466]
We study the scaling laws of synthetic images generated by state-of-the-art text-to-image models.
We observe that synthetic images demonstrate a scaling trend similar to, but slightly less effective than, real images in CLIP training.
arXiv Detail & Related papers (2023-12-07T18:59:59Z)
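"Scaling trend" here refers to how error falls with training-set size, usually summarized by fitting a power law of the form err ~ a * n^(-b) + c. A minimal fit, with made-up numbers purely for illustration:

```python
# Fit a saturating power law to (dataset size, error) points; data are invented.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * n ** (-b) + c

n = np.array([1e4, 3e4, 1e5, 3e5, 1e6])               # training-set sizes
err = np.array([0.52, 0.44, 0.37, 0.33, 0.30])        # hypothetical errors

(a, b, c), _ = curve_fit(power_law, n, err, p0=(1.0, 0.3, 0.2), maxfev=10000)
print(f"err ~ {a:.2f} * n^(-{b:.2f}) + {c:.2f}")      # smaller b = weaker scaling
```

A synthetic-image curve with a smaller fitted exponent b than the real-image curve is exactly the "similar but slightly less effective" trend described above.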
- Improving the Effectiveness of Deep Generative Data [5.856292656853396]
Training a model on purely synthetic images for downstream image processing tasks results in an undesired performance drop compared to training on real data.
We propose a new taxonomy to describe factors contributing to this commonly observed phenomenon and investigate it on the popular CIFAR-10 dataset.
Our method outperforms baselines on downstream classification tasks both when training on synthetic data only (Synthetic-to-Real) and when training on a mix of real and synthetic data.
arXiv Detail & Related papers (2023-11-07T12:57:58Z)
- On quantifying and improving realism of images generated with diffusion [50.37578424163951]
We propose a metric, called Image Realism Score (IRS), computed from five statistical measures of a given image.
IRS is easily usable as a measure to classify a given image as real or fake.
We experimentally establish the model- and data-agnostic nature of the proposed IRS by successfully detecting fake images generated by Stable Diffusion Model (SDM), Dalle2, Midjourney and BigGAN.
Our efforts have also led to the Gen-100 dataset, which provides 1,000 samples for 100 classes generated by four high-quality models.
arXiv Detail & Related papers (2023-09-26T08:32:55Z)
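The summary does not name the five statistical measures, so the sketch below uses clearly hypothetical stand-ins just to show the shape of such a score: several per-image statistics combined into one scalar that can be thresholded into a real/fake call.

```python
# Hypothetical stand-in for an image realism score; the statistics and
# weighting below are NOT the five measures used by IRS.
import numpy as np

def toy_realism_score(img: np.ndarray) -> float:
    """img: float array in [0, 1] with shape (H, W, 3)."""
    stats = np.array([
        img.mean(),                                    # global brightness
        img.std(),                                     # global contrast
        np.abs(np.diff(img, axis=0)).mean(),           # vertical gradient energy
        np.abs(np.diff(img, axis=1)).mean(),           # horizontal gradient energy
        np.corrcoef(img[..., 0].ravel(), img[..., 1].ravel())[0, 1],  # channel corr.
    ])
    weights = np.ones_like(stats) / len(stats)         # placeholder equal weighting
    return float(weights @ stats)

# In practice the decision threshold would be calibrated on known-real images.
```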
- No Data Augmentation? Alternative Regularizations for Effective Training on Small Datasets [0.0]
We study alternative regularization strategies to push the limits of supervised learning on small image classification datasets.
In particular, we employ an agnostic selection criterion, based on the norm of the model parameters, to choose (semi-)optimal learning rate and weight decay pairs.
We reach a test accuracy of 66.5%, on par with the best state-of-the-art methods.
arXiv Detail & Related papers (2023-09-04T16:13:59Z)
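The summary only hints at the selection rule (via the norm of the model parameters), so the following is a hedged sketch: probe a small grid of (learning rate, weight decay) pairs with short training runs and record the final weight norm of each, which can then drive the selection.

```python
# Probe (learning rate, weight decay) pairs and record final parameter norms.
# The actual selection rule from the paper is not specified in the summary.
import torch
import torch.nn as nn

def weight_norm(model):
    return torch.sqrt(sum((p ** 2).sum() for p in model.parameters())).item()

def probe(lr, wd, data, targets, steps=50):
    model = nn.Linear(32, 10)                          # toy classifier
    opt = torch.optim.SGD(model.parameters(), lr=lr, weight_decay=wd)
    for _ in range(steps):
        loss = nn.functional.cross_entropy(model(data), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return weight_norm(model)

data, targets = torch.randn(256, 32), torch.randint(0, 10, (256,))
grid = [(lr, wd) for lr in (1e-3, 1e-2, 1e-1) for wd in (1e-5, 1e-4, 1e-3)]
norms = {pair: probe(*pair, data, targets) for pair in grid}
```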
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images into the latent space of a high-quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
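InvGAN itself learns an inversion network jointly with the generator; the simplest way to see what "embedding a real image into the latent space" means is the generic optimization-based recipe below, which is not the paper's method:

```python
# Generic GAN inversion by optimization: find z whose generation matches a target.
import torch

def invert(generator, target, z_dim=128, steps=300, lr=0.05):
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(generator(z), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()

# Once images live in latent space, inpainting, merging, and augmentation
# reduce to editing or mixing codes and decoding them with the generator.
```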
- Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly [114.81028176850404]
Training generative adversarial networks (GANs) with limited data generally results in deteriorated performance and collapsed models.
We decompose the data-hungry GAN training into two sequential sub-problems.
Such a coordinated framework enables us to focus on lower-complexity and more data-efficient sub-problems.
arXiv Detail & Related papers (2021-02-28T05:20:29Z)
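The first of the two sub-problems, drawing a "lottery ticket", amounts to finding a sparse trainable subnetwork before the expensive GAN training. A one-shot magnitude-pruning sketch of that step (illustrative; the paper's procedure is iterative and more involved):

```python
# One-shot magnitude pruning: keep the largest weights, zero out the rest.
import torch

def magnitude_masks(model, sparsity=0.8):
    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                                # prune weight matrices only
            k = max(1, int(sparsity * p.numel()))
            thresh = p.abs().flatten().kthvalue(k).values
            masks[name] = (p.abs() > thresh).float()
    return masks

def apply_masks(model, masks):
    # Zero pruned weights; reapply after each optimizer step during retraining.
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])
```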
- Unlabeled Data Guided Semi-supervised Histopathology Image Segmentation [34.45302976822067]
Semi-supervised learning (SSL) based on generative methods has been proven to be effective in utilizing diverse image characteristics.
We propose a new data-guided generative method for histopathology image segmentation by leveraging the unlabeled data distributions.
Our method is evaluated on glands and nuclei datasets.
arXiv Detail & Related papers (2020-12-17T02:54:19Z)
- Data Instance Prior for Transfer Learning in GANs [25.062518859107946]
We propose a novel transfer learning method for GANs in the limited data domain.
We show that the proposed method effectively transfers knowledge to domains with few target images.
We also show the utility of data instance prior in large-scale unconditional image generation and image editing tasks.
arXiv Detail & Related papers (2020-12-08T07:40:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.