Synthesizing Informative Training Samples with GAN
- URL: http://arxiv.org/abs/2204.07513v1
- Date: Fri, 15 Apr 2022 15:16:01 GMT
- Title: Synthesizing Informative Training Samples with GAN
- Authors: Bo Zhao, Hakan Bilen
- Abstract summary: We propose a novel method to synthesize Informative Training samples with GAN (IT-GAN).
Specifically, we freeze a pre-trained GAN model and learn informative latent vectors that correspond to informative training samples.
Experiments verify that deep neural networks learn faster and achieve better performance when trained with IT-GAN-generated images.
- Score: 31.225934266572192
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Remarkable progress has been achieved in synthesizing photo-realistic
images with generative adversarial networks (GANs). Recently, GANs have been
utilized as training sample generators when obtaining or storing real training
data is expensive or even infeasible. However, images generated by traditional
GANs are not as informative as real training samples when used to train deep
neural networks. In this paper, we propose a novel method to synthesize
Informative Training samples with GAN (IT-GAN). Specifically, we freeze a
pre-trained GAN model and learn informative latent vectors that correspond
to informative training samples. The synthesized images are required to
preserve information for training deep neural networks rather than visual
realism or fidelity. Experiments verify that deep neural networks learn
faster and achieve better performance when trained with IT-GAN-generated
images. We also show that our method is a promising solution to the dataset
condensation problem.
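The core idea above — freezing a pre-trained generator and optimizing only the latent vectors against a training-informativeness objective — can be sketched in a few lines. The toy below uses a fixed random linear-plus-tanh map as the frozen "generator" and a simple feature-matching loss as a stand-in for the paper's condensation-style objective; all names, shapes, and the loss itself are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "generator": a fixed random linear map plus tanh, standing in
# for a pre-trained GAN generator whose weights are never updated.
W = rng.normal(size=(8, 4))  # maps a 4-d latent to an 8-d "image"

def generator(z):
    return np.tanh(W @ z)

# Proxy training-informativeness objective: match a target feature
# derived from a small "real" batch (a stand-in for the matching
# losses used in dataset condensation).
real_batch = rng.normal(size=(8, 16))
target = real_batch.mean(axis=1)

def loss(z):
    return float(np.sum((generator(z) - target) ** 2))

# Learn the latent vector by gradient descent; only z is optimized,
# the generator stays frozen throughout.
z = rng.normal(size=4)
init_loss = loss(z)
lr = 0.01
for _ in range(500):
    h = np.tanh(W @ z)
    z -= lr * (W.T @ (2 * (h - target) * (1 - h ** 2)))

print(init_loss, loss(z))  # the loss drops as z becomes "informative"
```

In the real method the generator is a full pre-trained GAN and the objective measures usefulness for training a downstream network, but the optimization structure — gradient descent on latents through a frozen generator — is the same.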
Related papers
- Image Captions are Natural Prompts for Text-to-Image Models [70.30915140413383]
We analyze the relationship between the training effect of synthetic data and the synthetic data distribution induced by prompts.
We propose a simple yet effective method that prompts text-to-image generative models to synthesize more informative and diverse training data.
Our method significantly improves the performance of models trained on synthetic training data.
arXiv Detail & Related papers (2023-07-17T14:38:11Z) - Improving GAN Training via Feature Space Shrinkage [69.98365478398593]
We propose AdaptiveMix, which shrinks regions of training data in the image representation space of the discriminator.
Since it is intractable to directly bound the feature space, we instead construct hard samples and narrow the feature distance between hard and easy samples.
The evaluation results demonstrate that our AdaptiveMix can facilitate the training of GANs and effectively improve the image quality of generated samples.
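A minimal sketch of the idea in the entry above, under toy assumptions: a frozen random projection stands in for the discriminator's feature extractor, and a mixup-style convex combination of two easy samples plays the role of the constructed hard sample whose feature distance to the easy samples is to be shrunk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "discriminator feature extractor": a frozen random projection.
P = rng.normal(size=(3, 6))

def features(x):
    return P @ x

# Two "easy" training samples (e.g. two real images, flattened).
x1, x2 = rng.normal(size=6), rng.normal(size=6)

# Hard sample: a convex mix of the two easy samples.
lam = rng.uniform(0.3, 0.7)
x_hard = lam * x1 + (1 - lam) * x2

# AdaptiveMix-style regularizer: shrink the feature-space distance
# between the hard sample and the easy samples it was mixed from.
def shrink_loss(xh, xa, xb):
    fh, fa, fb = features(xh), features(xa), features(xb)
    return float(np.sum((fh - fa) ** 2) + np.sum((fh - fb) ** 2))

print(shrink_loss(x_hard, x1, x2))
```

In the actual method this loss would be added to the discriminator's training objective so that its feature space contracts around the training data; here it is only evaluated once to show the quantity being minimized.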
arXiv Detail & Related papers (2023-03-02T20:22:24Z) - Zero-Shot Learning of a Conditional Generative Adversarial Network for Data-Free Network Quantization [44.22469647001933]
We propose a novel method for training a conditional generative adversarial network (CGAN) without the use of training data.
Zero-shot learning of a conditional generator only needs a pre-trained discriminative (classification) model and does not need any training data.
We show the usefulness of ZS-CGAN in data-free quantization of deep neural networks.
arXiv Detail & Related papers (2022-10-26T00:05:57Z) - A Survey on Leveraging Pre-trained Generative Adversarial Networks for Image Editing and Restoration [72.17890189820665]
Generative adversarial networks (GANs) have drawn enormous attention due to the simple yet effective training mechanism and superior image generation quality.
Recent GAN models have greatly narrowed the gaps between the generated images and the real ones.
Many recent works exploit pre-trained GAN models by leveraging the well-disentangled latent space and the learned GAN priors.
arXiv Detail & Related papers (2022-07-21T05:05:58Z) - Reconstructing Training Data from Trained Neural Networks [42.60217236418818]
We show in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier.
We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods.
arXiv Detail & Related papers (2022-06-15T18:35:16Z) - BIM Hyperreality: Data Synthesis Using BIM and Hyperrealistic Rendering for Deep Learning [3.4461633417989184]
We present a concept of a hybrid system for training a neural network for building object recognition in photos.
For the case study presented in this paper, our results show that a neural network trained purely on synthetic data can identify building objects in photos, without any photos in the training data.
arXiv Detail & Related papers (2021-05-10T04:08:24Z) - Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly [114.81028176850404]
Training generative adversarial networks (GANs) with limited data generally results in deteriorated performance and collapsed models.
We decompose the data-hungry GAN training into two sequential sub-problems.
Such a coordinated framework enables us to focus on lower-complexity and more data-efficient sub-problems.
arXiv Detail & Related papers (2021-02-28T05:20:29Z) - Generative Zero-shot Network Quantization [41.75769117366117]
Convolutional neural networks are able to learn realistic image priors from numerous training samples in low-level image generation and restoration.
We show that, for high-level image recognition tasks, we can further reconstruct "realistic" images of each category by leveraging intrinsic Batch Normalization (BN) statistics without any training data.
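The BatchNorm-statistics trick described in the entry above can be illustrated with a toy moment-matching loop: synthetic inputs are optimized so their per-channel mean and variance match a BN layer's stored running statistics, with no real data involved. The statistics, shapes, and learning rate below are made-up illustrations, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Running statistics a BatchNorm layer would have stored during training.
bn_mean = np.array([0.5, -1.0, 2.0])
bn_var = np.array([1.0, 0.25, 4.0])

# Synthesize a batch of inputs whose channel statistics match the BN
# stats, by gradient descent on a moment-matching loss (data-free).
x = rng.normal(size=(32, 3))  # 32 samples, 3 channels
n = x.shape[0]

def bn_loss(x):
    mu, var = x.mean(axis=0), x.var(axis=0)
    return float(np.sum((mu - bn_mean) ** 2) + np.sum((var - bn_var) ** 2))

init_loss = bn_loss(x)
lr = 0.1
for _ in range(300):
    mu, var = x.mean(axis=0), x.var(axis=0)
    # analytic gradient of the moment-matching loss w.r.t. each sample
    g = 2 * (mu - bn_mean) / n + 2 * (var - bn_var) * 2 * (x - mu) / n
    x -= lr * g

print(init_loss, bn_loss(x))
```

In the full method the same matching loss would be applied at every BN layer of a pre-trained classifier, with an image prior added, so the reconstructed inputs look "realistic" for their class.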
arXiv Detail & Related papers (2021-01-21T04:10:04Z) - Syn2Real Transfer Learning for Image Deraining using Gaussian Processes [92.15895515035795]
CNN-based methods for image deraining have achieved excellent performance in terms of reconstruction error as well as visual quality.
Due to challenges in obtaining real world fully-labeled image deraining datasets, existing methods are trained only on synthetically generated data.
We propose a Gaussian Process-based semi-supervised learning framework that enables the network to learn deraining from a synthetic dataset.
arXiv Detail & Related papers (2020-06-10T00:33:18Z) - Light-in-the-loop: using a photonics co-processor for scalable training of neural networks [21.153688679957337]
We present the first optical co-processor able to accelerate the training phase of digitally-implemented neural networks.
We demonstrate its use to train a neural network for handwritten digit recognition.
arXiv Detail & Related papers (2020-06-02T09:19:45Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum based scheme that smoothes the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
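The smoothing curriculum described in the entry above can be sketched on a toy 1-D feature map: a Gaussian low-pass filter whose bandwidth is annealed over "training", so progressively more high-frequency content passes through. The kernel size, schedule, and feature shape here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def gaussian_kernel(sigma, radius=3):
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    return k / k.sum()

def smooth(feat, sigma):
    # Low-pass filter a 1-D feature map; larger sigma suppresses more
    # high-frequency information.
    return np.convolve(feat, gaussian_kernel(sigma), mode="same")

rng = np.random.default_rng(3)
feat = rng.normal(size=64)  # toy feature map

# Curriculum: start with heavy smoothing, anneal toward (almost) none,
# so the network sees progressively more high-frequency detail.
sigmas = np.linspace(2.0, 0.1, 5)
smoothed = [smooth(feat, s) for s in sigmas]
print([round(float(np.var(f)), 3) for f in smoothed])
```

The variance of the filtered feature map grows as sigma shrinks, which is exactly the "amount of information in the feature maps increases during training" behavior the summary describes.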
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.