Generative Adversarial Nets: Can we generate a new dataset based on only one training set?
- URL: http://arxiv.org/abs/2210.06005v1
- Date: Wed, 12 Oct 2022 08:22:12 GMT
- Title: Generative Adversarial Nets: Can we generate a new dataset based on only one training set?
- Authors: Lan V. Truong
- Abstract summary: A generative adversarial network (GAN) is a class of machine learning frameworks designed by Goodfellow et al.
A GAN generates new samples from the same distribution as the training set.
In this work, we aim to generate a new dataset that has a different distribution from the training set.
- Score: 16.3460693863947
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A generative adversarial network (GAN) is a class of machine learning
frameworks designed by Goodfellow et al. in 2014. In the GAN framework, the
generative model is pitted against an adversary: a discriminative model that
learns to determine whether a sample is from the model distribution or the data
distribution. A GAN generates new samples from the same distribution as the
training set. In this work, we aim to generate a new dataset that has a
different distribution from the training set. In addition, the Jensen-Shannon
divergence between the distributions of the generative and training datasets
can be controlled by some target $\delta \in [0, 1]$. Our work is motivated by
applications in generating new kinds of rice with characteristics similar to
those of good rice.
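The abstract's control target is the Jensen-Shannon divergence, which is bounded in $[0, 1]$ when computed with base-2 logarithms; that bound is what makes a target $\delta \in [0, 1]$ well-defined. A minimal NumPy sketch for discrete distributions (the function name and setup are illustrative, not from the paper):

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions.

    With base-2 logarithms the value lies in [0, 1]: 0 for identical
    distributions, 1 for distributions with disjoint support.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)  # the mixture midpoint
    kl = lambda a, b: np.sum(a * np.log2(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

identical = js_divergence([0.5, 0.5], [0.5, 0.5])  # ~0.0
disjoint = js_divergence([1.0, 0.0], [0.0, 1.0])   # ~1.0
```

A training objective could then penalize the gap between this divergence and the target $\delta$, which is one way to read the paper's control goal.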
Related papers
- Universality in Transfer Learning for Linear Models [18.427215139020625]
We study the problem of transfer learning in linear models for both regression and binary classification.
We provide an exact and rigorous analysis and relate generalization errors (in regression) and classification errors (in binary classification) for the pretrained and fine-tuned models.
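The pretrain-then-fine-tune pipeline for linear models can be sketched with ridge regression, shrinking the fine-tuned weights toward the pretrained ones instead of toward zero. This is an illustrative toy under made-up data shapes and regularization values, not the paper's exact analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source task has plenty of data; the target task is a nearby weight
# vector observed through only a few samples.
w_source = np.array([1.0, -2.0, 0.5])
w_target = w_source + np.array([0.1, 0.0, -0.1])

X_src = rng.normal(size=(500, 3))
y_src = X_src @ w_source + 0.1 * rng.normal(size=500)
X_tgt = rng.normal(size=(10, 3))
y_tgt = X_tgt @ w_target + 0.1 * rng.normal(size=10)

def ridge(X, y, lam, w0=None):
    """Ridge regression; if w0 is given, shrink toward w0 instead of 0."""
    d = X.shape[1]
    w0 = np.zeros(d) if w0 is None else w0
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * w0)

w_pre = ridge(X_src, y_src, lam=1e-3)            # pretrained on source
w_ft = ridge(X_tgt, y_tgt, lam=5.0, w0=w_pre)    # fine-tuned toward pretrain
w_scratch = ridge(X_tgt, y_tgt, lam=5.0)         # target-only baseline
```

When the source and target weights are close, fine-tuning from the pretrained solution recovers the target far better than fitting the scarce target data alone, which is the regime such transfer-learning analyses characterize.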
arXiv Detail & Related papers (2024-10-03T03:09:09Z)
- Constrained Diffusion Models via Dual Training [80.03953599062365]
Diffusion processes are prone to generating samples that reflect biases in a training dataset.
We develop constrained diffusion models by imposing diffusion constraints based on desired distributions.
We show that our constrained diffusion models generate new data from a mixture data distribution that achieves the optimal trade-off among objective and constraints.
arXiv Detail & Related papers (2024-08-27T14:25:42Z)
- Improving Out-of-Distribution Robustness of Classifiers via Generative Interpolation [56.620403243640396]
Deep neural networks achieve superior performance for learning from independent and identically distributed (i.i.d.) data.
However, their performance deteriorates significantly when handling out-of-distribution (OoD) data.
We develop a simple yet effective method called Generative Interpolation to fuse generative models trained from multiple domains for synthesizing diverse OoD samples.
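One toy reading of sample-level generative interpolation, with two Gaussian "generators" standing in for models trained on two domains; the paper's actual fusion method may differ, so treat this purely as an illustration of how interpolation yields samples neither training domain covers alone:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for generators trained on two separate domains.
def gen_domain_a(n):
    return rng.normal(loc=-2.0, scale=0.5, size=(n, 2))

def gen_domain_b(n):
    return rng.normal(loc=+2.0, scale=0.5, size=(n, 2))

def generative_interpolation(n, alpha):
    """Convexly combine samples from the two generators.

    alpha in [0, 1] slides the synthesized samples between the two
    training domains, producing out-of-distribution points in between.
    """
    return (1 - alpha) * gen_domain_a(n) + alpha * gen_domain_b(n)

# At alpha = 0.5 the synthesized cloud sits midway between the domains.
ood_samples = generative_interpolation(1000, alpha=0.5)
```

Sweeping alpha produces a spectrum of synthetic domains, which is the intuition behind augmenting a classifier with diverse OoD samples.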
arXiv Detail & Related papers (2023-07-23T03:53:53Z)
- Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models [77.83923746319498]
We propose a framework called Diff-Instruct to instruct the training of arbitrary generative models.
We show that Diff-Instruct results in state-of-the-art single-step diffusion-based models.
Experiments on refining GAN models show that Diff-Instruct consistently improves the pre-trained generators of GAN models.
arXiv Detail & Related papers (2023-05-29T04:22:57Z)
- Improving novelty detection with generative adversarial networks on hand gesture data [1.3750624267664153]
We propose a novel approach to classifying out-of-vocabulary gestures using Artificial Neural Networks (ANNs) trained in the Generative Adversarial Network (GAN) framework.
A generative model augments the data set in an online fashion with new samples and target vectors, while a discriminative model determines the class of the samples.
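In this setting the discriminator's output can serve directly as a novelty score: samples assigned low in-distribution probability are flagged as out-of-vocabulary. A self-contained toy with a logistic discriminator on squared features, where the gesture data, feature map, and training setup are all illustrative assumptions rather than the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the GAN setting: in-vocabulary gestures cluster near
# the origin, while a generator supplies scattered negative samples.
in_vocab = rng.normal(0.0, 1.0, size=(200, 2))
generated = rng.uniform(-6.0, 6.0, size=(200, 2))

def features(x):
    # Squared coordinates make the central cluster linearly separable.
    return np.asarray(x, dtype=float) ** 2

X = features(np.vstack([in_vocab, generated]))
y = np.concatenate([np.ones(200), np.zeros(200)])

# Tiny logistic "discriminator" trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.01 * (X.T @ g) / len(y)
    b -= 0.01 * g.mean()

def novelty_score(x):
    """Discriminator confidence that x is in-vocabulary; low means novel."""
    return 1.0 / (1.0 + np.exp(-(features(x) @ w + b)))
```

A gesture far from the training cluster receives a much lower score than one near the origin, so thresholding the score rejects out-of-vocabulary inputs.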
arXiv Detail & Related papers (2023-04-13T17:50:15Z)
- Redes Generativas Adversarias (GAN) Fundamentos Teóricos y Aplicaciones (Generative Adversarial Networks (GAN): Theoretical Foundations and Applications) [0.40611352512781856]
Generative adversarial networks (GANs) are a method based on the training of two neural networks, one called generator and the other discriminator.
GANs have a wide range of applications in fields such as computer vision, semantic segmentation, time series synthesis, image editing, natural language processing, and image generation from text.
arXiv Detail & Related papers (2023-02-18T14:39:51Z)
- Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data only by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z)
- Generating unrepresented proportions of geological facies using Generative Adversarial Networks [0.0]
We investigate the capacity of Generative Adversarial Networks (GANs) in interpolating and extrapolating facies proportions in a geological dataset.
Specifically, we design a conditional GANs model that can drive the generated facies toward new proportions not found in the training set.
The presented numerical experiments on images of binary and multiple facies showed good geological consistency as well as strong correlation with the target conditions.
arXiv Detail & Related papers (2022-03-17T22:38:45Z)
- Unrolling Particles: Unsupervised Learning of Sampling Distributions [102.72972137287728]
Particle filtering is used to compute good nonlinear estimates of complex systems.
We show in simulations that the resulting particle filter yields good estimates in a wide range of scenarios.
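A bootstrap particle filter on a one-dimensional random-walk model illustrates the propagate-weight-resample loop that such learned sampling distributions plug into; the model and noise parameters here are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D random walk with noisy observations: x_t = x_{t-1} + v, y_t = x_t + w.
T, N = 50, 1000    # time steps, particles
q, r = 0.1, 0.5    # process / observation noise standard deviations

# Simulate a trajectory and its observations.
x_true = np.cumsum(q * rng.normal(size=T))
y_obs = x_true + r * rng.normal(size=T)

particles = np.zeros(N)
estimates = np.zeros(T)
for t in range(T):
    particles = particles + q * rng.normal(size=N)       # propagate
    logw = -0.5 * ((y_obs[t] - particles) / r) ** 2      # Gaussian likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates[t] = w @ particles                         # posterior mean
    idx = rng.choice(N, size=N, p=w)                     # multinomial resample
    particles = particles[idx]

rmse = np.sqrt(np.mean((estimates - x_true) ** 2))
```

Because the filter fuses the slow-moving state model with each observation, its error falls well below the raw observation noise; learning the proposal distribution, as the paper does, aims to improve on this bootstrap choice.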
arXiv Detail & Related papers (2021-10-06T16:58:34Z)
- Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions via achieving better performance than existing approaches over multiple datasets.
arXiv Detail & Related papers (2021-06-17T11:41:30Z)
- Imbalanced Data Learning by Minority Class Augmentation using Capsule Adversarial Networks [31.073558420480964]
We propose a method to restore balance in imbalanced image data by coalescing two concurrent methods.
In our model, generative and discriminative networks play a novel competitive game.
The coalescing of capsule-GAN is effective at recognizing highly overlapping classes with much fewer parameters compared with the convolutional-GAN.
arXiv Detail & Related papers (2020-04-05T12:36:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.