Smoothing the Generative Latent Space with Mixup-based Distance Learning
- URL: http://arxiv.org/abs/2111.11672v1
- Date: Tue, 23 Nov 2021 06:39:50 GMT
- Title: Smoothing the Generative Latent Space with Mixup-based Distance Learning
- Authors: Chaerin Kong, Jeesoo Kim, Donghoon Han and Nojun Kwak
- Abstract summary: We consider the situation where neither a large-scale dataset of interest nor a transferable source dataset is available.
We propose a latent mixup-based distance regularization on the feature spaces of both the generator and the counterpart discriminator.
- Score: 32.838539968751924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Producing diverse and realistic images with generative models such as GANs
typically requires large-scale training on a vast number of images. GANs
trained with extremely limited data easily overfit to the few training
samples and display undesirable properties such as a "stairlike" latent
space, where transitions in latent space suffer from discontinuity and
occasionally yield abrupt changes in outputs. In this work, we consider the
situation where neither a large-scale dataset of interest nor a transferable
source dataset is available, and seek to train existing generative models
with minimal overfitting and mode collapse. We propose a latent mixup-based
distance regularization on the feature spaces of both the generator and the
counterpart discriminator that encourages the two players to reason not only
about the scarce observed data points but also about the relative distances
in the feature space in which they reside. Qualitative and quantitative
evaluation on diverse datasets demonstrates that our method is generally
applicable to existing models and enhances both fidelity and diversity under
the constraint of limited data. Code will be made public.
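As a concrete illustration of the idea in the abstract, below is a minimal PyTorch-style sketch of one plausible way to implement a latent mixup-based distance regularizer. The interpolation scheme, the softmax-over-distances matching loss, and the names `feat_fn`, `g_feat`, `d_feat`, and `lambda_reg` are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch (not the authors' code): mix two latent codes and
# encourage feature-space distances from the mixed sample to the two
# endpoints to reflect the mixing coefficient, so latent interpolations
# move smoothly in feature space.
import torch
import torch.nn.functional as F

def mixup_distance_loss(feat_fn, z0, z1):
    """feat_fn maps a batch of latent codes (B, dim) to a batch of features,
    e.g. an intermediate layer of the generator or the discriminator."""
    c = torch.rand(z0.size(0), 1, device=z0.device)   # mixup coefficients in [0, 1)
    z_mix = c * z0 + (1.0 - c) * z1                    # interpolated latent codes
    f0, f1, f_mix = feat_fn(z0), feat_fn(z1), feat_fn(z_mix)
    d0 = (f_mix - f0).flatten(1).norm(dim=1)           # feature distance to endpoint 0
    d1 = (f_mix - f1).flatten(1).norm(dim=1)           # feature distance to endpoint 1
    # A softmax over negative distances should match the mixing ratio (c, 1 - c):
    # samples mixed closer to z0 should also lie closer to f0 in feature space.
    pred = F.softmax(-torch.stack([d0, d1], dim=1), dim=1)
    target = torch.cat([c, 1.0 - c], dim=1)
    return F.mse_loss(pred, target)

# Illustrative use inside a GAN training step: the regularizer is computed on
# both generator and discriminator features and added to the adversarial
# losses; g_feat / d_feat are assumed feature hooks and lambda_reg a weight.
# loss_G = adv_loss_G + lambda_reg * mixup_distance_loss(g_feat, z0, z1)
# loss_D = adv_loss_D + lambda_reg * mixup_distance_loss(lambda z: d_feat(G(z)), z0, z1)
```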
Related papers
- Privacy-preserving datasets by capturing feature distributions with Conditional VAEs [0.11999555634662634]
The method trains Conditional Variational Autoencoders (CVAEs) on feature vectors extracted from large pre-trained vision foundation models.
Our method notably outperforms traditional approaches in both medical and natural image domains.
Results underscore the potential of generative models to significantly impact deep learning applications in data-scarce and privacy-sensitive environments.
arXiv Detail & Related papers (2024-08-01T15:26:24Z)
- Variational latent discrete representation for time series modelling [0.0]
We introduce a latent data model where the discrete state is a Markov chain, which allows fast end-to-end training.
The performance of our generative model is assessed on a building management dataset and on the publicly available Electricity Transformer dataset.
arXiv Detail & Related papers (2023-06-27T08:15:05Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Progressive Multi-view Human Mesh Recovery with Self-Supervision [68.60019434498703]
Existing solutions typically suffer from poor generalization performance to new settings.
We propose a novel simulation-based training pipeline for multi-view human mesh recovery.
arXiv Detail & Related papers (2022-12-10T06:28:29Z)
- Latent Space is Feature Space: Regularization Term for GANs Training on Limited Dataset [1.8634083978855898]
The paper proposes an additional structure and loss function for GANs, called LFM, trained to maximize feature diversity across the different dimensions of the latent space.
In experiments, the system is built on DCGAN and shown to improve the Fréchet Inception Distance (FID) when training from scratch on the CelebA dataset.
arXiv Detail & Related papers (2022-10-28T16:34:48Z)
- Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
arXiv Detail & Related papers (2022-10-01T09:04:17Z)
- Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment [130.84010267004803]
Training a generative adversarial network (GAN) with limited data has been a challenging task.
A feasible solution is to start with a GAN well-trained on a large-scale source domain and adapt it to the target domain with a few samples, termed few-shot generative model adaption.
We propose a relaxed spatial structural alignment method to calibrate the target generative models during the adaption.
arXiv Detail & Related papers (2022-03-06T14:26:25Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- Flow Based Models For Manifold Data [11.344428134774475]
Flow-based generative models typically define a latent space with dimensionality identical to the observational space.
In many problems, the data does not populate the full ambient data space in which it resides, but rather a lower-dimensional manifold.
We propose to learn a manifold prior that affords benefits to both sample generation and representation quality.
arXiv Detail & Related papers (2021-09-29T06:48:01Z)
- Evidential Sparsification of Multimodal Latent Spaces in Conditional Variational Autoencoders [63.46738617561255]
We consider the problem of sparsifying the discrete latent space of a trained conditional variational autoencoder.
We use evidential theory to identify the latent classes that receive direct evidence from a particular input condition and filter out those that do not.
Experiments on diverse tasks, such as image generation and human behavior prediction, demonstrate the effectiveness of our proposed technique.
arXiv Detail & Related papers (2020-10-19T01:27:21Z)