Interpreting Spatially Infinite Generative Models
- URL: http://arxiv.org/abs/2007.12411v1
- Date: Fri, 24 Jul 2020 09:00:41 GMT
- Title: Interpreting Spatially Infinite Generative Models
- Authors: Chaochao Lu, Richard E. Turner, Yingzhen Li, Nate Kushman
- Abstract summary: Recent work has shown that feeding spatial noise vectors into a fully convolutional neural network enables both generation of arbitrary-resolution output images and training on arbitrary-resolution images.
We provide a firm theoretical interpretation for infinite spatial generation by drawing connections to spatial stochastic processes.
Experiments on world map generation, panoramic images, and texture synthesis verify the ability of $\infty$-GAN to efficiently generate images of arbitrary size.
- Score: 40.453301580034804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional deep generative models of images and other spatial modalities can
only generate fixed-size outputs. The generated images have exactly the same
resolution as the training images, which is dictated by the number of layers in
the underlying neural network. Recent work has shown, however, that feeding
spatial noise vectors into a fully convolutional neural network enables both
generation of arbitrary-resolution output images and training on
arbitrary-resolution images. While this work has provided impressive
empirical results, little theoretical interpretation was provided to explain
the underlying generative process. In this paper we provide a firm theoretical
interpretation for infinite spatial generation, by drawing connections to
spatial stochastic processes. We use the resulting intuition to improve upon
existing spatially infinite generative models to enable more efficient training
through a model that we call an infinite generative adversarial network, or
$\infty$-GAN. Experiments on world map generation, panoramic images and texture
synthesis verify the ability of $\infty$-GAN to efficiently generate images of
arbitrary size.
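To make the generative process described above concrete, the following is a minimal sketch, assuming a PyTorch-style fully convolutional generator; the class name, layer sizes, and channel counts are illustrative assumptions, not the paper's architecture. Because every layer is convolutional, the same weights map an HxW spatial noise grid to an image of matching extent, so output resolution is set by the sampled noise field rather than the network depth. Viewing the noise grid as a spatial stochastic process, the network acts as a local, translation-equivariant transformation of that process.

```python
# Minimal sketch (assumed PyTorch-style code, not the authors' implementation):
# a fully convolutional generator driven by a spatial noise field. Layer
# names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class FullyConvGenerator(nn.Module):
    def __init__(self, z_channels=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(z_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, z):
        # z: (batch, z_channels, H, W), an i.i.d. spatial noise field.
        # Each conv layer is a local, translation-equivariant map, so the
        # output can be read as a stationary spatial process sampled on an
        # HxW grid; nothing in the weights fixes H or W.
        return self.net(z)

gen = FullyConvGenerator()
small = gen(torch.randn(1, 64, 32, 32))     # 32x32 output
large = gen(torch.randn(1, 64, 256, 1024))  # panorama-sized 256x1024 output
print(small.shape, large.shape)
```

In this view, training on fixed-size crops while sampling a larger noise field at test time is how such models decouple the training resolution from the generation resolution.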
Related papers
- Traditional Classification Neural Networks are Good Generators: They are Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the gradients semantics-aware, in order to synthesize plausible images.
We show that our method also applies to text-to-image generation when combined with image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z)
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images into the latent space of a high-quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
- NeurInt: Learning to Interpolate through Neural ODEs [18.104328632453676]
We propose a novel generative model that learns a distribution of trajectories between two images.
We demonstrate our approach's effectiveness in generating images of improved quality, as well as its ability to learn a diverse distribution over smooth trajectories for any pair of real source and target images.
arXiv Detail & Related papers (2021-11-07T16:31:18Z)
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
- InfinityGAN: Towards Infinite-Resolution Image Synthesis [92.40782797030977]
We present InfinityGAN, a method to generate arbitrary-resolution images.
We show how it trains and infers patch-by-patch seamlessly with low computational resources.
arXiv Detail & Related papers (2021-04-08T17:59:30Z)
- Principled network extraction from images [0.0]
We present a principled, scalable, and efficient model for extracting network topologies from images.
We test our model on real images of the retinal vascular system, slime mold and river networks.
arXiv Detail & Related papers (2020-12-23T15:56:09Z)
- Unsupervised Discovery of Disentangled Manifolds in GANs [74.24771216154105]
An interpretable generation process is beneficial to various image editing applications.
We propose a framework to discover interpretable directions in the latent space given arbitrary pre-trained generative adversarial networks.
arXiv Detail & Related papers (2020-11-24T02:18:08Z)
- Improving Inversion and Generation Diversity in StyleGAN using a Gaussianized Latent Space [41.20193123974535]
Modern Generative Adversarial Networks are capable of creating artificial, photorealistic images from latent vectors living in a low-dimensional learned latent space.
We show that, under a simple nonlinear operation, the data distribution can be modeled as Gaussian and therefore expressed using sufficient statistics.
The resulting projections lie in smoother and better-behaved regions of the latent space, as shown by performance on both real and generated images.
arXiv Detail & Related papers (2020-09-14T15:45:58Z)
- Network Bending: Expressive Manipulation of Deep Generative Models [0.2062593640149624]
We introduce a new framework for manipulating and interacting with deep generative models that we call network bending.
We show how it allows direct manipulation of semantically meaningful aspects of the generative process and enables a broad range of expressive outcomes.
arXiv Detail & Related papers (2020-05-25T21:48:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.