Set Distribution Networks: a Generative Model for Sets of Images
- URL: http://arxiv.org/abs/2006.10705v1
- Date: Thu, 18 Jun 2020 17:38:56 GMT
- Title: Set Distribution Networks: a Generative Model for Sets of Images
- Authors: Shuangfei Zhai, Walter Talbott, Miguel Angel Bautista, Carlos
Guestrin, Josh M. Susskind
- Abstract summary: We introduce Set Distribution Networks (SDNs), a framework that learns to autoencode and freely generate sets.
We show that SDNs are able to reconstruct image sets that preserve salient attributes of the inputs in our benchmark datasets.
We examine the sets generated by SDN with a pre-trained 3D reconstruction network and a face verification network, respectively, as a novel way to evaluate the quality of generated sets of images.
- Score: 22.405670277339023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Images with shared characteristics naturally form sets. For example, in a
face verification benchmark, images of the same identity form sets. For
generative models, the standard way of dealing with sets is to represent each
as a one-hot vector, and learn a conditional generative model
$p(\mathbf{x}|\mathbf{y})$. This representation assumes that the number of sets
is limited and known, such that the distribution over sets reduces to a simple
multinomial distribution. In contrast, we study a more generic problem where
the number of sets is large and unknown. We introduce Set Distribution Networks
(SDNs), a novel framework that learns to autoencode and freely generate sets.
We achieve this by jointly learning a set encoder, set discriminator, set
generator, and set prior. We show that SDNs are able to reconstruct image sets
that preserve salient attributes of the inputs in our benchmark datasets, and
are also able to generate novel objects/identities. We examine the sets
generated by SDN with a pre-trained 3D reconstruction network and a face
verification network, respectively, as a novel way to evaluate the quality of
generated sets of images.
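The contrast the abstract draws between one-hot conditioning and a learned set representation can be sketched as follows. This is an illustrative toy (mean pooling over a fixed random projection), not the SDN architecture; all function names here are invented for the example, and a real set encoder would be a trained neural network.

```python
import numpy as np

def one_hot_condition(set_index: int, num_sets: int) -> np.ndarray:
    """Standard conditional-model conditioning p(x|y): assumes the
    number of sets is fixed and known in advance."""
    y = np.zeros(num_sets)
    y[set_index] = 1.0
    return y

def set_encode(images: np.ndarray, proj: np.ndarray) -> np.ndarray:
    """Toy permutation-invariant set encoder: project each image,
    then mean-pool over the set dimension, so the summary does not
    depend on how many sets exist or how they are indexed."""
    feats = images @ proj          # (set_size, d_embed)
    return feats.mean(axis=0)      # order-independent summary

rng = np.random.default_rng(0)
images = rng.normal(size=(5, 16))  # a "set" of 5 flattened toy images
proj = rng.normal(size=(16, 8))    # fixed stand-in for a learned encoder

z = set_encode(images, proj)
z_shuffled = set_encode(images[::-1], proj)
assert np.allclose(z, z_shuffled)  # invariant to element order
```

The one-hot route fails exactly when the number of identities is large or unknown, which is the regime the paper targets; the pooled embedding has a fixed size regardless of how many sets exist.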
Related papers
- FaceCoresetNet: Differentiable Coresets for Face Set Recognition [16.879093388124964]
A discriminative descriptor balances two policies when aggregating information from a given set.
This work frames face-set representation as a differentiable coreset selection problem.
We set a new state of the art for set-based face verification on the IJB-B and IJB-C datasets.
arXiv Detail & Related papers (2023-08-27T11:38:42Z) - Using a Conditional Generative Adversarial Network to Control the
Statistical Characteristics of Generated Images for IACT Data Analysis [55.41644538483948]
We divide images into several classes according to the value of some property of the image, and then specify the required class when generating new images.
In the case of images from Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the total brightness of all image pixels (the image size).
We used a cGAN technique to generate images similar to those obtained in the TAIGA-IACT experiment.
arXiv Detail & Related papers (2022-11-28T22:30:33Z) - Traditional Classification Neural Networks are Good Generators: They are
Competitive with DDPMs and GANs [104.72108627191041]
We show that conventional neural network classifiers can generate high-quality images comparable to state-of-the-art generative models.
We propose a mask-based reconstruction module that makes the classifier's gradients semantics-aware, allowing plausible images to be synthesized.
We show that our method also extends to text-to-image generation by building on image-text foundation models.
arXiv Detail & Related papers (2022-11-27T11:25:35Z) - Generative Adversarial Nets: Can we generate a new dataset based on only
one training set? [16.3460693863947]
A generative adversarial network (GAN) is a class of machine learning frameworks designed by Goodfellow et al.
GAN generates new samples from the same distribution as the training set.
In this work, we aim to generate a new dataset that has a different distribution from the training set.
arXiv Detail & Related papers (2022-10-12T08:22:12Z) - Collaging Class-specific GANs for Semantic Image Synthesis [68.87294033259417]
We propose a new approach for high resolution semantic image synthesis.
It consists of one base image generator and multiple class-specific generators.
Experiments show that our approach can generate high quality images in high resolution.
arXiv Detail & Related papers (2021-10-08T17:46:56Z) - Top-N: Equivariant set and graph generation without exchangeability [61.24699600833916]
We consider one-shot probabilistic decoders that map a vector-shaped prior to a distribution over sets or graphs.
These functions can be integrated into variational autoencoders (VAE), generative adversarial networks (GAN) or normalizing flows.
Top-n is a deterministic, non-exchangeable set creation mechanism which learns to select the most relevant points from a trainable reference set.
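The Top-n selection step can be sketched in a few lines, assuming dot-product scoring between a latent vector and the reference points; the actual scoring function and training procedure belong to the paper, and everything named below is illustrative.

```python
import numpy as np

def top_n_select(reference: np.ndarray, latent: np.ndarray, n: int) -> np.ndarray:
    """Score each point in a (trainable) reference set against a
    latent vector and keep the n highest-scoring points. The rule is
    deterministic and non-exchangeable: which points survive depends
    on the latent, not on any imposed ordering of the output."""
    scores = reference @ latent            # (num_ref,)
    idx = np.argsort(scores)[::-1][:n]     # indices of the top-n scores
    return reference[idx]

rng = np.random.default_rng(1)
reference = rng.normal(size=(32, 4))   # reference set (fixed here, learned in practice)
latent = rng.normal(size=4)            # vector-shaped prior sample

subset = top_n_select(reference, latent, n=5)
assert subset.shape == (5, 4)
```

The point of the mechanism is that set size and content are decided by selection rather than by generating elements in sequence, which is why it plugs into one-shot decoders for VAEs, GANs, or flows.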
arXiv Detail & Related papers (2021-10-05T14:51:19Z) - Generate High Resolution Images With Generative Variational Autoencoder [0.0]
We present a novel neural network to generate high resolution images.
We replace the decoder of VAE with a discriminator while using the encoder as it is.
We evaluate our network on 3 different datasets: MNIST, LSUN and CelebA dataset.
arXiv Detail & Related papers (2020-08-12T20:15:34Z) - Conditional Set Generation with Transformers [15.315473956458227]
A set is an unordered collection of unique elements.
Many machine learning models generate sets that impose an implicit or explicit ordering.
An alternative solution is to use a permutation-equivariant set generator, which does not specify an ordering.
We introduce the Transformer Set Prediction Network (TSPN), a flexible permutation-equivariant model for set prediction.
arXiv Detail & Related papers (2020-06-26T17:52:27Z) - Locally Masked Convolution for Autoregressive Models [107.4635841204146]
LMConv is a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image.
We learn an ensemble of distribution estimators that share parameters but differ in generation order, achieving improved performance on whole-image density estimation.
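The per-location masking in LMConv can be sketched with a naive loop. This toy repeats a single raster-scan mask at every location, whereas the method's interest is precisely that the mask tensor may vary arbitrarily across locations (enabling different generation orders); the implementation below is illustrative, not the paper's efficient one.

```python
import numpy as np

def locally_masked_conv(image: np.ndarray, kernel: np.ndarray,
                        masks: np.ndarray) -> np.ndarray:
    """2D convolution where the kernel is multiplied by a different
    binary mask at every output location, so each pixel can condition
    on an arbitrary subset of its neighborhood."""
    H, W = image.shape
    k = kernel.shape[0]
    pad = k // 2
    padded = np.pad(image, pad)
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * kernel * masks[i, j])
    return out

rng = np.random.default_rng(2)
img = rng.normal(size=(4, 4))
kern = rng.normal(size=(3, 3))

# One raster-scan (autoregressive) mask, copied to every location:
# each pixel sees only pixels above it and to its left.
m = np.ones((3, 3))
m[1, 1:] = 0   # center pixel and pixels to its right
m[2:, :] = 0   # rows below
masks = np.broadcast_to(m, (4, 4, 3, 3))

out = locally_masked_conv(img, kern, masks)
assert out.shape == (4, 4)
assert np.isclose(out[0, 0], 0.0)  # top-left pixel has no visible context
```

Swapping in a different `masks` tensor changes the conditioning pattern without touching the weights, which is what lets a single parameter set serve an ensemble of generation orders.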
arXiv Detail & Related papers (2020-06-22T17:59:07Z) - Learn to Predict Sets Using Feed-Forward Neural Networks [63.91494644881925]
This paper addresses the task of set prediction using deep feed-forward neural networks.
We present a novel approach for learning to predict sets with unknown permutation and cardinality.
We demonstrate the validity of our set formulations on relevant vision problems.
arXiv Detail & Related papers (2020-01-30T01:52:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.