FewGAN: Generating from the Joint Distribution of a Few Images
- URL: http://arxiv.org/abs/2207.11226v1
- Date: Mon, 18 Jul 2022 07:11:28 GMT
- Title: FewGAN: Generating from the Joint Distribution of a Few Images
- Authors: Lior Ben-Moshe, Sagie Benaim, Lior Wolf
- Abstract summary: We introduce FewGAN, a generative model for generating novel, high-quality and diverse images.
FewGAN is a hierarchical patch-GAN that applies quantization at the first coarse scale, followed by a pyramid of residual fully convolutional GANs at finer scales.
In an extensive set of experiments, it is shown that FewGAN outperforms baselines both quantitatively and qualitatively.
- Score: 95.6635227371479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce FewGAN, a generative model for generating novel, high-quality
and diverse images whose patch distribution lies in the joint patch
distribution of a small number of N>1 training samples. The method is, in
essence, a hierarchical patch-GAN that applies quantization at the first coarse
scale, in a similar fashion to VQ-GAN, followed by a pyramid of residual fully
convolutional GANs at finer scales. Our key idea is to first use quantization
to learn a fixed set of patch embeddings for training images. We then use a
separate set of side images to model the structure of generated images using an
autoregressive model trained on the learned patch embeddings of training
images. Using quantization at the coarsest scale allows the model to generate
both conditional and unconditional novel images. Subsequently, a patch-GAN
renders the fine details, resulting in high-quality images. In an extensive set
of experiments, it is shown that FewGAN outperforms baselines both
quantitatively and qualitatively.
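The coarse-scale quantization step described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: the module name PatchQuantizer, the codebook size, and the feature dimensions are all hypothetical. It simply snaps coarse patch features to a fixed codebook of learned embeddings (VQ-GAN style), producing the discrete patch tokens that an autoregressive prior could then be trained on, as the abstract describes.

import torch
import torch.nn as nn


class PatchQuantizer(nn.Module):
    # Nearest-neighbour vector quantization of coarse-scale patch features,
    # in the spirit of the VQ-GAN-style first stage described in the abstract.
    def __init__(self, num_codes=512, code_dim=64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)  # fixed set of patch embeddings
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)

    def forward(self, features):
        # features: (B, C, H, W) coarse feature map from an encoder
        b, c, h, w = features.shape
        flat = features.permute(0, 2, 3, 1).reshape(-1, c)            # (B*H*W, C)
        # squared L2 distance from every patch feature to every codebook entry
        dist = (flat.pow(2).sum(1, keepdim=True)
                - 2.0 * flat @ self.codebook.weight.t()
                + self.codebook.weight.pow(2).sum(1))
        indices = dist.argmin(dim=1)                                   # discrete patch tokens
        quantized = self.codebook(indices).view(b, h, w, c).permute(0, 3, 1, 2)
        # straight-through estimator: copy gradients past the non-differentiable argmin
        quantized = features + (quantized - features).detach()
        return quantized, indices.view(b, h, w)


if __name__ == "__main__":
    vq = PatchQuantizer()
    coarse = torch.randn(2, 64, 8, 8)     # stand-in for an encoder's coarse-scale output
    quantized, tokens = vq(coarse)
    print(quantized.shape, tokens.shape)  # (2, 64, 8, 8) and (2, 8, 8)

The discrete token map returned here is what an autoregressive model over patch embeddings could consume; the codebook itself would be learned jointly with a quantization loss and the finer-scale patch-GANs, which are omitted in this sketch for brevity.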
Related papers
- Diversified in-domain synthesis with efficient fine-tuning for few-shot
classification [64.86872227580866]
Few-shot image classification aims to learn an image classifier using only a small set of labeled examples per class.
We propose DISEF, a novel approach which addresses the generalization challenge in few-shot learning using synthetic data.
We validate our method on ten different benchmarks, consistently outperforming baselines and establishing a new state of the art for few-shot classification.
arXiv Detail & Related papers (2023-12-05T17:18:09Z) - DEff-GAN: Diverse Attribute Transfer for Few-Shot Image Synthesis [0.38073142980733]
We extend the single-image GAN method to model multiple images for sample synthesis.
Our Data-Efficient GAN (DEff-GAN) generates excellent results when similarities and correspondences can be drawn between the input images or classes.
arXiv Detail & Related papers (2023-02-28T12:43:52Z) - Generative Modeling in Structural-Hankel Domain for Color Image
Inpainting [17.04134647990754]
This study aims to construct a low-rank structural-Hankel-matrices-assisted score-based generative model (SHGM) for the color image inpainting task.
Experimental results demonstrate the remarkable performance and diversity of SHGM.
arXiv Detail & Related papers (2022-11-25T01:56:17Z) - Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z) - A Hierarchical Transformation-Discriminating Generative Model for Few
Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z) - Set Based Stochastic Subsampling [85.5331107565578]
We propose a set-based two-stage end-to-end neural subsampling model that is jointly optimized with an arbitrary downstream task network.
We show that it outperforms the relevant baselines under low subsampling rates on a variety of tasks including image classification, image reconstruction, function reconstruction and few-shot classification.
arXiv Detail & Related papers (2020-06-25T07:36:47Z) - Locally Masked Convolution for Autoregressive Models [107.4635841204146]
LMConv is a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image.
We learn an ensemble of distribution estimators that share parameters but differ in generation order, achieving improved performance on whole-image density estimation (a minimal sketch of the per-location masking idea appears after this list).
arXiv Detail & Related papers (2020-06-22T17:59:07Z) - Training End-to-end Single Image Generators without GANs [27.393821783237186]
AugurOne is a novel approach for training single image generative models.
Our approach trains an upscaling neural network using non-affine augmentations of the (single) input image.
A compact latent space is jointly learned, allowing for controlled image synthesis.
arXiv Detail & Related papers (2020-04-07T17:58:03Z)
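The Locally Masked Convolution entry above describes applying a different mask to the convolution weights at each spatial location. The sketch below illustrates that idea under my own assumptions: the function name locally_masked_conv2d, the im2col/unfold-based implementation, and the toy shapes are illustrative and are not the paper's code.

import torch
import torch.nn.functional as F


def locally_masked_conv2d(x, weight, mask):
    # x:      (B, C_in, H, W) input image or feature map
    # weight: (C_out, C_in, k, k) shared convolution weights
    # mask:   (B, C_in*k*k, H*W) binary mask, chosen independently per output location
    b, c_in, h, w = x.shape
    c_out, _, k, _ = weight.shape
    patches = F.unfold(x, kernel_size=k, padding=k // 2)   # im2col: (B, C_in*k*k, H*W)
    patches = patches * mask                                # zero out masked taps per location
    out = weight.reshape(c_out, -1) @ patches               # (B, C_out, H*W) via broadcast matmul
    return out.reshape(b, c_out, h, w)


if __name__ == "__main__":
    x = torch.randn(1, 3, 16, 16)
    weight = torch.randn(8, 3, 3, 3)
    # trivial all-ones mask (ordinary convolution); in the paper's setting the mask
    # would differ per location to enforce a chosen generation order
    mask = torch.ones(1, 3 * 3 * 3, 16 * 16)
    out = locally_masked_conv2d(x, weight, mask)
    print(out.shape)  # torch.Size([1, 8, 16, 16])

Because the mask is applied to the unfolded patches rather than to a single shared kernel, each output location effectively sees its own masked receptive field, which is what allows arbitrary generation orders in an autoregressive model.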
This list is automatically generated from the titles and abstracts of the papers on this site.