Augmentation-Interpolative AutoEncoders for Unsupervised Few-Shot Image Generation
- URL: http://arxiv.org/abs/2011.13026v1
- Date: Wed, 25 Nov 2020 21:18:55 GMT
- Title: Augmentation-Interpolative AutoEncoders for Unsupervised Few-Shot Image Generation
- Authors: Davis Wertheimer, Omid Poursaeed and Bharath Hariharan
- Abstract summary: Augmentation-Interpolative AutoEncoders synthesize realistic images of novel objects from only a few reference images.
Our procedure is simple and lightweight, generalizes broadly, and requires no category labels or other supervision during training.
- Score: 45.380129419065746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We aim to build image generation models that generalize to new domains from
few examples. To this end, we first investigate the generalization properties
of classic image generators, and discover that autoencoders generalize
extremely well to new domains, even when trained on highly constrained data. We
leverage this insight to produce a robust, unsupervised few-shot image
generation algorithm, and introduce a novel training procedure based on
recovering an image from data augmentations. Our Augmentation-Interpolative
AutoEncoders synthesize realistic images of novel objects from only a few
reference images, and outperform both prior interpolative models and supervised
few-shot image generators. Our procedure is simple and lightweight, generalizes
broadly, and requires no category labels or other supervision during training.
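The core training idea from the abstract, recovering an image from interpolations of its augmented views, can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's actual architecture: the linear encoder/decoder, the toy dimensions, and the additive-jitter "augmentation" are all placeholder assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear autoencoder; D is the (flattened) image size, Z the latent size.
# Both sizes are hypothetical choices for this sketch.
D, Z = 64, 8
W_enc = rng.normal(scale=0.1, size=(Z, D))
W_dec = rng.normal(scale=0.1, size=(D, Z))

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def augment(x, seed):
    # Stand-in for a real image augmentation (crop, warp, color jitter, ...).
    r = np.random.default_rng(seed)
    return x + 0.05 * r.normal(size=x.shape)

def interpolative_loss(x, alpha=0.5):
    # Encode two augmented views of the same image, interpolate their latent
    # codes, and penalize the decoder for failing to recover the original.
    z1 = encode(augment(x, seed=1))
    z2 = encode(augment(x, seed=2))
    z = (1.0 - alpha) * z1 + alpha * z2
    return float(np.mean((decode(z) - x) ** 2))

x = rng.normal(size=D)          # one fake "image"
loss = interpolative_loss(x)    # scalar to be minimized during training
```

In a real setup the loss would be backpropagated through a convolutional encoder/decoder; the point of the sketch is only that supervision comes from the image itself and its augmentations, so no category labels are needed.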
Related papers
- Zero-Shot Detection of AI-Generated Images [54.01282123570917]
We propose a zero-shot entropy-based detector (ZED) to detect AI-generated images.
Inspired by recent works on machine-generated text detection, our idea is to measure how surprising the image under analysis is compared to a model of real images.
ZED achieves an average improvement of more than 3% over the SoTA in terms of accuracy.
arXiv Detail & Related papers (2024-09-24T08:46:13Z)
- Active Generation for Image Classification [45.93535669217115]
We propose to address the efficiency of image generation by focusing on the specific needs and characteristics of the model.
With a central tenet of active learning, our method, named ActGen, takes a training-aware approach to image generation.
arXiv Detail & Related papers (2024-03-11T08:45:31Z)
- Online Detection of AI-Generated Images [17.30253784649635]
We study generalization in this setting, training on N models and testing on the next (N+k) models.
We extend this approach to pixel prediction, demonstrating strong performance using automatically-generated inpainted data.
In addition, for settings where commercial models are not publicly available for automatic data generation, we evaluate if pixel detectors can be trained solely on whole synthetic images.
arXiv Detail & Related papers (2023-10-23T17:53:14Z)
- Re-Imagen: Retrieval-Augmented Text-to-Image Generator [58.60472701831404]
Retrieval-Augmented Text-to-Image Generator (Re-Imagen)
arXiv Detail & Related papers (2022-09-29T00:57:28Z)
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
- Unsupervised Novel View Synthesis from a Single Image [47.37120753568042]
Novel view synthesis from a single image aims at generating new views of an object from a single input image.
This work relaxes this assumption, enabling training of a conditional generative model for novel view synthesis in a completely unsupervised manner.
arXiv Detail & Related papers (2021-02-05T16:56:04Z)
- Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with a marginal drop in performance on the original classification task.
arXiv Detail & Related papers (2021-01-15T10:23:12Z)
- Automated Synthetic-to-Real Generalization [142.41531132965585]
We propose a learning-to-optimize (L2O) strategy to automate the selection of layer-wise learning rates.
We demonstrate that the proposed framework can significantly improve synthetic-to-real generalization performance without seeing or training on real data.
arXiv Detail & Related papers (2020-07-14T10:57:34Z)
- Decoupling Global and Local Representations via Invertible Generative Flows [47.366299240738094]
Experimental results on standard image benchmarks demonstrate the effectiveness of our model in terms of density estimation, image generation and unsupervised representation learning.
This work demonstrates that a generative model with a likelihood-based objective is capable of learning decoupled representations, requiring no explicit supervision.
arXiv Detail & Related papers (2020-04-12T03:18:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.