Meta Internal Learning
- URL: http://arxiv.org/abs/2110.02900v1
- Date: Wed, 6 Oct 2021 16:27:38 GMT
- Title: Meta Internal Learning
- Authors: Raphael Bensadoun, Shir Gur, Tomer Galanti, Lior Wolf
- Abstract summary: Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
- Score: 88.68276505511922
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image. Because these models are trained on a single image, they are limited in scale and application. To overcome these limitations, we propose a meta-learning approach that enables training over a collection of images in order to model the internal statistics of each sample image more effectively. In the presented meta-learning approach, a single-image GAN model is generated for a given input image via a convolutional feedforward hypernetwork $f$. This network is trained over a dataset of images, allowing for feature sharing among the different models and for interpolation in the space of generative models. The generated single-image model contains a hierarchy of multiple generators and discriminators; the meta-learner must therefore be trained in an adversarial manner, which requires careful design choices that we justify with a theoretical analysis. Our results show that the obtained models are as suitable as single-image GANs for many common image applications, significantly reduce the training time per image without loss in performance, and introduce novel capabilities, such as interpolation and feedforward modeling of novel images.
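As a rough illustration of the approach described in the abstract, the following is a minimal sketch (not the authors' implementation) of a convolutional feedforward hypernetwork $f$ that maps an input image to the weights of a small per-image convolutional generator. All names, layer sizes, and the single-scale, single-layer setup are assumptions made for illustration; the actual model produces a multi-scale hierarchy of generators and discriminators and trains $f$ adversarially over a collection of images.

```python
# Illustrative sketch only: a hypernetwork that emits per-image generator weights.
# Layer sizes and module names are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperNetwork(nn.Module):
    """f: input image -> weights of one 3x3 conv layer of a per-image generator."""
    def __init__(self, feat_dim=64, gen_ch=32):
        super().__init__()
        self.gen_ch = gen_ch
        self.encoder = nn.Sequential(               # shared convolutional trunk
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                # image -> feat_dim descriptor
        )
        # Heads that predict the generator layer's weight tensor and bias.
        self.w_head = nn.Linear(feat_dim, gen_ch * 3 * 3 * 3)
        self.b_head = nn.Linear(feat_dim, gen_ch)

    def forward(self, image):
        z = self.encoder(image).flatten(1)          # (B, feat_dim)
        w = self.w_head(z).view(-1, self.gen_ch, 3, 3, 3)
        b = self.b_head(z)
        return w, b

def generate(hyper, image, noise):
    """Apply the image-conditioned generator layer produced by the hypernetwork."""
    w, b = hyper(image)
    outputs = []
    for i in range(image.size(0)):                  # weights differ per sample
        outputs.append(F.conv2d(noise[i:i + 1], w[i], b[i], padding=1))
    return torch.cat(outputs)

# Interpolation in the space of generative models then amounts to blending the
# predicted weights of two images, e.g. w = alpha * w_a + (1 - alpha) * w_b.
```

Because $f$ is feedforward, a single-image model for a novel image is obtained in one forward pass rather than through per-image GAN training, which is what underlies the reported reduction in training time per image.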
Related papers
- A Simple Approach to Unifying Diffusion-based Conditional Generation [63.389616350290595]
We introduce a simple, unified framework to handle diverse conditional generation tasks.
Our approach enables versatile capabilities via different inference-time sampling schemes.
Our model supports additional capabilities like non-spatially aligned and coarse conditioning.
arXiv Detail & Related papers (2024-10-15T09:41:43Z)
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Many-to-many Image Generation with Auto-regressive Diffusion Models [59.5041405824704]
This paper introduces a domain-general framework for many-to-many image generation, capable of producing interrelated image series from a given set of images.
We present MIS, a novel large-scale multi-image dataset, containing 12M synthetic multi-image samples, each with 25 interconnected images.
We learn M2M, an autoregressive model for many-to-many generation, where each image is modeled within a diffusion framework.
arXiv Detail & Related papers (2024-04-03T23:20:40Z)
- Unlocking Pre-trained Image Backbones for Semantic Image Synthesis [29.688029979801577]
We propose a new class of GAN discriminators for semantic image synthesis that yields highly realistic images.
Our model, which we dub DP-SIMS, achieves state-of-the-art results in terms of image quality and consistency with the input label maps on ADE-20K, COCO-Stuff, and Cityscapes.
arXiv Detail & Related papers (2023-12-20T09:39:19Z)
- BlendGAN: Learning and Blending the Internal Distributions of Single Images by Spatial Image-Identity Conditioning [37.21764919074815]
Single image generative methods are designed to learn the internal patch distribution of a single natural image at multiple scales.
We introduce an extended framework that allows the internal distributions of several images to be learned simultaneously.
Our BlendGAN opens the door to applications that are not supported by single-image models.
arXiv Detail & Related papers (2022-12-03T10:38:27Z)
- InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images into the latent space of a high-quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z)
- Counterfactual Generative Networks [59.080843365828756]
We propose to decompose the image generation process into independent causal mechanisms that we train without direct supervision.
By exploiting appropriate inductive biases, these mechanisms disentangle object shape, object texture, and background.
We show that the counterfactual images can improve out-of-distribution robustness with a marginal drop in performance on the original classification task.
arXiv Detail & Related papers (2021-01-15T10:23:12Z)
- Training End-to-end Single Image Generators without GANs [27.393821783237186]
AugurOne is a novel approach for training single image generative models.
Our approach trains an upscaling neural network using non-affine augmentations of the (single) input image.
A compact latent space is jointly learned allowing for controlled image synthesis.
arXiv Detail & Related papers (2020-04-07T17:58:03Z)
- Improved Techniques for Training Single-Image GANs [44.251222212306764]
Generative models can be learned from a single image, as opposed to from a large dataset.
We propose some best practices to train a model capable of generating realistic images from only a single sample.
Our model is up to six times faster to train, has fewer parameters, and can better capture the global structure of images.
arXiv Detail & Related papers (2020-03-25T17:33:25Z)