Drop the GAN: In Defense of Patches Nearest Neighbors as Single Image
Generative Models
- URL: http://arxiv.org/abs/2103.15545v1
- Date: Mon, 29 Mar 2021 12:20:46 GMT
- Title: Drop the GAN: In Defense of Patches Nearest Neighbors as Single Image
Generative Models
- Authors: Niv Granot, Assaf Shocher, Ben Feinstein, Shai Bagon and Michal Irani
- Abstract summary: We show that all of these tasks can be performed without any training, within several seconds, in a unified, surprisingly simple framework.
We start with an initial coarse guess, and then simply refine the details coarse-to-fine using patch-nearest-neighbor search.
This allows generating random novel images of higher quality, and much faster, than GANs.
- Score: 17.823089978609843
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Single image generative models perform synthesis and manipulation tasks by
capturing the distribution of patches within a single image. The classical (pre
Deep Learning) prevailing approaches for these tasks are based on an
optimization process that maximizes patch similarity between the input and
generated output. Recently, however, Single Image GANs were introduced not only
as a superior solution for such manipulation tasks, but also for remarkable
novel generative tasks. Despite their impressiveness, single image GANs require long
training time (usually hours) for each image and each task. They often suffer
from artifacts and are prone to optimization issues such as mode collapse. In
this paper, we show that all of these tasks can be performed without any
training, within several seconds, in a unified, surprisingly simple framework.
We revisit and cast the "good-old" patch-based methods into a novel
optimization-free framework. We start with an initial coarse guess, and then
simply refine the details coarse-to-fine using patch-nearest-neighbor search.
This allows generating random novel images of higher quality, and much faster, than GANs. We
further demonstrate a wide range of applications, such as image editing and
reshuffling, retargeting to different sizes, structural analogies, image
collage, and a newly introduced task of conditional inpainting. Not only is our
method faster (by a factor of $10^3$-$10^4$ compared to a GAN), it also produces
superior results (confirmed by quantitative and qualitative evaluation), with
fewer artifacts and more realistic global structure than any of the previous
approaches (whether GAN-based or classical patch-based).
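To make the coarse-to-fine refinement concrete, below is a minimal, hypothetical sketch in Python/NumPy. The function names, patch size, scale schedule, and noise level are illustrative assumptions rather than the authors' implementation, and the exhaustive nearest-neighbor search shown here would have to be replaced by a fast (e.g., GPU-based) search to approach the reported speeds.

```python
# Minimal sketch of coarse-to-fine patch-nearest-neighbor refinement,
# assuming a single-channel float image. All names, the patch size, the
# scale schedule, and the noise level are illustrative assumptions.
import numpy as np
from scipy.ndimage import zoom


def extract_patches(img, p):
    """Collect all overlapping p x p patches as flat row vectors."""
    H, W = img.shape
    rows = [img[i:i + p, j:j + p].ravel()
            for i in range(H - p + 1)
            for j in range(W - p + 1)]
    return np.stack(rows)  # shape: (num_patches, p*p)


def pnn_refine(guess, reference, p=7, iters=10):
    """Replace every patch of `guess` by its nearest neighbor among the
    patches of `reference`, averaging where patches overlap."""
    ref = extract_patches(reference, p)
    H, W = guess.shape
    for _ in range(iters):
        acc = np.zeros_like(guess)
        cnt = np.zeros_like(guess)
        for i in range(H - p + 1):
            for j in range(W - p + 1):
                q = guess[i:i + p, j:j + p].ravel()
                # Exhaustive L2 nearest-neighbor search; a GPU or
                # approximate search would be needed for speed in practice.
                k = np.argmin(((ref - q) ** 2).sum(axis=1))
                acc[i:i + p, j:j + p] += ref[k].reshape(p, p)
                cnt[i:i + p, j:j + p] += 1.0
        guess = acc / cnt
    return guess


def generate(image, scales=(0.25, 0.5, 1.0), p=7, noise_std=0.5):
    """Start from a noisy coarse guess, then refine coarse-to-fine by
    matching patches against the input image at each scale."""
    guess = zoom(image, scales[0])
    guess = guess + noise_std * np.random.randn(*guess.shape)  # coarse guess
    for s in scales:
        ref = zoom(image, s)  # input image at the current scale
        factors = (ref.shape[0] / guess.shape[0], ref.shape[1] / guess.shape[1])
        guess = zoom(guess, factors)  # upsample the previous result
        guess = pnn_refine(guess, ref, p=p)
    return guess
```

Editing, retargeting, or structural-analogy variants would presumably differ mainly in how the initial guess and the per-scale reference images are constructed, while the refinement loop stays the same.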
Related papers
- GLEAN: Generative Latent Bank for Image Super-Resolution and Beyond [99.6233044915999]
We show that pre-trained Generative Adversarial Networks (GANs) such as StyleGAN and BigGAN can be used as a latent bank to improve the performance of image super-resolution.
Our method, Generative LatEnt bANk (GLEAN), goes beyond existing practices by directly leveraging rich and diverse priors encapsulated in a pre-trained GAN.
We extend our method to different tasks including image colorization and blind image restoration, and extensive experiments show that our proposed models perform favorably in comparison to existing methods.
arXiv Detail & Related papers (2022-07-29T17:59:01Z) - FewGAN: Generating from the Joint Distribution of a Few Images [95.6635227371479]
We introduce FewGAN, a generative model for generating novel, high-quality and diverse images.
FewGAN is a hierarchical patch-GAN that applies quantization at the first coarse scale, followed by a pyramid of residual fully convolutional GANs at finer scales.
In an extensive set of experiments, it is shown that FewGAN outperforms baselines both quantitatively and qualitatively.
arXiv Detail & Related papers (2022-07-18T07:11:28Z) - Generating natural images with direct Patch Distributions Matching [7.99536002595393]
We develop an algorithm that explicitly and efficiently minimizes the distance between patch distributions in two images.
Our results are often superior to single-image-GANs, require no training, and can generate high quality images in a few seconds.
arXiv Detail & Related papers (2022-03-22T16:38:52Z) - InvGAN: Invertible GANs [88.58338626299837]
InvGAN, short for Invertible GAN, successfully embeds real images to the latent space of a high quality generative model.
This allows us to perform image inpainting, merging, and online data augmentation.
arXiv Detail & Related papers (2021-12-08T21:39:00Z) - SDEdit: Image Synthesis and Editing with Stochastic Differential
Equations [113.35735935347465]
We introduce Stochastic Differential Editing (SDEdit), based on a recent generative modeling framework using stochastic differential equations (SDEs).
Given an input image with user edits, we first add noise to the input according to an SDE, and subsequently denoise it by simulating the reverse SDE to gradually increase its likelihood under the prior (a minimal sketch of this noise-then-denoise loop appears after this list).
Our method does not require task-specific loss function designs, which are critical components for recent image editing methods based on GAN inversions.
arXiv Detail & Related papers (2021-08-02T17:59:47Z) - InfinityGAN: Towards Infinite-Resolution Image Synthesis [92.40782797030977]
We present InfinityGAN, a method to generate arbitrary-resolution images.
We show how it trains and infers patch-by-patch seamlessly with low computational resources.
arXiv Detail & Related papers (2021-04-08T17:59:30Z) - The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play (H-PnP) framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the proposed H-PnP based image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z) - Training End-to-end Single Image Generators without GANs [27.393821783237186]
AugurOne is a novel approach for training single image generative models.
Our approach trains an upscaling neural network using non-affine augmentations of the (single) input image.
A compact latent space is jointly learned allowing for controlled image synthesis.
arXiv Detail & Related papers (2020-04-07T17:58:03Z)
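As a hedged aside on the SDEdit entry above, the following hypothetical sketch illustrates its noise-then-denoise idea. Here `score_fn` is a placeholder for a pretrained score/diffusion network, and the constant diffusion coefficient with Euler-Maruyama stepping is a simplifying assumption, not the paper's exact formulation.

```python
# Hypothetical, heavily simplified SDEdit-style editing loop. `score_fn`
# stands in for a pretrained score network; the constant diffusion
# coefficient and Euler-Maruyama integration are illustrative assumptions.
import numpy as np


def sdedit(edited_image, score_fn, t0=0.5, steps=500, sigma=1.0):
    """Perturb a user-edited image with Gaussian noise up to time t0,
    then integrate a reverse-time SDE back toward t = 0."""
    # Forward perturbation: x(t0) = x(0) + noise with std sigma * sqrt(t0).
    x = edited_image + sigma * np.sqrt(t0) * np.random.randn(*edited_image.shape)
    dt = t0 / steps
    for i in range(steps):
        t = t0 - i * dt
        # Reverse SDE step (Euler-Maruyama):
        # x <- x + sigma^2 * score(x, t) * dt + sigma * sqrt(dt) * noise
        x = (x
             + (sigma ** 2) * score_fn(x, t) * dt
             + sigma * np.sqrt(dt) * np.random.randn(*x.shape))
    return x
```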
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.