In&Out : Diverse Image Outpainting via GAN Inversion
- URL: http://arxiv.org/abs/2104.00675v1
- Date: Thu, 1 Apr 2021 17:59:10 GMT
- Title: In&Out : Diverse Image Outpainting via GAN Inversion
- Authors: Yen-Chi Cheng, Chieh Hubert Lin, Hsin-Ying Lee, Jian Ren, Sergey
Tulyakov, Ming-Hsuan Yang
- Abstract summary: Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
- Score: 89.84841983778672
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image outpainting seeks a semantically consistent extension of the input
image beyond its available content. Compared to inpainting -- filling in
missing pixels in a way coherent with the neighboring pixels -- outpainting can
be achieved in more diverse ways since the problem is less constrained by the
surrounding pixels. Existing image outpainting methods pose the problem as a
conditional image-to-image translation task, often generating repetitive
structures and textures by replicating the content available in the input
image. In this work, we formulate the problem from the perspective of inverting
generative adversarial networks. Our generator renders micro-patches
conditioned on their joint latent code as well as their individual positions in
the image. To outpaint an image, we seek multiple latent codes that not only
recover the available patches but also synthesize diverse content through
patch-based generation. This leads to richer structure and content in the
outpainted regions. Furthermore, our formulation allows for outpainting
conditioned on the categorical input, thereby enabling flexible user controls.
Extensive experimental results demonstrate that the proposed method performs
favorably against existing in- and outpainting methods, featuring higher visual
quality and diversity.
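To make the formulation above concrete, below is a minimal sketch of patch-based GAN inversion for outpainting. It is not the authors' implementation: `ToyGenerator`, the latent-consistency regularizer, and all hyperparameters are assumptions standing in for the paper's pretrained position-conditioned micro-patch generator and its actual priors.

```python
# Minimal sketch of the inversion idea described in the abstract, NOT the
# authors' released code. ToyGenerator is a hypothetical stand-in for a
# position-conditioned micro-patch generator; the latent-consistency term
# stands in for whatever priors the paper actually uses.
import torch
import torch.nn as nn

PATCH = 16  # assumed micro-patch size

class ToyGenerator(nn.Module):
    """Maps a latent code plus a normalized 2-D patch position to an RGB patch."""
    def __init__(self, z_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 3 * PATCH * PATCH), nn.Tanh())

    def forward(self, z: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        out = self.net(torch.cat([z, pos], dim=1))
        return out.view(-1, 3, PATCH, PATCH)

def invert_and_outpaint(G, known, known_pos, out_pos, steps=200, lam=0.1):
    """Optimize one latent code per patch: codes at known positions must
    reconstruct the visible patches, while codes at new positions are softly
    tied to them so the extension stays coherent."""
    n = known_pos.size(0)
    z = torch.randn(n + out_pos.size(0), 64, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        recon = G(z[:n], known_pos)
        loss = ((recon - known) ** 2).mean()  # fit the visible patches
        # assumed regularizer: keep outpainting codes near the known ones
        loss = loss + lam * ((z[n:] - z[:n].mean(0)) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return G(z[n:], out_pos)  # decode the outpainted micro-patches

G = ToyGenerator()
known = torch.rand(4, 3, PATCH, PATCH) * 2 - 1  # stand-in visible patches
known_pos = torch.tensor([[0.0, 0.0], [0.0, 0.5], [0.5, 0.0], [0.5, 0.5]])
out_pos = torch.tensor([[1.0, 0.0], [1.0, 0.5]])  # positions to extend into
print(invert_and_outpaint(G, known, known_pos, out_pos).shape)  # (2, 3, 16, 16)
```

In the paper the generator would be pretrained and frozen; re-running such an optimization from different random initializations of the latent codes is what yields diverse outpaintings of the same input.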
Related papers
- Fill in the ____ (a Diffusion-based Image Inpainting Pipeline) [0.0]
Inpainting is the process of generating lost or intentionally occluded portions of an image.
Modern inpainting techniques have shown remarkable ability in generating sensible completions.
The paper addresses a critical gap in these existing models: the ability to prompt and control what exactly is generated.
arXiv Detail & Related papers (2024-03-24T05:26:55Z) - Continuous-Multiple Image Outpainting in One-Step via Positional Query and A Diffusion-based Approach [104.2588068730834]
This paper pushes the technical frontier of image outpainting in two directions that have not been resolved in the literature.
We develop a method that does not depend on a pre-trained backbone network.
We evaluate the proposed approach (called PQDiff) on public benchmarks, demonstrating its superior performance over state-of-the-art approaches.
arXiv Detail & Related papers (2024-01-28T13:00:38Z) - Perceptual Artifacts Localization for Inpainting [60.5659086595901]
We propose a new learning task of automatic segmentation of inpainting perceptual artifacts.
We train advanced segmentation networks on a dataset to reliably localize inpainting artifacts within inpainted images.
We also propose a new evaluation metric called Perceptual Artifact Ratio (PAR), the ratio of objectionable inpainted regions to the entire inpainted area (see the sketch after this list).
arXiv Detail & Related papers (2022-08-05T18:50:51Z) - Cylin-Painting: Seamless 360° Panoramic Image Outpainting and Beyond [136.18504104345453]
We present a Cylin-Painting framework that involves meaningful collaborations between inpainting and outpainting.
The proposed algorithm can be effectively extended to other panoramic vision tasks, such as object detection, depth estimation, and image super-resolution.
arXiv Detail & Related papers (2022-04-18T21:18:49Z) - TransFill: Reference-guided Image Inpainting by Merging Multiple Color and Spatial Transformations [35.9576572490994]
We propose TransFill, a multi-homography transformed fusion method to fill the hole by referring to another source image that shares scene contents with the target image.
We learn to adjust the color and apply a pixel-level warping to each homography-warped source image to make it more consistent with the target.
Our method achieves state-of-the-art performance on pairs of images across a variety of wide baselines and color differences, and generalizes to user-provided image pairs.
arXiv Detail & Related papers (2021-03-29T22:45:07Z) - Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks of any shape can appear anywhere in an image, forming complex patterns.
It is difficult for encoders to capture robust representations in such complex situations.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
arXiv Detail & Related papers (2020-10-29T14:46:05Z) - Painting Outside as Inside: Edge Guided Image Outpainting via Bidirectional Rearrangement with Progressive Step Learning [18.38266676724225]
We propose a novel image outpainting method using bidirectional boundary region rearrangement.
The proposed method is compared with other state-of-the-art outpainting and inpainting methods both qualitatively and quantitatively.
The experimental results demonstrate that our method outperforms other methods and generates new images with 360° panoramic characteristics.
arXiv Detail & Related papers (2020-10-05T06:53:55Z) - Very Long Natural Scenery Image Prediction by Outpainting [96.8509015981031]
Outpainting receives less attention due to two challenges.
The first is how to keep spatial and content consistency between the generated image and the original input.
The second is how to maintain high quality in the generated results.
arXiv Detail & Related papers (2019-12-29T16:29:01Z)
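As referenced in the Perceptual Artifacts Localization entry above, here is a hedged sketch of the Perceptual Artifact Ratio (PAR). That entry defines PAR only as the ratio of objectionable inpainted regions to the entire inpainted area; the binary-mask representation and the function signature below are assumptions.

```python
# Hedged sketch of the Perceptual Artifact Ratio (PAR): artifact-labeled
# inpainted area divided by the whole inpainted area. The mask encoding is
# an assumption, not the paper's specification.
import numpy as np

def perceptual_artifact_ratio(artifact_mask: np.ndarray,
                              inpaint_mask: np.ndarray) -> float:
    """Both masks are boolean arrays of the same shape: `inpaint_mask` marks
    the filled-in region, `artifact_mask` the pixels a segmentation network
    flags as perceptual artifacts."""
    inpainted = inpaint_mask.astype(bool)
    artifacts = artifact_mask.astype(bool) & inpainted  # restrict to the hole
    total = inpainted.sum()
    return float(artifacts.sum() / total) if total else 0.0

# Toy example: a 16-pixel hole with 4 flagged pixels -> PAR = 0.25
hole = np.zeros((8, 8), dtype=bool)
hole[2:6, 2:6] = True  # inpainted region
bad = np.zeros((8, 8), dtype=bool)
bad[2:4, 2:4] = True   # flagged artifact pixels inside it
print(perceptual_artifact_ratio(bad, hole))  # 0.25
```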