Generator Pyramid for High-Resolution Image Inpainting
- URL: http://arxiv.org/abs/2012.02381v1
- Date: Fri, 4 Dec 2020 03:27:48 GMT
- Title: Generator Pyramid for High-Resolution Image Inpainting
- Authors: Leilei Cao, Tong Yang, Yixu Wang, Bo Yan, Yandong Guo
- Abstract summary: Inpainting high-resolution images with large holes challenges existing deep-learning-based image inpainting methods.
We present PyramidFill, a novel framework for the high-resolution image inpainting task, which explicitly disentangles content completion and texture synthesis.
Our model consists of a pyramid of fully convolutional GANs, wherein the content GAN is responsible for completing contents in the lowest-resolution masked image, and each texture GAN is responsible for synthesizing textures in a higher-resolution image.
- Score: 12.915306385626828
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inpainting high-resolution images with large holes challenges
existing deep-learning-based image inpainting methods. We present
PyramidFill, a novel framework for the high-resolution image inpainting
task, which explicitly disentangles content completion and texture
synthesis. PyramidFill progressively completes the content of unknown
regions in a lower-resolution image and synthesizes the textures of those
regions in higher-resolution images. Thus, our model consists of a pyramid
of fully convolutional GANs, wherein the content GAN is responsible for
completing contents in the lowest-resolution masked image, and each texture
GAN is responsible for synthesizing textures in a higher-resolution image.
Since completing contents and synthesizing textures demand different
abilities from generators, we customize different architectures for the
content GAN and the texture GAN. Experiments on multiple datasets,
including CelebA-HQ, Places2 and a new natural scenery dataset (NSHQ), at
different resolutions demonstrate that PyramidFill generates higher-quality
inpainting results than state-of-the-art methods. To better assess
high-resolution image inpainting methods, we will release NSHQ, a dataset
of high-quality natural scenery images at 1920$\times$1080 resolution.
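The coarse-to-fine pipeline the abstract describes can be sketched in a few lines. This is a toy illustration, not the authors' code: the generator internals are stand-ins (`content_fill` fills holes with the mean of known pixels, `texture_refine` simply keeps known pixels and takes upsampled coarse content inside holes), and all function names and signatures here are illustrative assumptions. Only the control flow mirrors the paper's idea: complete content at the lowest resolution, then refine textures scale by scale.

```python
# Toy sketch of a PyramidFill-style coarse-to-fine pipeline (not the paper's model).
# The "GANs" are replaced by trivial stand-in functions; only the pyramid control
# flow reflects the architecture described in the abstract.
import numpy as np

def downsample(img, factor):
    """Average-pool a 2D array by an integer factor (stand-in for bilinear resize)."""
    h, w = img.shape
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def upsample(img, factor):
    """Nearest-neighbor upsample by an integer factor."""
    return img.repeat(factor, axis=0).repeat(factor, axis=1)

def content_fill(img, mask):
    """Stand-in for the content GAN: fill holes with the mean of the known pixels."""
    out = img.copy()
    out[mask] = img[~mask].mean()
    return out

def texture_refine(coarse_up, img, mask):
    """Stand-in for a texture GAN: keep known pixels, use upsampled content in holes."""
    return np.where(mask, coarse_up, img)

def pyramid_fill(img, mask, levels=2):
    """Complete content at the coarsest scale, then refine textures scale by scale."""
    factor = 2 ** levels
    coarse = content_fill(downsample(img, factor),
                          downsample(mask.astype(float), factor) > 0.5)
    for lvl in range(levels - 1, -1, -1):
        f = 2 ** lvl
        img_f = downsample(img, f) if f > 1 else img
        mask_f = (downsample(mask.astype(float), f) > 0.5) if f > 1 else mask
        coarse = texture_refine(upsample(coarse, 2), img_f, mask_f)
    return coarse

# Tiny example: an 8x8 image with a 6x6 hole in the top-left corner.
img = np.arange(64, dtype=float).reshape(8, 8)
mask = np.zeros((8, 8), dtype=bool)
mask[0:6, 0:6] = True
result = pyramid_fill(img, mask)
```

Known pixels pass through unchanged at every scale; only the hole is filled from progressively finer stages, which is the disentanglement the abstract emphasizes.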
Related papers
- Meta 3D TextureGen: Fast and Consistent Texture Generation for 3D Objects [54.80813150893719]
We introduce Meta 3D TextureGen: a new feedforward method comprised of two sequential networks aimed at generating high-quality textures in less than 20 seconds.
Our method achieves state-of-the-art results in quality and speed by conditioning a text-to-image model on 3D semantics in 2D space and fusing them into a complete and high-resolution UV texture map.
In addition, we introduce a texture enhancement network that is capable of up-scaling any texture by an arbitrary ratio, producing 4k pixel resolution textures.
arXiv Detail & Related papers (2024-07-02T17:04:34Z) - ENTED: Enhanced Neural Texture Extraction and Distribution for
Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z) - TwinTex: Geometry-aware Texture Generation for Abstracted 3D
Architectural Models [13.248386665044087]
We present TwinTex, the first automatic texture mapping framework to generate a photo-realistic texture for a piece-wise planar proxy.
Our approach surpasses state-of-the-art texture mapping methods in fidelity and reaches human-expert production quality with much less effort.
arXiv Detail & Related papers (2023-09-20T12:33:53Z) - Pyramid Texture Filtering [86.15126028139736]
We present a simple but effective technique to smooth out textures while preserving the prominent structures.
Our method is built upon a key observation -- the coarsest level in a Gaussian pyramid often naturally eliminates textures and summarizes the main image structures.
We show that our approach is effective at separating structure from texture of different scales, local contrasts, and forms, without degrading structures or introducing visual artifacts.
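The key observation quoted above is easy to demonstrate. The toy 1D demo below (my illustration, not the paper's upsampling-guided filter) builds a Gaussian pyramid by repeated blur-and-decimate and shows that the coarsest level suppresses fine texture while a large-scale step edge survives; the blur kernel and signal are assumptions chosen for clarity.

```python
# Toy demo of the observation behind Pyramid Texture Filtering: the coarsest
# Gaussian-pyramid level drops fine texture but keeps large-scale structure.
import numpy as np

def blur(x):
    """3-tap binomial blur [0.25, 0.5, 0.25] with edge padding."""
    p = np.pad(x, 1, mode='edge')
    return 0.25 * p[:-2] + 0.5 * p[1:-1] + 0.25 * p[2:]

def gaussian_pyramid(x, levels):
    """Repeatedly blur and decimate by 2; returns all levels, finest first."""
    pyr = [x]
    for _ in range(levels):
        pyr.append(blur(pyr[-1])[::2])
    return pyr

n = 256
structure = np.where(np.arange(n) < n // 2, 0.0, 1.0)  # one large step edge
texture = 0.2 * np.sin(np.arange(n) * 2.0)             # fine oscillation
signal = structure + texture

pyr = gaussian_pyramid(signal, levels=5)
coarsest = pyr[-1]  # 8 samples: texture averaged away, step edge still visible
```

The coarsest level still separates the low half from the high half of the step, while the sinusoidal texture is averaged out, which is exactly the property the paper exploits before progressively reintroducing structure at finer scales.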
arXiv Detail & Related papers (2023-05-11T02:05:30Z) - Structure First Detail Next: Image Inpainting with Pyramid Generator [26.94101909283021]
We propose to build a Pyramid Generator by stacking several sub-generators.
Lower-layer sub-generators focus on restoring image structures while the higher-layer sub-generators emphasize image details.
Our approach uses a learning scheme that progressively increases hole size, allowing it to restore images with large holes.
arXiv Detail & Related papers (2021-06-16T16:00:16Z) - InfinityGAN: Towards Infinite-Resolution Image Synthesis [92.40782797030977]
We present InfinityGAN, a method to generate arbitrary-resolution images.
We show how it trains and infers patch-by-patch seamlessly with low computational resources.
arXiv Detail & Related papers (2021-04-08T17:59:30Z) - Aggregated Contextual Transformations for High-Resolution Image
Inpainting [57.241749273816374]
We propose an enhanced GAN-based model, named Aggregated COntextual-Transformation GAN (AOT-GAN) for high-resolution image inpainting.
To enhance context reasoning, we construct the generator of AOT-GAN by stacking multiple layers of a proposed AOT block.
For improving texture synthesis, we enhance the discriminator of AOT-GAN by training it with a tailored mask-prediction task.
arXiv Detail & Related papers (2021-04-03T15:50:17Z) - Generating Diverse Structure for Image Inpainting With Hierarchical
VQ-VAE [74.29384873537587]
We propose a two-stage model for diverse inpainting, where the first stage generates multiple coarse results each of which has a different structure, and the second stage refines each coarse result separately by augmenting texture.
Experimental results on CelebA-HQ, Places2, and ImageNet datasets show that our method not only enhances the diversity of the inpainting solutions but also improves the visual quality of the generated multiple images.
arXiv Detail & Related papers (2021-03-18T05:10:49Z) - Efficient texture-aware multi-GAN for image inpainting [5.33024001730262]
Recent GAN-based (generative adversarial network) inpainting methods show remarkable improvements.
We propose a multi-GAN architecture improving both the performance and rendering efficiency.
arXiv Detail & Related papers (2020-09-30T14:58:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.