Texture Memory-Augmented Deep Patch-Based Image Inpainting
- URL: http://arxiv.org/abs/2009.13240v2
- Date: Thu, 4 Nov 2021 04:11:47 GMT
- Title: Texture Memory-Augmented Deep Patch-Based Image Inpainting
- Authors: Rui Xu, Minghao Guo, Jiaqi Wang, Xiaoxiao Li, Bolei Zhou, Chen Change Loy
- Abstract summary: We propose a new deep inpainting framework where texture generation is guided by a texture memory of patch samples extracted from unmasked regions.
The framework has a novel design that allows texture memory retrieval to be trained end-to-end with the deep inpainting network.
The proposed method shows superior performance both qualitatively and quantitatively on three challenging image benchmarks.
- Score: 121.41395272974611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Patch-based methods and deep networks have been employed to tackle
the image inpainting problem, each with their own strengths and weaknesses. Patch-based
methods are capable of restoring a missing region with high-quality texture
through searching nearest neighbor patches from the unmasked regions. However,
these methods produce problematic content when recovering large missing regions.
Deep networks, on the other hand, show promising results in completing large
regions. Nonetheless, the results often lack faithful and sharp details that
resemble the surrounding area. By bringing together the best of both paradigms,
we propose a new deep inpainting framework where texture generation is guided
by a texture memory of patch samples extracted from unmasked regions. The
framework has a novel design that allows texture memory retrieval to be trained
end-to-end with the deep inpainting network. In addition, we introduce a patch
distribution loss to encourage high-quality patch synthesis. The proposed
method shows superior performance both qualitatively and quantitatively on
three challenging image benchmarks, i.e., Places, CelebA-HQ, and Paris
Street-View datasets.
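The end-to-end retrieval described in the abstract can be pictured as soft attention over a bank of patches cut from the unmasked regions, which keeps retrieval differentiable and hence trainable with the rest of the network. Below is a minimal PyTorch sketch of that idea; the patch size, the validity threshold, the softmax temperature, and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def retrieve_texture(feat, mask, patch=8):
    """Soft retrieval of unmasked texture patches (hypothetical sketch).

    feat: (B, C, H, W) feature map, H and W divisible by `patch`.
    mask: (B, 1, H, W) float map, 1 = missing pixel.
    Returns a (B, C, H, W) texture map assembled from memory patches.
    """
    B, C, H, W = feat.shape
    # Non-overlapping patches as flat vectors: (B, C*patch*patch, L)
    patches = F.unfold(feat, patch, stride=patch)
    # Fraction of unmasked pixels per patch; near-1 means usable as memory
    keep = F.unfold(1.0 - mask, patch, stride=patch).mean(dim=1)  # (B, L)
    valid = keep > 0.9  # assumed threshold for "fully unmasked"

    q = F.normalize(patches, dim=1)                    # queries: every patch
    k = F.normalize(patches, dim=1)                    # keys: memory patches
    sim = torch.einsum('bdl,bdm->blm', q, k)           # cosine similarity
    sim = sim.masked_fill(~valid.unsqueeze(1), -1e9)   # attend to memory only
    attn = torch.softmax(sim * 10.0, dim=-1)           # sharp but differentiable

    out = torch.einsum('blm,bdm->bdl', attn, patches)  # weighted patch mixture
    return F.fold(out, (H, W), patch, stride=patch)    # reassemble the map
```

The paper's patch distribution loss, which pushes synthesized patches toward the statistics of real ones, is a separate training objective and is not sketched here.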
Related papers
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z) - EXTRACTER: Efficient Texture Matching with Attention and Gradient Enhancing for Large Scale Image Super Resolution [0.0]
Recent reference-based image super-resolution (RefSR) methods have improved on state-of-the-art deep methods by introducing attention mechanisms that enhance low-resolution images.
We propose a deep search with more efficient memory usage that significantly reduces the number of image patches.
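One generic way to cut memory in texture matching is to enumerate reference patches at a coarse stride, so each query compares against far fewer candidates, keeping only the top-k matches. A hedged PyTorch sketch of that idea; the names, patch size, stride, and k are assumptions, not EXTRACTER's exact procedure.

```python
import torch
import torch.nn.functional as F

def topk_patch_matches(lr_feat, ref_feat, patch=3, stride=4, k=4):
    """Memory-lean patch matching between a low-resolution query feature
    map and a reference feature map (hypothetical sketch).

    Returns top-k cosine similarities and candidate indices per query.
    """
    q = F.unfold(lr_feat, patch, padding=patch // 2)  # (B, D, Lq), one per pixel
    r = F.unfold(ref_feat, patch, stride=stride)      # (B, D, Lr), Lr << Lq
    q = F.normalize(q, dim=1)
    r = F.normalize(r, dim=1)
    sim = torch.einsum('bdq,bdr->bqr', q, r)          # (B, Lq, Lr) similarities
    return sim.topk(k, dim=-1)                        # values and indices
```

Striding the reference unfold by 4 shrinks the candidate set by roughly 16x, which is where the memory saving comes from.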
arXiv Detail & Related papers (2023-10-02T17:41:56Z) - Delving Globally into Texture and Structure for Image Inpainting [20.954875933730808]
Image inpainting has achieved remarkable progress and inspired abundant methods, where the critical bottleneck is identified as how to fill the masked regions with semantically plausible high-frequency structure and low-frequency texture information.
In this paper, we delve globally into texture and structure information to well capture the semantics for image inpainting.
Our model is orthogonal to fashionable architectures such as Convolutional Neural Networks (CNNs), attention, and Transformer models, approaching image inpainting from the perspective of texture and structure information.
arXiv Detail & Related papers (2022-09-17T02:19:26Z) - Patch-Based Stochastic Attention for Image Editing [4.8201607588546]
We propose an efficient attention layer based on the PatchMatch algorithm, which determines approximate nearest neighbors.
We demonstrate the usefulness of PSAL on several image editing tasks, such as image inpainting, guided image colorization, and single-image super-resolution.
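For context, PatchMatch finds approximate nearest-neighbor patches by random initialization followed by neighbor propagation and a shrinking random search. The toy sketch below shows that base algorithm, not PSAL's attention layer itself; the shapes, the squared-distance cost, and all names are assumptions.

```python
import torch

def patchmatch(a, b, iters=2, radius=8):
    """Approximate nearest-neighbor field from patch descriptors `a` to
    `b` in the spirit of PatchMatch: random init, neighbor propagation,
    shrinking random search. a, b: (H, W, D) grids. Returns (H, W, 2)."""
    H, W, _ = a.shape
    Hb, Wb, _ = b.shape
    nnf = torch.stack([torch.randint(Hb, (H, W)),
                       torch.randint(Wb, (H, W))], dim=-1)

    def dist(y, x, m):
        my = int(m[0].clamp(0, Hb - 1))   # clamp candidate into bounds
        mx = int(m[1].clamp(0, Wb - 1))
        return float(((a[y, x] - b[my, mx]) ** 2).sum()), (my, mx)

    for _ in range(iters):
        for y in range(H):
            for x in range(W):
                best, bm = dist(y, x, nnf[y, x])
                # propagation: shift the left/top neighbors' matches by one
                for sy, sx in ((0, -1), (-1, 0)):
                    if y + sy >= 0 and x + sx >= 0:
                        cand = nnf[y + sy, x + sx] - torch.tensor([sy, sx])
                        d, m = dist(y, x, cand)
                        if d < best:
                            best, bm = d, m
                # random search around the current best, shrinking radius
                r = radius
                while r >= 1:
                    cand = torch.tensor(bm) + torch.randint(-r, r + 1, (2,))
                    d, m = dist(y, x, cand)
                    if d < best:
                        best, bm = d, m
                    r //= 2
                nnf[y, x] = torch.tensor(bm)
    return nnf
```

PSAL's contribution is to make such stochastic matches usable inside a differentiable attention layer, avoiding the quadratic memory cost of dense attention.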
arXiv Detail & Related papers (2022-02-07T13:42:00Z) - In&Out : Diverse Image Outpainting via GAN Inversion [89.84841983778672]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
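That conditioning can be pictured as a tiny generator mapping a shared latent code plus a normalized patch centre to pixels. A toy PyTorch sketch under assumed sizes and names, not the paper's actual architecture:

```python
import torch
import torch.nn as nn

class MicroPatchGenerator(nn.Module):
    """Renders one micro-patch from a latent code and its position
    (hypothetical sketch of position-conditioned generation)."""

    def __init__(self, z_dim=64, patch=16):
        super().__init__()
        self.patch = patch
        self.net = nn.Sequential(
            nn.Linear(z_dim + 2, 256), nn.ReLU(),
            nn.Linear(256, 3 * patch * patch), nn.Tanh())

    def forward(self, z, pos):
        # z: (B, z_dim) shared latent; pos: (B, 2) patch centre in [-1, 1]^2
        out = self.net(torch.cat([z, pos], dim=1))
        return out.view(-1, 3, self.patch, self.patch)
```

Outpainting then amounts to evaluating positions beyond the original canvas while holding the joint latent code fixed, with GAN inversion supplying a latent code consistent with the visible content.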
arXiv Detail & Related papers (2021-04-01T17:59:10Z) - Texture Transform Attention for Realistic Image Inpainting [6.275013056564918]
We propose a Texture Transform Attention network that better restores missing regions with fine details.
Texture Transform Attention is used to create a new reassembled texture map using fine textures and coarse semantics.
We evaluate our model end-to-end with the publicly available datasets CelebA-HQ and Places2.
arXiv Detail & Related papers (2020-12-08T06:28:51Z) - Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks of arbitrary shape can appear anywhere in an image, forming complex patterns.
It is difficult for encoders to learn representations that remain robust in such complex situations.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
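A common way to realise a self-supervised Siamese objective is an InfoNCE-style contrastive loss between embeddings of two views (e.g., two differently masked versions) of the same image. A generic sketch follows; the paper's exact loss and architecture may differ, and the names are assumptions.

```python
import torch
import torch.nn.functional as F

def siamese_contrastive_loss(feat_a, feat_b, temperature=0.1):
    """InfoNCE-style loss: matched views attract, others repel
    (generic sketch, not the paper's exact objective)."""
    za = F.normalize(feat_a.flatten(1), dim=1)  # (B, D) unit embeddings
    zb = F.normalize(feat_b.flatten(1), dim=1)
    logits = za @ zb.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(za.size(0), device=za.device)
    return F.cross_entropy(logits, labels)      # positives on the diagonal
```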
arXiv Detail & Related papers (2020-10-29T14:46:05Z) - Guidance and Evaluation: Semantic-Aware Image Inpainting for Mixed Scenes [54.836331922449666]
We propose a Semantic Guidance and Evaluation Network (SGE-Net) to update the structural priors and the inpainted image.
It utilizes a semantic segmentation map as guidance at each scale of inpainting, under which location-dependent inferences are re-evaluated.
Experiments on real-world images of mixed scenes demonstrated the superiority of our proposed method over state-of-the-art approaches.
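Using a segmentation map as scale-wise guidance can be as simple as resizing it to each decoder scale and concatenating it onto the features, so later layers can make location-dependent inferences. A minimal sketch of that general idea, not SGE-Net's exact architecture; the names are assumptions.

```python
import torch
import torch.nn.functional as F

def guide_with_semantics(feat, seg_logits):
    """Attach a (resized) semantic map to a decoder feature map
    (generic sketch of segmentation-guided inpainting)."""
    seg = F.interpolate(seg_logits, size=feat.shape[-2:],
                        mode='bilinear', align_corners=False)
    return torch.cat([feat, seg.softmax(dim=1)], dim=1)  # channel concat
```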
arXiv Detail & Related papers (2020-03-15T17:49:20Z) - Very Long Natural Scenery Image Prediction by Outpainting [96.8509015981031]
Outpainting has received less attention due to two challenges.
The first challenge is how to keep spatial and content consistency between the generated images and the original input.
The second challenge is how to maintain high quality in the generated results.
arXiv Detail & Related papers (2019-12-29T16:29:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.