Texture Transform Attention for Realistic Image Inpainting
- URL: http://arxiv.org/abs/2012.04242v1
- Date: Tue, 8 Dec 2020 06:28:51 GMT
- Title: Texture Transform Attention for Realistic Image Inpainting
- Authors: Yejin Kim and Manri Cheon and Junwoo Lee
- Abstract summary: We propose a Texture Transform Attention network that better inpaints missing regions with fine details.
Texture Transform Attention is used to create a new reassembled texture map using fine textures and coarse semantics.
We evaluate our model end-to-end with the publicly available datasets CelebA-HQ and Places2.
- Score: 6.275013056564918
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the last few years, inpainting of missing regions has improved
significantly through the use of deep neural networks. Most inpainting methods
create a visually plausible structure and texture; however, they often generate
blurry results, so the final outcomes appear unrealistic and heterogeneous. To
solve this problem, existing methods have combined patch-based solutions with
deep neural networks, but these methods still cannot transfer texture properly.
Motivated by these observations, we propose a patch-based method, the Texture
Transform Attention network (TTA-Net), that better inpaints the missing region
with fine details. The model is a single refinement network in the form of a
U-Net architecture that transfers fine texture features from the encoder to
coarse semantic features of the decoder through skip connections. Texture
Transform Attention is used to create a new reassembled texture map from fine
textures and coarse semantics, which efficiently transfers texture information.
To stabilize the training process, we use a VGG feature layer of the ground
truth and a patch discriminator. We evaluate our model end-to-end on the
publicly available CelebA-HQ and Places2 datasets and demonstrate that it
obtains higher-quality images than existing state-of-the-art approaches.
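The core idea of the abstract, attending from coarse decoder semantics to fine encoder textures to build a reassembled texture map, can be sketched in a few lines. This is a minimal illustration of softmax patch attention, not the paper's actual implementation; all names and the 1x1-patch simplification are assumptions.

```python
import numpy as np

def texture_transform_attention(coarse_sem, fine_tex):
    """For each coarse (decoder) feature vector, attend over fine (encoder)
    texture features and take a softmax-weighted combination.
    coarse_sem: (N_coarse, C), fine_tex: (N_fine, C) flattened feature maps."""
    # Cosine-normalize both sides so attention scores are similarities.
    q = coarse_sem / (np.linalg.norm(coarse_sem, axis=1, keepdims=True) + 1e-8)
    k = fine_tex / (np.linalg.norm(fine_tex, axis=1, keepdims=True) + 1e-8)
    scores = q @ k.T                                # (N_coarse, N_fine)
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over fine positions
    # Reassembled texture map: weighted sum of fine texture features.
    return attn @ fine_tex
```

In the full network this map would be produced per skip connection and concatenated with the decoder features; here the output simply has one reassembled texture vector per coarse position.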
Related papers
- Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
arXiv Detail & Related papers (2024-05-13T21:53:09Z)
- TextureDreamer: Image-guided Texture Synthesis through Geometry-aware Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z)
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- Semantic Image Translation for Repairing the Texture Defects of Building Models [16.764719266178655]
We introduce a novel approach for synthesizing facade texture images that authentically reflect the architectural style from a structured label map.
Our proposed method is also capable of synthesizing texture images with specific styles for facades that lack pre-existing textures.
arXiv Detail & Related papers (2023-03-30T14:38:53Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- Delving Globally into Texture and Structure for Image Inpainting [20.954875933730808]
Image inpainting has achieved remarkable progress and inspired abundant methods, where the critical bottleneck is identified as how to fulfill the high-frequency structure and low-frequency texture information on the masked regions with semantics.
In this paper, we delve globally into texture and structure information to well capture the semantics for image inpainting.
Our model is orthogonal to fashionable arts such as Convolutional Neural Networks (CNNs), Attention, and Transformer models, from the perspective of texture and structure information for image inpainting.
arXiv Detail & Related papers (2022-09-17T02:19:26Z)
- DAM-GAN: Image Inpainting using Dynamic Attention Map based on Fake Texture Detection [6.872690425240007]
We introduce a GAN-based model using a dynamic attention map (DAM-GAN).
Our proposed DAM-GAN concentrates on detecting fake texture and produces dynamic attention maps to diminish pixel inconsistency in the generator's feature maps.
Evaluation results on CelebA-HQ and Places2 datasets show the superiority of our network.
arXiv Detail & Related papers (2022-04-20T13:15:52Z)
- Adaptive Image Inpainting [43.02281823557039]
Inpainting methods have shown significant improvements by using deep neural networks.
The problem is rooted in the encoder layers' ineffectiveness in building a complete and faithful embedding of the missing regions.
We propose a distillation-based approach for inpainting, where we provide direct feature-level supervision for the encoder layers.
arXiv Detail & Related papers (2022-01-01T12:16:01Z)
- Semantic Layout Manipulation with High-Resolution Sparse Attention [106.59650698907953]
We tackle the problem of semantic image layout manipulation, which aims to manipulate an input image by editing its semantic label map.
A core problem of this task is how to transfer visual details from the input images to the new semantic layout while making the resulting image visually realistic.
We propose a high-resolution sparse attention module that effectively transfers visual details to new layouts at a resolution up to 512x512.
arXiv Detail & Related papers (2020-12-14T06:50:43Z)
- Texture Memory-Augmented Deep Patch-Based Image Inpainting [121.41395272974611]
We propose a new deep inpainting framework where texture generation is guided by a texture memory of patch samples extracted from unmasked regions.
The framework has a novel design that allows texture memory retrieval to be trained end-to-end with the deep inpainting network.
The proposed method shows superior performance both qualitatively and quantitatively on three challenging image benchmarks.
arXiv Detail & Related papers (2020-09-28T12:09:08Z)
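The texture-memory idea above, retrieving patch samples from unmasked regions to guide generation, can be sketched as nearest-neighbor retrieval over patch features. This is an illustrative assumption-laden sketch, not the framework's trained end-to-end retrieval; all names are hypothetical.

```python
import numpy as np

def retrieve_texture_patches(masked_feat, memory_patches, top_k=1):
    """For each query feature from the masked region, retrieve the top-k most
    similar patch features from a memory built on unmasked regions, using
    cosine similarity. masked_feat: (Nq, C); memory_patches: (Nm, C)."""
    q = masked_feat / (np.linalg.norm(masked_feat, axis=1, keepdims=True) + 1e-8)
    m = memory_patches / (np.linalg.norm(memory_patches, axis=1, keepdims=True) + 1e-8)
    sims = q @ m.T                                  # (Nq, Nm) cosine similarities
    idx = np.argsort(-sims, axis=1)[:, :top_k]      # top-k memory indices per query
    # Average the retrieved patch features to guide texture generation.
    return memory_patches[idx].mean(axis=1), idx
```

In the paper's setting the retrieval would be differentiable and trained jointly with the inpainting network; a hard argsort as above only approximates that behavior at inference time.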
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.