SuperInpaint: Learning Detail-Enhanced Attentional Implicit
Representation for Super-resolutional Image Inpainting
- URL: http://arxiv.org/abs/2307.14489v1
- Date: Wed, 26 Jul 2023 20:28:58 GMT
- Title: SuperInpaint: Learning Detail-Enhanced Attentional Implicit
Representation for Super-resolutional Image Inpainting
- Authors: Canyu Zhang, Qing Guo, Xiaoguang Li, Renjie Wan, Hongkai Yu, Ivor
Tsang, Song Wang
- Abstract summary: We introduce a challenging image restoration task, referred to as SuperInpaint.
This task aims to reconstruct missing regions in low-resolution images and generate completed images with arbitrarily higher resolutions.
We propose the detail-enhanced attentional implicit representation that can achieve SuperInpaint with a single model.
- Score: 26.309834304515544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we introduce a challenging image restoration task, referred to
as SuperInpaint, which aims to reconstruct missing regions in low-resolution
images and generate completed images with arbitrarily higher resolutions. We
have found that this task cannot be effectively addressed by stacking
state-of-the-art super-resolution and image inpainting methods as they amplify
each other's flaws, leading to noticeable artifacts. To overcome these
limitations, we propose the detail-enhanced attentional implicit representation
(DEAR) that can achieve SuperInpaint with a single model, resulting in
high-quality completed images with arbitrary resolutions. Specifically, we use
a deep convolutional network to extract the latent embedding of an input image
and then enhance the high-frequency components of the latent embedding via an
adaptive high-pass filter. This leads to detail-enhanced semantic embedding. We
further feed the semantic embedding into an unmask-attentional module that
suppresses embeddings from ineffective masked pixels. Additionally, we extract
a pixel-wise importance map that indicates which pixels should be used for
image reconstruction. Given the coordinates of a pixel we want to reconstruct,
we first collect its neighboring pixels in the input image and extract their
detail-enhanced semantic embeddings, unmask-attentional semantic embeddings,
importance values, and spatial distances to the desired pixel. Then, we feed
all the above terms into an implicit representation and generate the color of
the specified pixel. To evaluate our method, we extend three existing datasets
for this new task and build 18 meaningful baselines using SOTA inpainting and
super-resolution methods. Extensive experimental results demonstrate that our
method outperforms all existing methods by a significant margin on four widely
used metrics.
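The abstract describes a concrete pipeline: encode the image, boost the high-frequency components of the embedding, gate out masked pixels, predict a per-pixel importance map, and query an implicit function at arbitrary coordinates. Below is a minimal PyTorch sketch of that flow under stated assumptions; the module sizes, single-neighbor sampling, and all names (e.g. `DEARSketch`) are illustrative, not the authors' released implementation.

```python
# Minimal sketch of the DEAR pipeline described in the abstract.
# All module sizes and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DEARSketch(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Deep convolutional encoder -> latent embedding.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1),
        )
        # Adaptive high-pass filter: a learned residual applied to the
        # high-frequency part of the embedding.
        self.highpass = nn.Conv2d(feat_dim, feat_dim, 3, padding=1)
        # Unmask-attentional gate: suppress embeddings at masked pixels.
        self.attn_gate = nn.Conv2d(feat_dim + 1, feat_dim, 1)
        # Pixel-wise importance map in [0, 1].
        self.importance = nn.Conv2d(feat_dim, 1, 1)
        # Implicit representation: (embedding, importance, offset) -> RGB.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1 + 2, 256), nn.ReLU(),
            nn.Linear(256, 3),
        )

    def forward(self, lr_image, mask, coords):
        # lr_image: (B,3,h,w); mask: (B,1,h,w), 1 = valid, 0 = hole.
        # coords: (B,N,2) query coordinates in [-1,1] at target resolution.
        z = self.encoder(lr_image)
        # Detail-enhanced semantic embedding: subtract a low-pass (blurred)
        # version so the learned filter acts on high frequencies.
        low = F.avg_pool2d(z, 3, stride=1, padding=1)
        z = z + self.highpass(z - low)
        # Unmask attention: condition on the mask and gate the features.
        gate = torch.sigmoid(self.attn_gate(torch.cat([z, mask], dim=1)))
        z_attn = z * gate
        imp = torch.sigmoid(self.importance(z_attn))
        # Sample the nearest embedding / importance at each query coordinate
        # (one neighbor here; the paper aggregates several neighbors).
        grid = coords.unsqueeze(1)                          # (B,1,N,2)
        feat = F.grid_sample(z_attn, grid, mode='nearest',
                             align_corners=False)[:, :, 0]  # (B,C,N)
        w = F.grid_sample(imp, grid, mode='nearest',
                          align_corners=False)[:, :, 0]     # (B,1,N)
        # Relative offset from the query to its sampled neighbor stands in
        # for the paper's spatial-distance term; zeros here is a
        # simplification of the multi-neighbor case.
        offset = torch.zeros_like(coords)                   # (B,N,2)
        inp = torch.cat([feat.transpose(1, 2),
                         w.transpose(1, 2), offset], dim=-1)
        return self.mlp(inp)                                # (B,N,3) RGB

# Usage: query a 2x-resolution pixel grid from a 64x64 masked input.
model = DEARSketch()
img = torch.rand(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.25).float()
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 128),
                        torch.linspace(-1, 1, 128), indexing='ij')
coords = torch.stack([xs, ys], dim=-1).reshape(1, -1, 2)
rgb = model(img * mask, mask, coords)   # (1, 16384, 3)
```

Because the MLP is queried per coordinate, the same trained model can in principle render the completed image at any output resolution, which is what makes a single-model SuperInpaint possible.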
Related papers
- Unsupervised Superpixel Generation using Edge-Sparse Embedding [18.92698251515116]
Partitioning an image into superpixels based on the similarity of pixels with respect to features can significantly reduce data complexity and improve subsequent image processing tasks.
We propose a non-convolutional image decoder to reduce the expected number of contrasts and enforce smooth, connected edges in the reconstructed image.
We generate edge-sparse pixel embeddings by encoding additional spatial information into the piece-wise smooth activation maps from the decoder's last hidden layer and use a standard clustering algorithm to extract high-quality superpixels.
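As a rough illustration of the final step, the sketch below clusters per-pixel embeddings augmented with scaled spatial coordinates; the random features stand in for the decoder's piece-wise smooth activation maps, and the spatial weight is an assumed hyperparameter.

```python
# Sketch of the clustering step: per-pixel embeddings are augmented with
# scaled (x, y) coordinates and grouped with standard k-means.
import numpy as np
from sklearn.cluster import KMeans

H, W, C, n_superpixels = 96, 128, 8, 150
feats = np.random.rand(H, W, C)            # placeholder pixel embeddings

# Encode spatial information: normalized coordinates, weighted so that
# clusters stay spatially compact (weight is an assumed hyperparameter).
ys, xs = np.mgrid[0:H, 0:W]
spatial_weight = 0.5
xy = np.stack([ys / H, xs / W], axis=-1) * spatial_weight

pixels = np.concatenate([feats, xy], axis=-1).reshape(-1, C + 2)
labels = KMeans(n_clusters=n_superpixels, n_init=4,
                random_state=0).fit_predict(pixels)
superpixels = labels.reshape(H, W)         # per-pixel superpixel id
```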
arXiv Detail & Related papers (2022-11-28T15:55:05Z) - Single Image Super-Resolution via a Dual Interactive Implicit Neural
Network [5.331665215168209]
We introduce a novel implicit neural network for the task of single image super-resolution at arbitrary scale factors.
We demonstrate the efficacy and flexibility of our approach against the state of the art on publicly available benchmark datasets.
arXiv Detail & Related papers (2022-10-23T02:05:19Z) - Deep Two-Stage High-Resolution Image Inpainting [0.0]
In this article, we propose a method that solves the problem of inpainting arbitrary-size images.
For this, we propose to use information from neighboring pixels by shifting the original image in four directions.
This approach can work with existing inpainting models, making them almost resolution independent without the need for retraining.
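A minimal sketch of the shifting idea, assuming the four shifted copies are exposed to a downstream model as extra channels (the paper's exact fusion may differ):

```python
# Sketch of the four-direction shift: build shifted copies of the input so
# a downstream inpainting model can see neighboring-pixel context. Stacking
# the views as channels is an illustrative choice, not the paper's exact
# formulation.
import numpy as np

def four_direction_views(image, shift=1):
    """image: (H, W, 3) array -> (H, W, 12) stack of 4 shifted copies."""
    views = [np.roll(image, shift, axis=0),    # down
             np.roll(image, -shift, axis=0),   # up
             np.roll(image, shift, axis=1),    # right
             np.roll(image, -shift, axis=1)]   # left
    return np.concatenate(views, axis=-1)

img = np.random.rand(256, 256, 3)
context = four_direction_views(img)            # (256, 256, 12)
```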
arXiv Detail & Related papers (2021-04-27T20:32:21Z) - Image Inpainting with Edge-guided Learnable Bidirectional Attention Maps [85.67745220834718]
We present an edge-guided learnable bidirectional attention map (Edge-LBAM) for improving image inpainting of irregular holes.
Our Edge-LBAM method contains dual procedures, including structure-aware mask-updating guided by predicted edges.
Extensive experiments show that our Edge-LBAM is effective in generating coherent image structures and preventing color discrepancy and blurriness.
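The mask-updating idea can be sketched as a layer-wise renormalization that shrinks the hole, in the spirit of partial convolutions; the activation below is an assumption, and the paper instead learns this function and additionally conditions it on predicted edges.

```python
# Sketch of a mask-update step: the mask is renormalized and the hole
# shrinks layer by layer as features propagate inward.
import torch
import torch.nn.functional as F

def mask_update(mask, kernel_size=3):
    """mask: (B,1,H,W), 1 = known pixel. Returns a soft updated mask."""
    weight = torch.ones(1, 1, kernel_size, kernel_size)
    coverage = F.conv2d(mask, weight, padding=kernel_size // 2)
    coverage = coverage / (kernel_size * kernel_size)   # fraction known
    return torch.clamp(coverage * 2.0, 0.0, 1.0)        # assumed activation

mask = torch.zeros(1, 1, 8, 8)
mask[..., :, :4] = 1.0            # left half known, right half is the hole
for _ in range(3):
    mask = mask_update(mask)      # hole shrinks with each layer
```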
arXiv Detail & Related papers (2021-04-25T07:25:16Z) - In&Out : Diverse Image Outpainting via GAN Inversion [89.84841983778672]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
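A minimal sketch of a position-conditioned micro-patch generator: each patch is synthesized from a shared latent code plus its location, so content beyond the image border can be rendered coherently. Sizes and the MLP design are assumptions, not the paper's GAN architecture.

```python
# Sketch: render a micro-patch from a joint latent code and its position.
import torch
import torch.nn as nn

patch, latent_dim = 8, 128
gen = nn.Sequential(nn.Linear(latent_dim + 2, 256), nn.ReLU(),
                    nn.Linear(256, patch * patch * 3), nn.Tanh())

z = torch.randn(1, latent_dim)          # joint latent code
pos = torch.tensor([[1.2, 0.5]])        # position outside [0,1] = outpainted
micro_patch = gen(torch.cat([z, pos], dim=-1)).view(1, 3, patch, patch)
```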
arXiv Detail & Related papers (2021-04-01T17:59:10Z) - TransFill: Reference-guided Image Inpainting by Merging Multiple Color
and Spatial Transformations [35.9576572490994]
We propose TransFill, a multi-homography transformed fusion method to fill the hole by referring to another source image that shares scene contents with the target image.
We learn to adjust the color and apply a pixel-level warping to each homography-warped source image to make it more consistent with the target.
Our method achieves state-of-the-art performance on pairs of images across a variety of wide baselines and color differences, and generalizes to user-provided image pairs.
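The single-warp skeleton of the idea can be sketched with OpenCV: estimate a homography from matched keypoints, warp the source, and paste it into the hole. TransFill itself fuses multiple homography proposals and applies learned color and pixel-level adjustments on top.

```python
# Sketch of reference-guided filling via a single homography warp.
import cv2
import numpy as np

def warp_and_fill(target, source, mask):
    """target/source: (H, W, 3) uint8; mask: (H, W) uint8, >0 in the hole."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(source, None)
    k2, d2 = orb.detectAndCompute(target, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    warped = cv2.warpPerspective(source, H, target.shape[1::-1])
    out = target.copy()
    out[mask > 0] = warped[mask > 0]   # fuse: copy warped pixels into hole
    return out
```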
arXiv Detail & Related papers (2021-03-29T22:45:07Z) - AINet: Association Implantation for Superpixel Segmentation [82.21559299694555]
We propose a novel Association Implantation (AI) module to enable the network to explicitly capture the relations between a pixel and its surrounding grids.
Our method not only achieves state-of-the-art performance but also maintains satisfactory inference efficiency.
arXiv Detail & Related papers (2021-01-26T10:40:13Z) - Semantic Layout Manipulation with High-Resolution Sparse Attention [106.59650698907953]
We tackle the problem of semantic image layout manipulation, which aims to manipulate an input image by editing its semantic label map.
A core problem of this task is how to transfer visual details from the input images to the new semantic layout while making the resulting image visually realistic.
We propose a high-resolution sparse attention module that effectively transfers visual details to new layouts at a resolution up to 512x512.
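One common way to make attention tractable at such resolutions is to let each query attend only to its top-k keys; the sketch below shows that pattern as an illustrative assumption, since the paper's exact sparsity scheme is not specified in the summary.

```python
# Sketch of top-k sparse attention: each query attends only to its k most
# similar keys, keeping memory manageable over high-resolution features.
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, topk=8):
    """q: (N,C) queries, k/v: (M,C) keys/values -> (N,C) outputs."""
    scores = q @ k.t() / q.shape[-1] ** 0.5          # (N, M) similarities
    vals, idx = scores.topk(topk, dim=-1)            # keep k best per query
    attn = F.softmax(vals, dim=-1)                   # (N, topk)
    return (attn.unsqueeze(-1) * v[idx]).sum(dim=1)  # gather and mix values

q = torch.rand(1024, 64)    # queries from a downsampled layout
kv = torch.rand(1024, 64)
out = topk_sparse_attention(q, kv, kv)
```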
arXiv Detail & Related papers (2020-12-14T06:50:43Z) - Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks of arbitrary shape can appear anywhere in an image, forming complex patterns.
It is difficult for encoders to capture powerful representations in such complex situations.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
arXiv Detail & Related papers (2020-10-29T14:46:05Z) - High-Resolution Image Inpainting with Iterative Confidence Feedback and
Guided Upsampling [122.06593036862611]
Existing image inpainting methods often produce artifacts when dealing with large holes in real applications.
We propose an iterative inpainting method with a feedback mechanism.
Experiments show that our method significantly outperforms existing methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2020-05-24T13:23:45Z)