Region-aware Attention for Image Inpainting
- URL: http://arxiv.org/abs/2204.01004v1
- Date: Sun, 3 Apr 2022 06:26:22 GMT
- Title: Region-aware Attention for Image Inpainting
- Authors: Zhilin Huang, Chujun Qin, Zhenyu Weng and Yuesheng Zhu
- Abstract summary: We propose a novel region-aware attention (RA) module for inpainting images.
By avoiding directly calculating the correlation between each pixel pair within a single sample, the model is not misled by invalid information in holes.
A learnable region dictionary (LRD) is introduced to store important information in the entire dataset.
Our method can generate semantically plausible results with realistic details.
- Score: 33.22497212024083
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent attention-based image inpainting methods have made inspiring progress
by modeling long-range dependencies within a single image. However, they tend
to generate blurry content since the correlation between pixel pairs is
always misled by ill-predicted features in holes. To handle this problem, we
propose a novel region-aware attention (RA) module. By avoiding directly
calculating the correlation between each pixel pair within a single sample and
instead considering the correlation between different samples, the misleading
influence of invalid information in holes can be avoided. Meanwhile, a learnable region
dictionary (LRD) is introduced to store important information in the entire
dataset, which not only simplifies correlation modeling, but also avoids
information redundancy. By applying RA in our architecture, our method can
generate semantically plausible results with realistic details. Extensive
experiments on CelebA, Places2 and Paris StreetView datasets validate the
superiority of our method compared with existing methods.
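To make the idea concrete, the RA module with its learnable region dictionary (LRD) can be pictured as an attention layer whose keys and values come from a dictionary learned over the whole dataset, so queries from hole regions never attend to the possibly invalid in-hole features of the same image. The PyTorch sketch below is only an illustration of this reading of the abstract; the class name, projection layers, dictionary size, and residual connection are assumptions, not the authors' released implementation.

```python
# Minimal sketch of region-aware attention with a learnable region dictionary.
# Shapes, layer choices, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class RegionAwareAttention(nn.Module):
    def __init__(self, channels: int, num_regions: int = 256):
        super().__init__()
        # Learnable region dictionary shared across the whole dataset: correlation
        # is computed against dictionary entries instead of in-hole pixel features.
        self.dictionary = nn.Parameter(torch.randn(num_regions, channels))
        self.to_query = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_out = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) features containing both valid and hole regions
        b, c, h, w = feat.shape
        q = self.to_query(feat).flatten(2).transpose(1, 2)                 # (B, HW, C)
        attn = torch.softmax(q @ self.dictionary.t() / c ** 0.5, dim=-1)   # (B, HW, R)
        out = (attn @ self.dictionary).transpose(1, 2).reshape(b, c, h, w)  # (B, C, H, W)
        return feat + self.to_out(out)  # residual connection (assumed)
```

In a full inpainting network, such a layer would typically sit in the decoder, but the abstract does not specify the exact placement.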
Related papers
- Learning to Rank Patches for Unbiased Image Redundancy Reduction [80.93989115541966]
Images suffer from heavy spatial redundancy because pixels in neighboring regions are spatially correlated.
Existing approaches strive to overcome this limitation by reducing less meaningful image regions.
We propose a self-supervised framework for image redundancy reduction called Learning to Rank Patches.
arXiv Detail & Related papers (2024-03-31T13:12:41Z)
- Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-view CT Reconstruction [4.227116189483428]
This study introduces a novel Cascaded Diffusion with Discrepancy Mitigation framework.
It performs low-quality image generation in latent space and high-quality image generation in pixel space.
It minimizes computational costs by moving some inference steps from pixel space to latent space.
arXiv Detail & Related papers (2024-03-14T12:58:28Z)
- Pixel-Inconsistency Modeling for Image Manipulation Localization [63.54342601757723]
Digital image forensics plays a crucial role in image authentication and manipulation localization.
This paper presents a generalized and robust manipulation localization model through the analysis of pixel inconsistency artifacts.
Experiments show that our method successfully extracts inherent pixel-inconsistency forgery fingerprints.
arXiv Detail & Related papers (2023-09-30T02:54:51Z)
- Towards Coherent Image Inpainting Using Denoising Diffusion Implicit Models [43.83732051916894]
We propose COPAINT, which can coherently inpaint the whole image without introducing mismatches.
COPAINT also uses a Bayesian framework to jointly modify both revealed and unrevealed regions.
Our experiments verify that COPAINT can outperform the existing diffusion-based methods under both objective and subjective metrics.
arXiv Detail & Related papers (2023-04-06T18:35:13Z)
- Towards Effective Image Manipulation Detection with Proposal Contrastive Learning [61.5469708038966]
We propose Proposal Contrastive Learning (PCL) for effective image manipulation detection.
Our PCL consists of a two-stream architecture that extracts two types of global features from the RGB and noise views, respectively.
Our PCL can be easily adapted to unlabeled data in practice, which can reduce manual labeling costs and promote more generalizable features.
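As a rough illustration of how such a two-stream setup can be trained, one common choice is an InfoNCE-style loss that pairs the global feature from the RGB view with the noise-view feature of the same image and treats all other pairs in the batch as negatives. The sketch below is an assumption of that kind; the feature extractors, the noise-view construction, and the temperature are placeholders rather than the paper's actual components.

```python
# Hypothetical InfoNCE-style contrast between RGB-view and noise-view features.
import torch
import torch.nn.functional as F

def contrastive_loss(rgb_feats: torch.Tensor, noise_feats: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """Pairs each RGB-view feature with the noise-view feature of the same image;
    all other pairs in the batch act as negatives."""
    rgb = F.normalize(rgb_feats, dim=1)      # (N, D)
    noise = F.normalize(noise_feats, dim=1)  # (N, D)
    logits = rgb @ noise.t() / temperature   # (N, N) similarity matrix
    targets = torch.arange(rgb.size(0), device=rgb.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)
```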
arXiv Detail & Related papers (2022-10-16T13:30:13Z)
- Manifold-Inspired Single Image Interpolation [17.304301226838614]
Many approaches to single image interpolation use manifold models to exploit semi-local similarity.
However, aliasing in the input image makes this challenging.
We propose a carefully-designed adaptive technique to remove aliasing in severely aliased regions.
This technique enables reliable identification of similar patches even in the presence of strong aliasing.
arXiv Detail & Related papers (2021-07-31T04:29:05Z)
- Cross-Scale Internal Graph Neural Network for Image Super-Resolution [147.77050877373674]
Non-local self-similarity in natural images has been well studied as an effective prior in image restoration.
For single image super-resolution (SISR), most existing deep non-local methods only exploit similar patches within the same scale of the low-resolution (LR) input image.
In contrast, cross-scale similarity is exploited using a novel cross-scale internal graph neural network (IGNN).
arXiv Detail & Related papers (2020-06-30T10:48:40Z)
- High-Order Information Matters: Learning Relation and Topology for Occluded Person Re-Identification [84.43394420267794]
We propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms the state of the art by 6.5% mAP on the Occluded-Duke dataset.
arXiv Detail & Related papers (2020-03-18T12:18:35Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
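The "dense combinations of dilated convolutions" mentioned in the last entry can be sketched as a block of parallel 3x3 convolutions with increasing dilation rates whose outputs are fused, enlarging the effective receptive field without extra downsampling. The layer widths, dilation rates, and residual fusion below are illustrative assumptions, not the paper's exact design.

```python
# Hypothetical dense dilated-convolution block for large receptive fields.
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    """Parallel 3x3 convolutions with growing dilation rates, fused by a 1x1 conv.
    Larger dilations see wider context, which helps fill large holes."""
    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [self.act(branch(x)) for branch in self.branches]
        return x + self.fuse(torch.cat(feats, dim=1))  # residual fusion (assumed)
```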