High-Resolution Image Inpainting with Iterative Confidence Feedback and
Guided Upsampling
- URL: http://arxiv.org/abs/2005.11742v2
- Date: Tue, 14 Jul 2020 05:52:30 GMT
- Title: High-Resolution Image Inpainting with Iterative Confidence Feedback and
Guided Upsampling
- Authors: Yu Zeng, Zhe Lin, Jimei Yang, Jianming Zhang, Eli Shechtman, Huchuan
Lu
- Abstract summary: Existing image inpainting methods often produce artifacts when dealing with large holes in real applications.
We propose an iterative inpainting method with a feedback mechanism.
Experiments show that our method significantly outperforms existing methods in both quantitative and qualitative evaluations.
- Score: 122.06593036862611
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing image inpainting methods often produce artifacts when dealing with
large holes in real applications. To address this challenge, we propose an
iterative inpainting method with a feedback mechanism. Specifically, we
introduce a deep generative model which not only outputs an inpainting result
but also a corresponding confidence map. Using this map as feedback, it
progressively fills the hole by trusting only high-confidence pixels inside the
hole at each iteration and focuses on the remaining pixels in the next
iteration. As it reuses partial predictions from the previous iterations as
known pixels, this process gradually improves the result. In addition, we
propose a guided upsampling network to enable generation of high-resolution
inpainting results. We achieve this by extending the Contextual Attention
module to borrow high-resolution feature patches in the input image.
Furthermore, to mimic real object removal scenarios, we collect a large object
mask dataset and synthesize more realistic training data that better simulates
user inputs. Experiments show that our method significantly outperforms
existing methods in both quantitative and qualitative evaluations. More results
and Web APP are available at https://zengxianyu.github.io/iic.
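
To make the feedback mechanism concrete, below is a minimal sketch of the iterative loop described in the abstract, written in PyTorch-style Python. The `generator` callable, the tensor shapes, the iteration count, and the 0.5 confidence threshold are illustrative assumptions, not the authors' released code.

```python
import torch

def iterative_inpaint(generator, image, hole_mask, num_iters=4, conf_thresh=0.5):
    """Fill a hole over several passes, trusting only high-confidence pixels each time.

    generator : hypothetical callable mapping (image, mask) -> (result, confidence_map).
    image     : (1, 3, H, W) input with the hole region zeroed out.
    hole_mask : (1, 1, H, W), 1 inside the hole, 0 for known pixels.
    """
    known, remaining = image, hole_mask
    for _ in range(num_iters):
        result, confidence = generator(known, remaining)
        # Keep only confident predictions that lie inside the still-unfilled region.
        trusted = (confidence > conf_thresh) & (remaining > 0)
        known = torch.where(trusted, result, known)     # reuse them as known pixels
        remaining = remaining * (~trusted).float()      # the next pass focuses on the rest
        if remaining.sum() == 0:
            break
    # Whatever is still unfilled takes the last prediction as-is.
    return torch.where(remaining > 0, result, known)
```

Each pass converts trusted predictions into known context, so later passes condition on progressively more filled-in content, which is what gradually improves the result.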
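The guided upsampling idea can be sketched in the same spirit: match each low-resolution hole location to a similar known location, then borrow the corresponding high-resolution block from the input image, analogous to how the extended Contextual Attention module borrows high-resolution feature patches. The hard nearest-neighbour matching on raw pixels below is a simplification of that learned attention; all shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

def guided_upsample(lr_result, lr_hole_mask, hr_image, scale=4):
    """For every low-resolution hole pixel, find the most similar known pixel
    and copy the corresponding scale x scale block from the high-resolution input.

    lr_result    : (1, C, h, w) low-resolution inpainting result.
    lr_hole_mask : (1, 1, h, w), 1 inside the hole, 0 for known pixels.
    hr_image     : (1, C, h*scale, w*scale) high-resolution input image.
    """
    _, c, h, w = lr_result.shape
    feats = F.normalize(lr_result.flatten(2), dim=1)           # (1, C, h*w), unit vector per pixel
    sim = feats.transpose(1, 2) @ feats                        # (1, h*w, h*w) cosine similarities
    known = (lr_hole_mask.flatten(2) == 0).squeeze(1)          # (1, h*w) True for known pixels
    sim = sim.masked_fill(~known.unsqueeze(1), float("-inf"))  # only allow matches to known pixels
    match = sim.argmax(dim=-1).view(h, w)                      # best known pixel for every location

    # Start from plain bilinear upsampling, then paste borrowed high-resolution blocks into the hole.
    hr_out = F.interpolate(lr_result, scale_factor=scale, mode="bilinear", align_corners=False)
    hr_blocks = F.unfold(hr_image, kernel_size=scale, stride=scale)  # (1, C*scale*scale, h*w)
    for y, x in (lr_hole_mask[0, 0] > 0).nonzero(as_tuple=False).tolist():
        block = hr_blocks[0, :, match[y, x]].view(c, scale, scale)
        hr_out[0, :, y * scale:(y + 1) * scale, x * scale:(x + 1) * scale] = block
    return hr_out
```

In the paper the matching operates on learned features with soft attention weights; the sketch only illustrates why borrowing high-resolution patches preserves detail that plain upsampling would blur.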
Related papers
- AccDiffusion: An Accurate Method for Higher-Resolution Image Generation [63.53163540340026]
We propose AccDiffusion, an accurate method for patch-wise higher-resolution image generation without training.
An in-depth analysis in this paper reveals that using an identical text prompt for different patches causes repeated object generation.
Our AccDiffusion, for the first time, proposes to decouple the vanilla image-content-aware prompt into a set of patch-content-aware prompts.
arXiv Detail & Related papers (2024-07-15T14:06:29Z)
- DiffUHaul: A Training-Free Method for Object Dragging in Images [78.93531472479202]
We propose a training-free method, dubbed DiffUHaul, for the object dragging task.
We first apply attention masking in each denoising step to make the generation more disentangled across different objects.
In the early denoising steps, we interpolate the attention features between source and target images to smoothly fuse new layouts with the original appearance.
arXiv Detail & Related papers (2024-06-03T17:59:53Z)
- MagicRemover: Tuning-free Text-guided Image inpainting with Diffusion Models [24.690863845885367]
We propose MagicRemover, a tuning-free method that leverages the powerful diffusion models for text-guided image inpainting.
We introduce an attention guidance strategy to constrain the sampling process of diffusion models, enabling the erasing of instructed areas and the restoration of occluded content.
arXiv Detail & Related papers (2023-10-04T14:34:11Z)
- Pixel-Inconsistency Modeling for Image Manipulation Localization [59.968362815126326]
Digital image forensics plays a crucial role in image authentication and manipulation localization.
This paper presents a generalized and robust manipulation localization model through the analysis of pixel inconsistency artifacts.
Experiments show that our method successfully extracts inherent pixel-inconsistency forgery fingerprints.
arXiv Detail & Related papers (2023-09-30T02:54:51Z)
- SuperInpaint: Learning Detail-Enhanced Attentional Implicit Representation for Super-resolutional Image Inpainting [26.309834304515544]
We introduce a challenging image restoration task, referred to as SuperInpaint.
This task aims to reconstruct missing regions in low-resolution images and generate completed images with arbitrarily higher resolutions.
We propose the detail-enhanced attentional implicit representation that can achieve SuperInpaint with a single model.
arXiv Detail & Related papers (2023-07-26T20:28:58Z)
- Masked Image Training for Generalizable Deep Image Denoising [53.03126421917465]
We present a novel approach to enhance the generalization performance of denoising networks.
Our method involves masking random pixels of the input image and reconstructing the missing information during training.
Our approach exhibits better generalization ability than other deep learning models and is directly applicable to real-world scenarios.
arXiv Detail & Related papers (2023-03-23T09:33:44Z)
- High-resolution Iterative Feedback Network for Camouflaged Object Detection [128.893782016078]
Spotting camouflaged objects that are visually assimilated into the background is tricky for object detection algorithms.
We aim to extract the high-resolution texture details to avoid the detail degradation that causes blurred vision in edges and boundaries.
We introduce a novel HitNet to refine the low-resolution representations by high-resolution features in an iterative feedback manner.
arXiv Detail & Related papers (2022-03-22T11:20:21Z)
- Image Inpainting Using AutoEncoder and Guided Selection of Predicted Pixels [9.527576103168984]
In this paper, we propose a network for image inpainting. This network, similar to U-Net, extracts various features from images, leading to better results.
We improved the final results by replacing the damaged pixels with the recovered pixels of the output images (a minimal compositing sketch is given after this list).
arXiv Detail & Related papers (2021-12-17T00:10:34Z)
- Noise Doesn't Lie: Towards Universal Detection of Deep Inpainting [42.189768203036394]
We make the first attempt towards universal detection of deep inpainting, where the detection network can generalize well.
Our approach outperforms existing detection methods by a large margin and generalizes well to unseen deep inpainting techniques.
arXiv Detail & Related papers (2021-06-03T01:29:29Z)
- Deep Two-Stage High-Resolution Image Inpainting [0.0]
In this article, we propose a method that solves the problem of inpainting arbitrary-size images.
For this, we propose to use information from neighboring pixels by shifting the original image in four directions.
This approach can work with existing inpainting models, making them almost resolution independent without the need for retraining.
arXiv Detail & Related papers (2021-04-27T20:32:21Z)
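
The blending step described in the guided-selection entry above amounts to keeping the network prediction only inside the damaged region and the original pixels everywhere else. A minimal sketch, assuming a binary hole mask with 1 for damaged pixels; names and shapes are illustrative, not that paper's code:

```python
import torch

def composite(output, image, hole_mask):
    """Blend: prediction inside the hole, original pixels outside it.

    output    : (1, 3, H, W) network prediction.
    image     : (1, 3, H, W) input with valid pixels outside the hole.
    hole_mask : (1, 1, H, W), 1 for damaged pixels, 0 for known pixels.
    """
    return hole_mask * output + (1.0 - hole_mask) * image
```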
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.