Restore from Restored: Single-image Inpainting
- URL: http://arxiv.org/abs/2110.12822v1
- Date: Mon, 25 Oct 2021 11:38:51 GMT
- Title: Restore from Restored: Single-image Inpainting
- Authors: Eunhye Lee, Jeongmu Kim, Jisu Kim, Tae Hyun Kim
- Abstract summary: We present a novel and efficient self-supervised fine-tuning algorithm for inpainting networks.
We update the parameters of the pre-trained inpainting networks by utilizing existing self-similar patches.
We achieve state-of-the-art inpainting results on publicly available benchmark datasets.
- Score: 9.699531255678856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent image inpainting methods have shown promising results owing to the power
of deep learning, which can exploit external information learned from
large training datasets. However, many state-of-the-art inpainting networks are
still limited in exploiting internal information available in the given input
image at test time. To mitigate this problem, we present a novel and efficient
self-supervised fine-tuning algorithm that can adapt the parameters of fully
pre-trained inpainting networks without using ground-truth target images. We
update the parameters of the pre-trained state-of-the-art inpainting networks
by utilizing existing self-similar patches (i.e., self-exemplars) within the
given input image without changing the network architecture and improve the
inpainting quality by a large margin. Qualitative and quantitative experimental
results demonstrate the superiority of the proposed algorithm, and we achieve
state-of-the-art inpainting results on publicly available benchmark datasets.
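The abstract does not spell out the fine-tuning loop, but the idea it describes (adapting a fully pre-trained inpainting network to a single input image without ground truth) can be sketched roughly as follows. The `net(image, mask)` interface, the random-mask sampling, and the L1 loss are illustrative assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def test_time_finetune(net, image, mask, steps=100, lr=1e-4):
    """Sketch: adapt a pretrained inpainting net to one input image.

    Assumed interface: net(image, mask) -> inpainted image, where mask is
    1 inside the missing region. The initial restoration serves as a
    pseudo target ("restore from restored"); random masks over the valid
    region create new self-supervised training pairs.
    """
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    with torch.no_grad():
        pseudo_target = net(image, mask)  # initial restoration, no GT used

    for _ in range(steps):
        # New random holes force the net to reuse self-similar patches
        # (self-exemplars) elsewhere in the image to fill them.
        rand_mask = (torch.rand_like(mask) > 0.9).float()
        pred = net(pseudo_target * (1 - rand_mask), rand_mask)
        # Supervise only where the pseudo target is reliable, i.e.
        # outside the originally missing region.
        loss = F.l1_loss(pred * (1 - mask), pseudo_target * (1 - mask))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    with torch.no_grad():
        return net(image, mask)  # final restoration with adapted weights
```

Note that no architectural change is involved: only the weights of the already-trained network are updated, which matches the claim in the abstract.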
Related papers
- PrefPaint: Aligning Image Inpainting Diffusion Model with Human Preference [62.72779589895124]
We make the first attempt to align diffusion models for image inpainting with human aesthetic standards via a reinforcement learning framework.
We train a reward model with a dataset we construct, consisting of nearly 51,000 images annotated with human preferences.
Experiments on inpainting comparison and downstream tasks, such as image extension and 3D reconstruction, demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-10-29T11:49:39Z)
- Dense Feature Interaction Network for Image Inpainting Localization [28.028361409524457]
Inpainting can be used to conceal or alter image content in malicious manipulations.
Existing methods mostly rely on a basic encoder-decoder structure, which often results in a high number of false positives.
In this paper, we describe a new method for inpainting detection based on a Dense Feature Interaction Network (DeFI-Net).
arXiv Detail & Related papers (2024-08-05T02:35:13Z)
- GRIG: Few-Shot Generative Residual Image Inpainting [27.252855062283825]
We present a novel few-shot generative residual image inpainting method that produces high-quality inpainting results.
The core idea is an iterative residual reasoning method that incorporates Convolutional Neural Networks (CNNs) for feature extraction (sketched below).
We also propose a novel forgery-patch adversarial training strategy to create faithful textures and detailed appearances.
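The summary above only names the core idea; a minimal, hypothetical sketch of iterative residual reasoning might look like the following, where `residual_net` is a stand-in for GRIG's actual architecture, not the paper's exact model.

```python
import torch

def iterative_residual_inpaint(residual_net, image, mask, iters=4):
    """Sketch of iterative residual reasoning: each step predicts a
    residual correction for the hole and adds it to the current estimate.
    residual_net(estimate, mask) -> residual image is an assumed interface.
    """
    estimate = image * (1 - mask)  # zero out the missing region
    for _ in range(iters):
        residual = residual_net(estimate, mask)  # reason about what is missing
        estimate = estimate + residual * mask    # refine only inside the hole
    return estimate
```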
arXiv Detail & Related papers (2023-04-24T12:19:06Z)
- Visual Prompting via Image Inpainting [104.98602202198668]
Inspired by prompting in NLP, this paper investigates visual prompting: given input-output image example(s) of a new task at test time and a new input image, the goal is to automatically produce the output image consistent with the given examples (see the sketch below).
We apply visual prompting to pretrained models and demonstrate results on various downstream image-to-image tasks.
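As a rough illustration of how such a task can be posed as inpainting, the sketch below tiles the example pair and the query into a single canvas and masks the unknown output cell; the 2x2 grid layout and tensor shapes are simplifying assumptions, not the paper's exact construction.

```python
import torch

def visual_prompt_canvas(example_in, example_out, query_in):
    """Sketch: pose a new task as inpainting. Tensors are CxHxW of equal
    size; the example pair fills the top row, the query and an empty cell
    fill the bottom row, and the empty cell is marked for inpainting.
    """
    blank = torch.zeros_like(query_in)
    top = torch.cat([example_in, example_out], dim=2)  # known input/output pair
    bottom = torch.cat([query_in, blank], dim=2)       # query + hole
    canvas = torch.cat([top, bottom], dim=1)           # 2x2 grid image

    mask = torch.zeros_like(canvas)
    _, h, w = query_in.shape
    mask[:, h:, w:] = 1.0  # the cell an inpainting model must fill
    return canvas, mask
```

Running a pretrained inpainting model on this canvas and cropping the filled bottom-right cell then yields the task prediction.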
arXiv Detail & Related papers (2022-09-01T17:59:33Z)
- Perceptual Artifacts Localization for Inpainting [60.5659086595901]
We propose a new learning task of automatic segmentation of inpainting perceptual artifacts.
We train advanced segmentation networks on a labeled dataset of inpainted images to reliably localize inpainting artifacts.
We also propose a new evaluation metric called Perceptual Artifact Ratio (PAR), which is the ratio of objectionable inpainted regions to the entire inpainted area.
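Given the definition above, PAR reduces to a simple ratio of two binary masks; the following sketch (the function name and mask inputs are illustrative) computes it.

```python
import numpy as np

def perceptual_artifact_ratio(artifact_mask, inpaint_mask):
    """PAR = artifact area / total inpainted area, per the definition above.

    Both inputs are HxW boolean arrays: artifact_mask marks pixels judged
    objectionable, inpaint_mask marks the region the model filled in.
    """
    inpainted = np.asarray(inpaint_mask, dtype=bool)
    artifacts = np.asarray(artifact_mask, dtype=bool) & inpainted
    total = inpainted.sum()
    return float(artifacts.sum()) / float(total) if total > 0 else 0.0
```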
arXiv Detail & Related papers (2022-08-05T18:50:51Z)
- Learning Prior Feature and Attention Enhanced Image Inpainting [63.21231753407192]
This paper incorporates the pre-training-based Masked AutoEncoder (MAE) into the inpainting model.
We propose to use attention priors from MAE to make the inpainting model learn more long-distance dependencies between masked and unmasked regions.
arXiv Detail & Related papers (2022-08-03T04:32:53Z)
- Restore from Restored: Single-image Inpainting [9.699531255678856]
We present a novel and efficient self-supervised fine-tuning algorithm for inpainting networks.
We update the parameters of the pretrained networks by utilizing existing self-similar patches within the given input image.
We achieve state-of-the-art inpainting results on publicly available benchmark datasets.
arXiv Detail & Related papers (2021-02-16T10:59:28Z)
- Image inpainting using frequency domain priors [35.54138025375951]
We present a novel image inpainting technique using frequency domain information (sketched below).
We evaluate our proposed method on the publicly available datasets CelebA, Paris Streetview, and DTD texture dataset.
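The summary gives no detail on how the frequency information is used; purely as an illustration, one common way to expose spectral structure to a network is to append FFT-derived channels to the input, as in the hypothetical sketch below.

```python
import torch

def with_frequency_prior(image):
    """Hypothetical sketch: append FFT-derived channels (log-magnitude and
    phase) to a CxHxW image so a network can condition on spectral
    structure. The actual paper's use of frequency information may differ.
    """
    spectrum = torch.fft.fft2(image)          # per-channel 2D FFT
    magnitude = torch.log1p(spectrum.abs())   # compress the dynamic range
    phase = torch.angle(spectrum)
    return torch.cat([image, magnitude, phase], dim=0)  # extra input channels
```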
arXiv Detail & Related papers (2020-12-03T11:08:13Z)
- High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling [122.06593036862611]
Existing image inpainting methods often produce artifacts when dealing with large holes in real applications.
We propose an iterative inpainting method with a confidence feedback mechanism (sketched below).
Experiments show that our method significantly outperforms existing methods in both quantitative and qualitative evaluations.
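The summary does not describe the feedback loop itself; one plausible reading, sketched below under the assumption of a hypothetical `net` that returns a prediction together with a per-pixel confidence map, is to commit confident pixels as known content and re-inpaint only the rest.

```python
import torch

def confidence_feedback_inpaint(net, image, mask, iters=3, thresh=0.5):
    """Sketch of confidence feedback: net(known, hole) is assumed to return
    (prediction, confidence); pixels the model is confident about are
    committed as known content, and only the rest is re-inpainted.
    """
    known = image * (1 - mask)  # visible content
    hole = mask.clone()
    for _ in range(iters):
        pred, conf = net(known, hole)
        trusted = (conf > thresh).float() * hole  # confident new pixels
        known = known + pred * trusted            # commit them
        hole = hole * (1 - trusted)               # shrink the hole
    return known + pred * hole  # fill whatever remains after the last pass
```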
arXiv Detail & Related papers (2020-05-24T13:23:45Z)
- Very Long Natural Scenery Image Prediction by Outpainting [96.8509015981031]
Outpainting has received less attention due to two challenges.
The first is how to keep spatial and content consistency between the generated images and the original input.
The second is how to maintain high quality in the generated results.
arXiv Detail & Related papers (2019-12-29T16:29:01Z)