Restore from Restored: Single-image Inpainting
- URL: http://arxiv.org/abs/2110.12822v1
- Date: Mon, 25 Oct 2021 11:38:51 GMT
- Title: Restore from Restored: Single-image Inpainting
- Authors: Eunhye Lee, Jeongmu Kim, Jisu Kim, Tae Hyun Kim
- Abstract summary: We present a novel and efficient self-supervised fine-tuning algorithm for inpainting networks.
We update the parameters of the pre-trained inpainting networks by utilizing existing self-similar patches.
We achieve state-of-the-art inpainting results on publicly available benchmark datasets.
- Score: 9.699531255678856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent image inpainting methods have shown promising results due to the power
of deep learning, which can explore external information available from the
large training dataset. However, many state-of-the-art inpainting networks are
still limited in exploiting internal information available in the given input
image at test time. To mitigate this problem, we present a novel and efficient
self-supervised fine-tuning algorithm that can adapt the parameters of fully
pre-trained inpainting networks without using ground-truth target images. We
update the parameters of the pre-trained state-of-the-art inpainting networks
by utilizing existing self-similar patches (i.e., self-exemplars) within the
given input image without changing the network architecture and improve the
inpainting quality by a large margin. Qualitative and quantitative experimental
results demonstrate the superiority of the proposed algorithm, and we achieve
state-of-the-art inpainting results on publicly available benchmark datasets.
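The core mechanism, mining self-similar patches (self-exemplars) within the input image itself, can be illustrated with a minimal numpy sketch. This shows only the patch-search step on a toy 2-D array; the actual method uses such matches as self-supervised training pairs to fine-tune a pretrained inpainting network, and the function and parameter names below are illustrative, not from the paper's code.

```python
import numpy as np

def find_self_exemplar(image, hole, patch=8):
    """Find the visible patch most similar to the context around `hole`.

    `image` is a 2-D float array; `hole` is the (row, col) top-left of a
    missing patch x patch region.  Returns the top-left of the best
    self-exemplar.  Illustrative stand-in only: the paper instead uses
    such matches as self-supervised targets to fine-tune a network.
    """
    hr, hc = hole
    H, W = image.shape
    # Context: the pixel row directly above the hole (assumed visible).
    context = image[hr - 1, hc:hc + patch]
    best, best_err = None, np.inf
    for r in range(1, H - patch):
        for c in range(0, W - patch):
            # Skip candidates overlapping the hole itself.
            if abs(r - hr) < patch and abs(c - hc) < patch:
                continue
            err = np.sum((image[r - 1, c:c + patch] - context) ** 2)
            if err < best_err:
                best, best_err = (r, c), err
    return best

# Toy example: a pattern that repeats along rows, so an exact
# self-exemplar for any hole exists elsewhere in the image.
img = np.tile(np.arange(32, dtype=float), (32, 1))
loc = find_self_exemplar(img, hole=(8, 4))
```

On this repetitive toy pattern the search recovers a patch from the same columns as the hole, which is exactly the kind of internal redundancy the fine-tuning exploits.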
Related papers
- GRIG: Few-Shot Generative Residual Image Inpainting [27.252855062283825]
We present a novel few-shot generative residual image inpainting method that produces high-quality inpainting results.
The core idea is to propose an iterative residual reasoning method that incorporates Convolutional Neural Networks (CNNs) for feature extraction.
We also propose a novel forgery-patch adversarial training strategy to create faithful textures and detailed appearances.
arXiv Detail & Related papers (2023-04-24T12:19:06Z) - Learning to Scale Temperature in Masked Self-Attention for Image Inpainting [11.52934596799707]
We present an image inpainting framework with a multi-head temperature masked self-attention mechanism.
In addition to improving image quality of inpainting results, we generalize the proposed model to user-guided image editing by introducing a new sketch generation method.
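As a rough illustration of temperature-scaled masked attention, here is a generic single-head sketch in numpy. It uses a fixed scalar temperature rather than the learned per-head scaling the paper proposes, and all names are hypothetical:

```python
import numpy as np

def masked_attention(q, k, v, mask, temperature=1.0):
    """Single-head attention with a key mask and a logit temperature.

    `mask` is 1 where a key position is valid (visible pixel) and 0
    where it must be ignored (hole).  The temperature rescales the
    logits before the softmax: lower values sharpen the attention map.
    """
    d = q.shape[-1]
    logits = q @ k.T / (np.sqrt(d) * temperature)
    logits = np.where(mask[None, :] > 0, logits, -1e9)  # mask out holes
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(6, 8))
v = rng.normal(size=(6, 8))
mask = np.array([1, 1, 0, 1, 0, 1])  # key positions 2 and 4 are holes
out, w = masked_attention(q, k, v, mask, temperature=0.5)
```

Masked positions receive zero attention weight, so hole pixels cannot contaminate the aggregated features.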
arXiv Detail & Related papers (2023-02-13T06:37:17Z) - Visual Prompting via Image Inpainting [104.98602202198668]
Inspired by prompting in NLP, this paper investigates visual prompting: given input-output image example(s) of a new task at test time and a new input image, the goal is to automatically produce the output image consistent with the given example(s).
We apply visual prompting to pretrained models and demonstrate results on various downstream image-to-image tasks.
arXiv Detail & Related papers (2022-09-01T17:59:33Z) - Perceptual Artifacts Localization for Inpainting [60.5659086595901]
We propose a new learning task of automatic segmentation of inpainting perceptual artifacts.
We train advanced segmentation networks on a dataset to reliably localize inpainting artifacts within inpainted images.
We also propose a new evaluation metric called Perceptual Artifact Ratio (PAR), which is the ratio of objectionable inpainted regions to the entire inpainted area.
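The PAR metric reduces to a simple area ratio. A numpy sketch, assuming an artifact segmentation mask is already available (the function name is illustrative, not the paper's reference implementation):

```python
import numpy as np

def perceptual_artifact_ratio(artifact_mask, inpaint_mask):
    """PAR = area flagged as perceptual artifact / total inpainted area.

    Both inputs are boolean arrays over the image.  `artifact_mask`
    would come from the artifact segmentation network; here it is
    simply given.
    """
    inpainted = inpaint_mask.sum()
    if inpainted == 0:
        return 0.0
    return float((artifact_mask & inpaint_mask).sum() / inpainted)

inpaint = np.zeros((8, 8), dtype=bool)
inpaint[2:6, 2:6] = True    # 16-pixel inpainted hole
artifact = np.zeros((8, 8), dtype=bool)
artifact[2:4, 2:6] = True   # 8 of those pixels flagged as artifacts
par = perceptual_artifact_ratio(artifact, inpaint)
```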
arXiv Detail & Related papers (2022-08-05T18:50:51Z) - Learning Prior Feature and Attention Enhanced Image Inpainting [63.21231753407192]
This paper incorporates the pre-training based Masked AutoEncoder (MAE) into the inpainting model.
We propose to use attention priors from MAE to make the inpainting model learn more long-distance dependencies between masked and unmasked regions.
arXiv Detail & Related papers (2022-08-03T04:32:53Z) - Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
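A contrastive pairwise objective of this kind can be sketched in numpy as an NT-Xent-style loss over two views of each image. This is a common generic formulation used here for illustration, not necessarily CONTRIQUE's exact objective (which contrasts distortion classes):

```python
import numpy as np

def contrastive_pairwise_loss(z1, z2, temperature=0.1):
    """NT-Xent-style loss over a batch of embedding pairs.

    z1[i] and z2[i] are embeddings of two views of image i; each pair
    is pulled together while all other images in the batch are pushed
    apart.  Returns the mean negative log-softmax of the positives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sims = z1 @ z2.T / temperature                # (N, N) similarities
    logsumexp = np.log(np.exp(sims).sum(axis=1))  # per-row normalizer
    return float(np.mean(logsumexp - np.diag(sims)))

views_a = np.eye(4)  # toy "embeddings" of 4 images
views_b = np.eye(4)  # second view of the same images
loss = contrastive_pairwise_loss(views_a, views_b)
```

Aligned pairs yield a lower loss than mismatched ones, which is what drives the representation toward perceptually meaningful features.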
arXiv Detail & Related papers (2021-10-25T21:01:00Z) - Restore from Restored: Single-image Inpainting [9.699531255678856]
We present a novel and efficient self-supervised fine-tuning algorithm for inpainting networks.
We update the parameters of the pretrained networks by utilizing existing self-similar patches within the given input image.
We achieve state-of-the-art inpainting results on publicly available benchmark datasets.
arXiv Detail & Related papers (2021-02-16T10:59:28Z) - Image inpainting using frequency domain priors [35.54138025375951]
We present a novel image inpainting technique using frequency domain information.
We evaluate our proposed method on the publicly available datasets CelebA, Paris Streetview, and DTD texture dataset.
arXiv Detail & Related papers (2020-12-03T11:08:13Z) - High-Resolution Image Inpainting with Iterative Confidence Feedback and Guided Upsampling [122.06593036862611]
Existing image inpainting methods often produce artifacts when dealing with large holes in real applications.
We propose an iterative inpainting method with a feedback mechanism.
Experiments show that our method significantly outperforms existing methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2020-05-24T13:23:45Z) - Very Long Natural Scenery Image Prediction by Outpainting [96.8509015981031]
Outpainting has received less attention due to two challenges.
The first is keeping spatial and content consistency between the generated images and the original input.
The second is maintaining high quality in the generated results.
arXiv Detail & Related papers (2019-12-29T16:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.