GRIG: Few-Shot Generative Residual Image Inpainting
- URL: http://arxiv.org/abs/2304.12035v1
- Date: Mon, 24 Apr 2023 12:19:06 GMT
- Title: GRIG: Few-Shot Generative Residual Image Inpainting
- Authors: Wanglong Lu, Xianta Jiang, Xiaogang Jin, Yong-Liang Yang, Minglun
Gong, Tao Wang, Kaijie Shi, and Hanli Zhao
- Abstract summary: We present a novel few-shot generative residual image inpainting method that produces high-quality inpainting results.
The core idea is an iterative residual reasoning method that incorporates Convolutional Neural Networks (CNNs) for feature extraction and Transformers for global reasoning.
We also propose a novel forgery-patch adversarial training strategy to create faithful textures and detailed appearances.
- Score: 27.252855062283825
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image inpainting is the task of filling in missing or masked regions of an
image with semantically meaningful contents. Recent methods have shown
significant improvement in dealing with large-scale missing regions. However,
these methods usually require large training datasets to achieve satisfactory
results and there has been limited research into training these models on a
small number of samples. To address this, we present a novel few-shot
generative residual image inpainting method that produces high-quality
inpainting results. The core idea is to propose an iterative residual reasoning
method that incorporates Convolutional Neural Networks (CNNs) for feature
extraction and Transformers for global reasoning within generative adversarial
networks, along with image-level and patch-level discriminators. We also
propose a novel forgery-patch adversarial training strategy to create faithful
textures and detailed appearances. Extensive evaluations show that our method
outperforms previous methods on the few-shot image inpainting task, both
quantitatively and qualitatively.
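To make the core idea more concrete, below is a minimal, hypothetical PyTorch-style sketch of iterative residual reasoning: a generator repeatedly extracts local features with a CNN, applies a Transformer encoder for global reasoning, and predicts a residual that refines the current estimate inside the masked region. The module sizes, iteration count, and layer choices are illustrative assumptions, not the paper's actual architecture; the image-level and patch-level discriminators and the forgery-patch training strategy are omitted.

```python
# Hypothetical sketch of iterative residual reasoning for inpainting.
# At each iteration the generator sees the current estimate and the mask,
# extracts local features with a CNN, refines them with a Transformer
# encoder for global reasoning, and predicts a residual correction that
# is applied only inside the missing region.
import torch
import torch.nn as nn

class ResidualInpaintingGenerator(nn.Module):
    def __init__(self, channels=64, num_iters=4):
        super().__init__()
        self.num_iters = num_iters
        # CNN encoder: (image + mask) -> local feature map at 1/4 resolution
        self.encoder = nn.Sequential(
            nn.Conv2d(4, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder: global reasoning over flattened feature tokens
        layer = nn.TransformerEncoderLayer(d_model=channels, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # CNN decoder: features -> residual image
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1),
        )

    def forward(self, masked_image, mask):
        estimate = masked_image
        for _ in range(self.num_iters):
            x = torch.cat([estimate, mask], dim=1)        # (B, 4, H, W)
            feats = self.encoder(x)                        # (B, C, H/4, W/4)
            b, c, h, w = feats.shape
            tokens = feats.flatten(2).transpose(1, 2)      # (B, h*w, C)
            tokens = self.transformer(tokens)
            feats = tokens.transpose(1, 2).reshape(b, c, h, w)
            residual = self.decoder(feats)
            # Update only the missing region; known pixels stay fixed.
            estimate = estimate + residual * mask
        return estimate

# Usage: spatial size divisible by 4; mask == 1 inside the holes.
generator = ResidualInpaintingGenerator()
result = generator(torch.rand(1, 3, 64, 64), torch.ones(1, 1, 64, 64))
```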
Related papers
- Coherent and Multi-modality Image Inpainting via Latent Space Optimization [61.99406669027195]
PILOT (inPainting vIa Latent OpTimization) is an optimization approach grounded on a novel semantic centralization and background preservation loss.
Our method searches latent spaces capable of generating inpainted regions that exhibit high fidelity to user-provided prompts while maintaining coherence with the background.
arXiv Detail & Related papers (2024-07-10T19:58:04Z)
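For readers unfamiliar with latent-space optimization for inpainting, the approach the entry above builds on, here is a rough, self-contained sketch under stated assumptions: the generator and the prompt-similarity term are placeholders, and the loss weights are arbitrary. It only illustrates how a background-preservation term can be combined with a semantic term during latent optimization, not PILOT's actual losses or implementation.

```python
# Illustrative sketch of inpainting via latent-space optimization.
# The generator, the prompt-similarity term, and all hyper-parameters
# are placeholders so the example runs end to end.
import torch
import torch.nn as nn

# Placeholder generator: maps a latent vector to an RGB image.
generator = nn.Sequential(
    nn.Linear(128, 3 * 32 * 32),
    nn.Tanh(),
    nn.Unflatten(1, (3, 32, 32)),
)

def prompt_similarity(image):
    # Placeholder for a semantic term (e.g. a similarity score against a
    # user prompt); here just a dummy scalar so the sketch runs.
    return image.mean()

def optimize_latent(known_image, mask, steps=200, lr=0.05):
    """mask == 1 inside the hole; known pixels are preserved."""
    z = torch.randn(1, 128, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        gen = generator(z)
        # Background preservation: generated pixels must match the
        # known background outside the hole.
        background_loss = ((gen - known_image) * (1 - mask)).pow(2).mean()
        # Semantic term: encourage fidelity to the user prompt.
        semantic_loss = -prompt_similarity(gen * mask)
        (background_loss + 0.1 * semantic_loss).backward()
        opt.step()
    # Composite: keep known pixels, fill the hole from the generator.
    with torch.no_grad():
        return known_image * (1 - mask) + generator(z) * mask

result = optimize_latent(torch.rand(1, 3, 32, 32), torch.ones(1, 1, 32, 32))
```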
- BrushNet: A Plug-and-Play Image Inpainting Model with Decomposed Dual-Branch Diffusion [61.90969199199739]
BrushNet is a novel plug-and-play dual-branch model engineered to embed pixel-level masked image features into any pre-trained DM.
Experiments demonstrate BrushNet's superior performance over existing models across seven key metrics, including image quality, mask region preservation, and textual coherence.
arXiv Detail & Related papers (2024-03-11T17:59:31Z)
- Active Generation for Image Classification [45.93535669217115]
We propose to improve the efficiency of image generation by focusing on the specific needs and characteristics of the model.
With a central tenet of active learning, our method, named ActGen, takes a training-aware approach to image generation.
arXiv Detail & Related papers (2024-03-11T08:45:31Z)
- An Analysis of Generative Methods for Multiple Image Inpainting [4.234843176066354]
Inpainting refers to the restoration of an image with missing regions in a way that is not detectable by the observer.
We focus on learning-based image completion methods for multiple and diverse inpainting.
arXiv Detail & Related papers (2022-05-04T15:54:08Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents an architecture with the holistic goal of maintaining spatially precise, high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Improve Deep Image Inpainting by Emphasizing the Complexity of Missing Regions [20.245637164975594]
In this paper, we enhance the deep image inpainting models with the help of classical image complexity metrics.
A knowledge-assisted index composed of missingness complexity and forward loss is presented to guide the batch selection in the training procedure.
We experimentally demonstrate the improvements for several recently developed image inpainting models on various datasets.
arXiv Detail & Related papers (2022-02-13T09:14:52Z)
- Restore from Restored: Single-image Inpainting [9.699531255678856]
We present a novel and efficient self-supervised fine-tuning algorithm for inpainting networks.
We update the parameters of the pretrained networks by utilizing existing self-similar patches within the given input image.
We achieve state-of-the-art inpainting results on publicly available benchmark datasets.
arXiv Detail & Related papers (2021-02-16T10:59:28Z)
- Learning degraded image classification with restoration data fidelity [0.0]
We investigate the influence of degradation types and levels on four widely-used classification networks.
We propose a novel method leveraging a fidelity map to calibrate the image features obtained by pre-trained networks.
Our results reveal that the proposed method is a promising solution to mitigate the effect caused by image degradation.
arXiv Detail & Related papers (2021-01-23T23:47:03Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
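As a rough illustration of the densely combined dilated convolutions mentioned in the last entry above, here is a minimal sketch of a multi-rate dilated block; the dilation rates, channel width, and residual fusion are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch of combining dilated convolutions with several rates to
# enlarge the receptive field while keeping the spatial resolution fixed.
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    def __init__(self, channels=64, rates=(1, 2, 4, 8)):
        super().__init__()
        # One branch per dilation rate; padding == rate keeps H and W fixed.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
            for r in rates
        )
        # Fuse the concatenated branches back to the input width.
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        # Residual connection keeps fine-grained local detail.
        return x + self.fuse(multi_scale)

block = DenseDilatedBlock()
y = block(torch.rand(1, 64, 32, 32))  # output has the same shape as the input
```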