RePaint: Inpainting using Denoising Diffusion Probabilistic Models
- URL: http://arxiv.org/abs/2201.09865v1
- Date: Mon, 24 Jan 2022 18:40:15 GMT
- Title: RePaint: Inpainting using Denoising Diffusion Probabilistic Models
- Authors: Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu
Timofte, Luc Van Gool
- Abstract summary: Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask.
We propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks.
We validate our method for both faces and general-purpose image inpainting using standard and extreme masks.
- Score: 161.74792336127345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Free-form inpainting is the task of adding new content to an image in the
regions specified by an arbitrary binary mask. Most existing approaches train
for a certain distribution of masks, which limits their generalization
capabilities to unseen mask types. Furthermore, training with pixel-wise and
perceptual losses often leads to simple textural extensions towards the missing
areas instead of semantically meaningful generation. In this work, we propose
RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting
approach that is applicable to even extreme masks. We employ a pretrained
unconditional DDPM as the generative prior. To condition the generation
process, we only alter the reverse diffusion iterations by sampling the
unmasked regions using the given image information. Since this technique does
not modify or condition the original DDPM network itself, the model produces
high-quality and diverse output images for any inpainting form. We validate our
method for both faces and general-purpose image inpainting using standard and
extreme masks.
RePaint outperforms state-of-the-art Autoregressive and GAN approaches for
at least five out of six mask distributions.
Github Repository: git.io/RePaint
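The conditioning mechanism described in the abstract (sampling the unmasked region from the given image at each reverse step, while letting the unconditional DDPM fill the masked region) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the noise schedule is an assumed linear one, and `toy_denoiser` is a stand-in for the pretrained unconditional DDPM, which in practice is a neural network predicting noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear noise schedule; RePaint uses the schedule of the pretrained DDPM.
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x_t, t):
    """Stand-in for the pretrained unconditional DDPM's reverse mean.
    A real model would use a trained network to predict the noise."""
    return np.sqrt(alphas[t]) * x_t

def repaint_step(x_t, t, image, mask):
    """One conditioned reverse step, per the RePaint idea:
    known pixels are sampled by forward-diffusing the given image,
    unknown pixels come from the unconditional reverse step."""
    # Known region: sample a noisy version of the given image at level t.
    noise = rng.standard_normal(image.shape)
    x_known = np.sqrt(alpha_bars[t]) * image + np.sqrt(1.0 - alpha_bars[t]) * noise
    # Unknown region: ordinary unconditional reverse step (simplified mean).
    z = rng.standard_normal(x_t.shape) if t > 0 else 0.0
    x_unknown = toy_denoiser(x_t, t) + np.sqrt(betas[t]) * z
    # Combine per pixel with the inpainting mask (1 = known, 0 = missing).
    return mask * x_known + (1.0 - mask) * x_unknown

# Usage: inpaint the right half of a tiny 4x4 "image", starting from noise.
image = np.ones((4, 4))
mask = np.zeros((4, 4))
mask[:, :2] = 1.0                      # left half is the known region
x = rng.standard_normal(image.shape)   # x_T: pure noise
for t in reversed(range(T)):
    x = repaint_step(x, t, image, mask)
```

Because the DDPM network itself is never modified, this same loop works for any mask shape; the paper additionally resamples (jumping back and forth in diffusion time) to better harmonize the known and generated regions, which is omitted here.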
Related papers
- DiffGANPaint: Fast Inpainting Using Denoising Diffusion GANs [19.690288425689328]
In this paper, we propose a Denoising Diffusion Probabilistic Model (DDPM) based model capable of filling missing pixels fast.
Experiments on general-purpose image inpainting datasets verify that our approach performs superior or on par with most contemporary works.
arXiv Detail & Related papers (2023-08-03T17:50:41Z)
- DFormer: Diffusion-guided Transformer for Universal Image Segmentation [86.73405604947459]
The proposed DFormer views universal image segmentation task as a denoising process using a diffusion model.
At inference, our DFormer directly predicts the masks and corresponding categories from a set of randomly-generated masks.
Our DFormer outperforms the recent diffusion-based panoptic segmentation method Pix2Seq-D with a gain of 3.6% on MS COCO val 2017 set.
arXiv Detail & Related papers (2023-06-06T06:33:32Z)
- PaintSeg: Training-free Segmentation via Painting [50.17936803209125]
PaintSeg is a new unsupervised method for segmenting objects without any training.
Inpainting and outpainting are alternated, with the former masking the foreground and filling in the background, and the latter masking the background while recovering the missing part of the foreground object.
Our experimental results demonstrate that PaintSeg outperforms existing approaches in coarse mask-prompt, box-prompt, and point-prompt segmentation tasks.
arXiv Detail & Related papers (2023-05-30T20:43:42Z)
- Towards Improved Input Masking for Convolutional Neural Networks [66.99060157800403]
We propose a new masking method for CNNs we call layer masking.
We show that our method is able to eliminate or minimize the influence of the mask shape or color on the output of the model.
We also demonstrate how the shape of the mask may leak information about the class, thus affecting estimates of model reliance on class-relevant features.
arXiv Detail & Related papers (2022-11-26T19:31:49Z)
- Semantic-guided Multi-Mask Image Harmonization [10.27974860479791]
We propose a new semantic-guided multi-mask image harmonization task.
In this work, we propose a novel way to edit the inharmonious images by predicting a series of operator masks.
arXiv Detail & Related papers (2022-07-24T11:48:49Z)
- Shape-Aware Masking for Inpainting in Medical Imaging [49.61617087640379]
Inpainting has been proposed as a successful deep learning technique for unsupervised medical image model discovery.
We introduce a method for generating shape-aware masks for inpainting, which aims at learning the statistical shape prior.
We propose an unsupervised guided masking approach based on an off-the-shelf inpainting model and a superpixel over-segmentation algorithm.
arXiv Detail & Related papers (2022-07-12T18:35:17Z)
- SketchEdit: Mask-Free Local Image Manipulation with Partial Sketches [95.45728042499836]
We propose a new paradigm of sketch-based image manipulation: mask-free local image manipulation.
Our model automatically predicts the target modification region and encodes it into a structure style vector.
A generator then synthesizes the new image content based on the style vector and sketch.
arXiv Detail & Related papers (2021-11-30T02:42:31Z)
- Learning Sparse Masks for Diffusion-based Image Inpainting [10.633099921979674]
Diffusion-based inpainting is a powerful tool for the reconstruction of images from sparse data.
We provide a model for highly efficient adaptive mask generation.
Experiments indicate that our model can achieve competitive quality with an acceleration by as much as four orders of magnitude.
arXiv Detail & Related papers (2021-10-06T10:20:59Z)
- Iterative Facial Image Inpainting using Cyclic Reverse Generator [0.913755431537592]
Cyclic Reverse Generator (CRG) architecture provides an encoder-generator model.
We empirically observed that only a few iterations are sufficient to generate realistic images with the proposed model.
Our method allows applying sketch-based inpainting, supports a variety of mask types, and produces multiple and diverse results.
arXiv Detail & Related papers (2021-01-18T12:19:58Z)
- R-MNet: A Perceptual Adversarial Network for Image Inpainting [5.471225956329675]
We propose a Wasserstein GAN combined with a new reverse mask operator, namely Reverse Masking Network (R-MNet), a perceptual adversarial network for image inpainting.
We show that our method is able to generalize to high-resolution inpainting tasks, and further show more realistic outputs that are plausible to the human visual system.
arXiv Detail & Related papers (2020-08-11T10:58:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.