DIFAI: Diverse Facial Inpainting using StyleGAN Inversion
- URL: http://arxiv.org/abs/2301.08443v1
- Date: Fri, 20 Jan 2023 06:51:34 GMT
- Title: DIFAI: Diverse Facial Inpainting using StyleGAN Inversion
- Authors: Dongsik Yoon, Jeong-gi Kwak, Yuanming Li, David Han and Hanseok Ko
- Abstract summary: We propose a novel framework for diverse facial inpainting exploiting the embedding space of StyleGAN.
Our framework employs the pSp encoder and the SeFa algorithm to identify semantic components of the StyleGAN embeddings and feeds them into our proposed SPARN decoder.
- Score: 18.400846952014188
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Image inpainting is an old problem in computer vision that restores occluded
regions and completes damaged images. In the case of facial image inpainting,
most of the methods generate only one result for each masked image, even though
there are other reasonable possibilities. To prevent any potential biases and
unnatural constraints stemming from generating only one image, we propose a
novel framework for diverse facial inpainting exploiting the embedding space of
StyleGAN. Our framework employs the pSp encoder and the SeFa algorithm to
identify semantic components of the StyleGAN embeddings and feeds them into our
proposed SPARN decoder, which adopts region normalization for plausible inpainting. We
demonstrate that our proposed method outperforms several state-of-the-art
methods.
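The abstract's mention of SeFa refers to a closed-form factorization of generator weights: semantic directions in the latent space are the top eigenvectors of A^T A, where A is the first affine layer of the generator. A minimal sketch of that idea is below; the random matrix stands in for real StyleGAN weights, and the shift magnitude is an illustrative choice, not a value from the paper.

```python
import numpy as np

# Stand-in for the first StyleGAN affine-layer weight (512-dim latent space).
rng = np.random.default_rng(0)
A = rng.standard_normal((512, 512))

# SeFa: semantic directions are the eigenvectors of A^T A with the
# largest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(A.T @ A)
order = np.argsort(eigvals)[::-1]
directions = eigvecs[:, order[:5]].T  # top-5 semantic directions, each unit-norm

# Diverse outputs: shift one latent code along each discovered direction
# before decoding, yielding multiple plausible completions per mask.
w = rng.standard_normal(512)
variants = [w + 3.0 * d for d in directions]
print(len(variants), variants[0].shape)
```

In the full framework these shifted codes would be decoded by the SPARN decoder; here the decoding step is omitted.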
Related papers
- Sketch-guided Image Inpainting with Partial Discrete Diffusion Process [5.005162730122933]
We introduce a novel partial discrete diffusion process (PDDP) for sketch-guided inpainting.
PDDP corrupts the masked regions of the image and reconstructs these masked regions conditioned on hand-drawn sketches.
The proposed novel transformer module accepts two inputs -- the image containing the masked region to be inpainted and the query sketch to model the reverse diffusion process.
arXiv Detail & Related papers (2024-04-18T07:07:38Z) - Panoramic Image Inpainting With Gated Convolution And Contextual Reconstruction Loss [19.659176149635417]
We propose a panoramic image inpainting framework that consists of a Face Generator, a Cube Generator, a side branch, and two discriminators.
The proposed method is compared with state-of-the-art (SOTA) methods on SUN360 Street View dataset in terms of PSNR and SSIM.
arXiv Detail & Related papers (2024-02-05T11:58:08Z) - Stroke-based Neural Painting and Stylization with Dynamically Predicted Painting Region [66.75826549444909]
Stroke-based rendering aims to recreate an image with a set of strokes.
We propose Compositional Neural Painter, which predicts the painting region based on the current canvas.
We extend our method to stroke-based style transfer with a novel differentiable distance transform loss.
arXiv Detail & Related papers (2023-09-07T06:27:39Z) - PaintSeg: Training-free Segmentation via Painting [50.17936803209125]
PaintSeg is a new unsupervised method for segmenting objects without any training.
Inpainting and outpainting are alternated, with the former masking the foreground and filling in the background, and the latter masking the background while recovering the missing part of the foreground object.
Our experimental results demonstrate that PaintSeg outperforms existing approaches in coarse mask-prompt, box-prompt, and point-prompt segmentation tasks.
arXiv Detail & Related papers (2023-05-30T20:43:42Z) - Semantics-Guided Object Removal for Facial Images: with Broad Applicability and Robust Style Preservation [29.162655333387452]
Object removal and image inpainting in facial images is a task in which objects that occlude a facial image are specifically targeted, removed, and replaced by a properly reconstructed facial image.
Two different approaches, one based on a U-net and the other on a modulated generator, have been widely adopted for this task, each with its own advantages and innate disadvantages.
Here, we propose Semantics-Guided Inpainting Network (SGIN) which itself is a modification of the modulated generator, aiming to take advantage of its advanced generative capability and preserve the high-fidelity details of the original image.
arXiv Detail & Related papers (2022-09-29T00:09:12Z) - RePaint: Inpainting using Denoising Diffusion Probabilistic Models [161.74792336127345]
Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask.
We propose RePaint: a Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable even to extreme masks.
We validate our method for both faces and general-purpose image inpainting using standard and extreme masks.
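The RePaint idea can be summarized in one reverse-diffusion step: known pixels are sampled by forward-diffusing the original image, unknown pixels come from the unconditional model's reverse step, and the two are stitched by the mask. The sketch below is schematic; the dummy denoiser and linear noise schedule are stand-ins, not RePaint's actual implementation.

```python
import numpy as np

def repaint_step(x_t, x0_known, mask, t, alpha_bar, denoise, rng):
    """One schematic RePaint-style reverse step.

    mask == 1 marks known pixels; `denoise` stands in for the learned
    DDPM reverse transition p(x_{t-1} | x_t)."""
    # Known region: sample x_{t-1} by forward-diffusing the original pixels.
    noise = rng.standard_normal(x0_known.shape)
    x_known = (np.sqrt(alpha_bar[t - 1]) * x0_known
               + np.sqrt(1.0 - alpha_bar[t - 1]) * noise)
    # Unknown region: one reverse step of the unconditional model.
    x_unknown = denoise(x_t, t)
    # Stitch the two regions together with the binary mask.
    return mask * x_known + (1 - mask) * x_unknown

# Toy demo: 8x8 image, linear noise schedule, zero-output dummy denoiser.
rng = np.random.default_rng(0)
alpha_bar = np.linspace(1.0, 0.01, 10)
x0 = np.ones((8, 8))
mask = np.zeros((8, 8))
mask[:, :4] = 1.0  # left half of the image is known
x_t = rng.standard_normal((8, 8))
x_prev = repaint_step(x_t, x0, mask, t=5, alpha_bar=alpha_bar,
                      denoise=lambda x, t: np.zeros_like(x), rng=rng)
print(x_prev.shape)
```

RePaint additionally resamples (re-noises and re-denoises) each step several times to harmonize the two regions; that loop is omitted here for brevity.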
arXiv Detail & Related papers (2022-01-24T18:40:15Z) - In&Out : Diverse Image Outpainting via GAN Inversion [89.84841983778672]
Image outpainting seeks a semantically consistent extension of the input image beyond its available content.
In this work, we formulate the problem from the perspective of inverting generative adversarial networks.
Our generator renders micro-patches conditioned on their joint latent code as well as their individual positions in the image.
arXiv Detail & Related papers (2021-04-01T17:59:10Z) - Iterative Facial Image Inpainting using Cyclic Reverse Generator [0.913755431537592]
Cyclic Reverse Generator (CRG) architecture provides an encoder-generator model.
We empirically observed that only a few iterations are sufficient to generate realistic images with the proposed model.
Our method allows sketch-based inpainting, supports a variety of mask types, and produces multiple and diverse results.
arXiv Detail & Related papers (2021-01-18T12:19:58Z) - Free-Form Image Inpainting via Contrastive Attention Network [64.05544199212831]
In image inpainting tasks, masks with any shapes can appear anywhere in images which form complex patterns.
It is difficult for encoders to capture such powerful representations in these complex situations.
We propose a self-supervised Siamese inference network to improve the robustness and generalization.
arXiv Detail & Related papers (2020-10-29T14:46:05Z) - Exploiting Deep Generative Prior for Versatile Image Restoration and Manipulation [181.08127307338654]
This work presents an effective way to exploit the image prior captured by a generative adversarial network (GAN) trained on large-scale natural images.
The deep generative prior (DGP) provides compelling results to restore missing semantics, e.g., color, patch, resolution, of various degraded images.
arXiv Detail & Related papers (2020-03-30T17:45:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.