Reference-Guided Large-Scale Face Inpainting with Identity and Texture Control
- URL: http://arxiv.org/abs/2303.07014v1
- Date: Mon, 13 Mar 2023 11:22:37 GMT
- Title: Reference-Guided Large-Scale Face Inpainting with Identity and Texture Control
- Authors: Wuyang Luo, Su Yang, Weishan Zhang
- Abstract summary: Face inpainting aims at plausibly predicting missing pixels of face images within a corrupted region.
Most existing methods rely on generative models learning a face image distribution from a big dataset.
We propose a novel reference-guided face inpainting method that fills the large-scale missing region with identity and texture control guided by a reference face image.
- Score: 4.866431869728018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face inpainting aims at plausibly predicting missing pixels of face images
within a corrupted region. Most existing methods rely on generative models
learning a face image distribution from a large dataset, which produces
uncontrollable results, especially with large-scale missing regions. To
introduce strong control for face inpainting, we propose a novel
reference-guided face inpainting method that fills the large-scale missing
region with identity and texture control guided by a reference face image.
However, generating high-quality results while imposing two control signals is
challenging. To tackle this difficulty, we propose a dual-control one-stage
framework that decouples the reference image into two levels for flexible
control: High-level identity information and low-level texture information,
where the identity information figures out the shape of the face and the
texture information depicts the component-aware texture. To synthesize
high-quality results, we design two novel modules referred to as Half-AdaIN and
Component-Wise Style Injector (CWSI) to inject the two kinds of control
information into the inpainting process. Our method produces realistic
results with identity and texture control faithful to reference images. To the
best of our knowledge, it is the first work to concurrently apply identity and
component-level controls in face inpainting, enabling more precise and
controllable results. Code is available at
https://github.com/WuyangLuo/RefFaceInpainting
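The abstract names Half-AdaIN as one of the injection modules but gives no implementation details. As an illustrative sketch only: plain AdaIN re-normalizes content features to carry the reference's per-channel statistics, and the half-channel variant below is a hypothetical reading suggested by the module's name. The helpers `adain` and `half_adain` are assumptions for illustration, not the authors' code.

```python
from statistics import mean, pstdev

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization for one feature channel:
    whiten the content values, then rescale and shift them so they
    carry the mean and standard deviation of the style channel."""
    c_mu, c_sigma = mean(content), pstdev(content)
    s_mu, s_sigma = mean(style), pstdev(style)
    return [s_sigma * (x - c_mu) / (c_sigma + eps) + s_mu for x in content]

def half_adain(feat, style_feat):
    """Hypothetical half-channel variant: condition only the first
    half of the channels on the reference statistics and pass the
    remaining channels through unchanged."""
    h = len(feat) // 2
    styled = [adain(c, s) for c, s in zip(feat[:h], style_feat[:h])]
    return styled + feat[h:]
```

After `adain`, the output channel matches the style channel's mean and (up to the `eps` stabilizer) its standard deviation, which is what lets a reference image steer the texture of generated features.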
Related papers
- ControlFace: Harnessing Facial Parametric Control for Face Rigging [31.765503860508378]
We introduce ControlFace, a novel face rigging method conditioned on 3DMM renderings that enables flexible, high-fidelity control.
We employ dual-branch U-Nets: one, referred to as FaceNet, captures identity and fine details, while the other focuses on generation.
By training on a facial video dataset, we fully utilize FaceNet's rich representations while ensuring control adherence.
arXiv Detail & Related papers (2024-12-02T06:00:27Z)
- ENTED: Enhanced Neural Texture Extraction and Distribution for Reference-based Blind Face Restoration [51.205673783866146]
We present ENTED, a new framework for blind face restoration that aims to restore high-quality and realistic portrait images.
We utilize a texture extraction and distribution framework to transfer high-quality texture features between the degraded input and reference image.
The StyleGAN-like architecture in our framework requires high-quality latent codes to generate realistic images.
arXiv Detail & Related papers (2024-01-13T04:54:59Z)
- IA-FaceS: A Bidirectional Method for Semantic Face Editing [8.19063619210761]
This paper proposes a bidirectional method for disentangled face attribute manipulation as well as flexible, controllable component editing.
IA-FaceS is the first such method developed without any input visual guidance, such as segmentation masks or sketches.
Both quantitative and qualitative results indicate that the proposed method outperforms the other techniques in reconstruction, face attribute manipulation, and component transfer.
arXiv Detail & Related papers (2022-03-24T14:44:56Z)
- Learning Disentangled Representation for One-shot Progressive Face Swapping [92.09538942684539]
We present a simple yet efficient method named FaceSwapper, for one-shot face swapping based on Generative Adversarial Networks.
Our method consists of a disentangled representation module and a semantic-guided fusion module.
Our method achieves state-of-the-art results on benchmark datasets with fewer training samples.
arXiv Detail & Related papers (2022-03-24T11:19:04Z)
- Identity-Guided Face Generation with Multi-modal Contour Conditions [15.84849740726513]
We propose a framework that takes the contour and an extra image specifying the identity as the inputs.
An identity encoder extracts the identity-related feature, accompanied by a main encoder to obtain the rough contour information.
Our method can produce photo-realistic results at 1024$\times$1024 resolution.
arXiv Detail & Related papers (2021-10-10T17:08:22Z)
- Aggregated Contextual Transformations for High-Resolution Image Inpainting [57.241749273816374]
We propose an enhanced GAN-based model, named Aggregated COntextual-Transformation GAN (AOT-GAN) for high-resolution image inpainting.
To enhance context reasoning, we construct the generator of AOT-GAN by stacking multiple layers of a proposed AOT block.
For improving texture synthesis, we enhance the discriminator of AOT-GAN by training it with a tailored mask-prediction task.
arXiv Detail & Related papers (2021-04-03T15:50:17Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easy-obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
- S2FGAN: Semantically Aware Interactive Sketch-to-Face Translation [11.724779328025589]
This paper proposes a sketch-to-image generation framework called S2FGAN.
We employ two latent spaces to control the face appearance and adjust the desired attributes of the generated face.
Our method successfully outperforms state-of-the-art methods on attribute manipulation by exploiting greater control of attribute intensity.
arXiv Detail & Related papers (2020-11-30T13:42:39Z)
- Face Forgery Detection by 3D Decomposition [72.22610063489248]
We consider a face image as the production of the intervention of the underlying 3D geometry and the lighting environment.
By disentangling the face image into 3D shape, common texture, identity texture, ambient light, and direct light, we find the devil lies in the direct light and the identity texture.
We propose to utilize facial detail, which is the combination of direct light and identity texture, as the clue to detect the subtle forgery patterns.
arXiv Detail & Related papers (2020-11-19T09:25:44Z)
- Reference-guided Face Component Editing [51.29105560090321]
We propose a novel framework termed r-FACE (Reference-guided FAce Component Editing) for diverse and controllable face component editing.
Specifically, r-FACE takes an image inpainting model as the backbone, utilizing reference images as conditions for controlling the shape of face components.
In order to encourage the framework to concentrate on the target face components, an example-guided attention module is designed to fuse attention features and the target face component features extracted from the reference image.
arXiv Detail & Related papers (2020-06-03T05:34:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.