High-Fidelity and Arbitrary Face Editing
- URL: http://arxiv.org/abs/2103.15814v1
- Date: Mon, 29 Mar 2021 17:59:50 GMT
- Title: High-Fidelity and Arbitrary Face Editing
- Authors: Yue Gao, Fangyun Wei, Jianmin Bao, Shuyang Gu, Dong Chen, Fang Wen,
Zhouhui Lian
- Abstract summary: Cycle consistency is widely used for face editing.
We propose a simple yet effective method named HifaFace to address the problem.
Powered by the proposed framework, we achieve high-fidelity and arbitrary face editing.
- Score: 46.40847958602942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cycle consistency is widely used for face editing. However, we observe that
the generator tends to find a tricky way to hide information from the original
image to satisfy the constraint of cycle consistency, making it impossible to
maintain the rich details (e.g., wrinkles and moles) of non-editing areas. In
this work, we propose a simple yet effective method named HifaFace to address
the above-mentioned problem from two perspectives. First, we relieve the
pressure of the generator to synthesize rich details by directly feeding the
high-frequency information of the input image into the end of the generator.
Second, we adopt an additional discriminator to encourage the generator to
synthesize rich details. Specifically, we apply wavelet transformation to
transform the image into multi-frequency domains, among which the
high-frequency parts can be used to recover the rich details. We also notice
that a fine-grained and wider-range control for the attribute is of great
importance for face editing. To achieve this goal, we propose a novel attribute
regression loss. Powered by the proposed framework, we achieve high-fidelity
and arbitrary face editing, outperforming other state-of-the-art approaches.
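The wavelet decomposition described in the abstract can be illustrated with a one-level 2D Haar transform. The paper does not specify its exact wavelet basis or implementation, so the Haar filters and function names below are illustrative assumptions; this is a minimal NumPy sketch of splitting an image into one low-frequency band and three high-frequency detail bands.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar wavelet transform (illustrative sketch).

    Splits an image (height and width must be even) into a low-frequency
    approximation band and three high-frequency detail bands -- the kind
    of multi-frequency decomposition the abstract describes for
    separating coarse structure from fine details.
    """
    a = img[0::2, 0::2]
    b = img[0::2, 1::2]
    c = img[1::2, 0::2]
    d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0  # low-frequency: coarse structure
    lh = (a + b - c - d) / 2.0  # high-frequency: horizontal details
    hl = (a - b + c - d) / 2.0  # high-frequency: vertical details
    hh = (a - b - c + d) / 2.0  # high-frequency: diagonal details
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: reconstructs the original image exactly."""
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w), dtype=ll.dtype)
    img[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    img[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    img[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    img[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return img
```

Zeroing the three high-frequency bands before the inverse transform discards fine details such as wrinkles and moles; it is precisely this high-frequency information that, per the abstract, is fed directly into the end of the generator so it need not re-synthesize those details itself.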
Related papers
- CLR-Face: Conditional Latent Refinement for Blind Face Restoration Using
Score-Based Diffusion Models [57.9771859175664]
Recent generative-prior-based methods have shown promising blind face restoration performance.
Generating fine-grained facial details faithful to inputs remains a challenging problem.
We introduce a diffusion-based-prior inside a VQGAN architecture that focuses on learning the distribution over uncorrupted latent embeddings.
arXiv Detail & Related papers (2024-02-08T23:51:49Z)
- SARGAN: Spatial Attention-based Residuals for Facial Expression Manipulation [1.7056768055368383]
We present a novel method named SARGAN that addresses the limitations from three perspectives.
We exploited a symmetric encoder-decoder network to attend facial features at multiple scales.
Our proposed model performs significantly better than state-of-the-art methods.
arXiv Detail & Related papers (2023-03-30T08:15:18Z)
- Gradient Adjusting Networks for Domain Inversion [82.72289618025084]
StyleGAN2 was demonstrated to be a powerful image generation engine that supports semantic editing.
We present a per-image optimization method that tunes a StyleGAN2 generator such that it achieves a local edit to the generator's weights.
Our experiments show a sizable gap in performance over the current state of the art in this very active domain.
arXiv Detail & Related papers (2023-02-22T14:47:57Z)
- High-resolution Face Swapping via Latent Semantics Disentanglement [50.23624681222619]
We present a novel high-resolution hallucination face swapping method using the inherent prior knowledge of a pre-trained GAN model.
We explicitly disentangle the latent semantics by utilizing the progressive nature of the generator.
We extend our method to video face swapping by enforcing two-temporal constraints on the latent space and the image space.
arXiv Detail & Related papers (2022-03-30T00:33:08Z)
- Identity-Guided Face Generation with Multi-modal Contour Conditions [15.84849740726513]
We propose a framework that takes the contour and an extra image specifying the identity as the inputs.
An identity encoder extracts the identity-related feature, accompanied by a main encoder to obtain the rough contour information.
Our method can produce photo-realistic results at 1024×1024 resolution.
arXiv Detail & Related papers (2021-10-10T17:08:22Z)
- High-Fidelity GAN Inversion for Image Attribute Editing [44.54180180869355]
We present a novel high-fidelity generative adversarial network (GAN) inversion framework that enables attribute editing with image-specific details well-preserved.
To achieve high-fidelity editing, we propose an adaptive distortion alignment (ADA) module with a self-supervised training scheme.
arXiv Detail & Related papers (2021-09-14T11:23:48Z)
- Pivotal Tuning for Latent-based Editing of Real Images [40.22151052441958]
A surge of advanced facial editing techniques have been proposed that leverage the generative power of a pre-trained StyleGAN.
To successfully edit an image this way, one must first project (or invert) the image into the pre-trained generator's domain.
This means it is still challenging to apply ID-preserving facial latent-space editing to faces which are out of the generator's domain.
arXiv Detail & Related papers (2021-06-10T13:47:59Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easy-obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors; separate texture and colors by using region-wise style codes.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
- Disentangled Image Generation Through Structured Noise Injection [48.956122902434444]
We show that disentanglement in the first layer of the generator network leads to disentanglement in the generated image.
We achieve spatial disentanglement, scale-space disentanglement, and disentanglement of the foreground object from the background style.
This empirically leads to better disentanglement scores than state-of-the-art methods on the FFHQ dataset.
arXiv Detail & Related papers (2020-04-26T15:15:19Z)
- Face Attribute Invertion [0.0]
We propose a novel self-perception method based on GANs for automatic face attribute inversion.
Our model is quite stable in training and capable of preserving finer details of the original face images.
arXiv Detail & Related papers (2020-01-14T08:41:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.