Cascade EF-GAN: Progressive Facial Expression Editing with Local Focuses
- URL: http://arxiv.org/abs/2003.05905v2
- Date: Wed, 25 Mar 2020 15:08:06 GMT
- Title: Cascade EF-GAN: Progressive Facial Expression Editing with Local Focuses
- Authors: Rongliang Wu, Gongjie Zhang, Shijian Lu, Tao Chen
- Abstract summary: We propose a novel network that performs progressive facial expression editing with local expression focuses.
The introduction of the local focus enables the Cascade EF-GAN to better preserve identity-related features.
In addition, an innovative cascade transformation strategy is designed by dividing a large facial expression transformation into multiple small ones in cascade.
- Score: 49.077232276128754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in Generative Adversarial Nets (GANs) have shown remarkable
improvements for facial expression editing. However, current methods are still
prone to generate artifacts and blurs around expression-intensive regions, and
often introduce undesired overlapping artifacts while handling large-gap
expression transformations such as transformation from furious to laughing. To
address these limitations, we propose Cascade Expression Focal GAN (Cascade
EF-GAN), a novel network that performs progressive facial expression editing
with local expression focuses. The introduction of the local focus enables the
Cascade EF-GAN to better preserve identity-related features and details around
eyes, noses and mouths, which further helps reduce artifacts and blurs within
the generated facial images. In addition, an innovative cascade transformation
strategy is designed by dividing a large facial expression transformation into
multiple small ones in cascade, which helps suppress overlapping artifacts and
produce more realistic editing while dealing with large-gap expression
transformations. Extensive experiments over two publicly available facial
expression datasets show that our proposed Cascade EF-GAN achieves superior
performance for facial expression editing.
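The cascade transformation strategy from the abstract, dividing one large expression transformation into several small ones applied in sequence, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `generator` is a hypothetical stand-in for one trained EF-GAN stage, and expression labels are treated as simple numeric vectors that can be linearly interpolated.

```python
import numpy as np

def cascade_edit(image, src_label, tgt_label, generator, n_stages=3):
    """Progressively edit `image` from src_label toward tgt_label.

    Instead of one large transformation (e.g. furious -> laughing),
    the gap is split into n_stages small steps, and each stage's
    output feeds the next stage. `generator` is a hypothetical
    stand-in for a trained editing network mapping
    (image, intermediate_target_label) -> edited image.
    """
    out = image
    for k in range(1, n_stages + 1):
        alpha = k / n_stages
        # Intermediate expression target for this cascade stage:
        # a small step along the line from source to target label.
        inter = (1.0 - alpha) * src_label + alpha * tgt_label
        out = generator(out, inter)
    return out
```

Each stage only needs to bridge a small expression gap, which is the property the paper credits with suppressing overlapping artifacts in large-gap transformations.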
Related papers
- E2F-Net: Eyes-to-Face Inpainting via StyleGAN Latent Space [4.110419543591102]
We propose a Generative Adversarial Network (GAN)-based model called Eyes-to-Face Network (E2F-Net).
The proposed approach extracts identity and non-identity features from the periocular region using two dedicated encoders.
We show that our method successfully reconstructs the whole face with high quality, surpassing current techniques.
arXiv Detail & Related papers (2024-03-18T19:11:34Z)
- GaFET: Learning Geometry-aware Facial Expression Translation from In-The-Wild Images [55.431697263581626]
We introduce a novel Geometry-aware Facial Expression Translation framework, which is based on parametric 3D facial representations and can stably decouple expressions.
We achieve higher-quality and more accurate facial expression transfer results compared to state-of-the-art methods, and demonstrate applicability to various poses and complex textures.
arXiv Detail & Related papers (2023-08-07T09:03:35Z)
- One-Shot High-Fidelity Talking-Head Synthesis with Deformable Neural Radiance Field [81.07651217942679]
Talking head generation aims to generate faces that maintain the identity information of the source image and imitate the motion of the driving image.
We propose HiDe-NeRF, which achieves high-fidelity and free-view talking-head synthesis.
arXiv Detail & Related papers (2023-04-11T09:47:35Z)
- FEAT: Face Editing with Attention [70.89233432407305]
We build on the StyleGAN generator and present a method that explicitly encourages face manipulation to focus on the intended regions.
During the generation of the edited image, the attention map serves as a mask that guides a blending between the original features and the modified ones.
arXiv Detail & Related papers (2022-02-06T06:07:34Z)
- LEED: Label-Free Expression Editing via Disentanglement [57.09545215087179]
The LEED framework is capable of editing the expression of both frontal and profile facial images without requiring any expression label.
Two novel losses are designed for optimal expression disentanglement and consistent synthesis.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
arXiv Detail & Related papers (2020-04-21T06:18:34Z)
- Fine-Grained Expression Manipulation via Structured Latent Space [30.789513209376032]
We propose an end-to-end expression-guided generative adversarial network (EGGAN) to manipulate fine-grained expressions.
Our method can manipulate fine-grained expressions, and generate continuous intermediate expressions between source and target expressions.
arXiv Detail & Related papers (2020-04-21T06:18:34Z)
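The attention-guided blending described in the FEAT entry above, where an attention map masks a blend between original and modified features, can be sketched as follows. This is an assumed, simplified form of the operation: the feature tensors and attention map are hypothetical placeholders, not FEAT's actual StyleGAN internals.

```python
import numpy as np

def attention_blend(original, edited, attn):
    """Blend original and edited features with an attention mask.

    `attn` has values in [0, 1]: entries near 1 take the modified
    features (the intended edit region), entries near 0 keep the
    original features, so details outside the edit region are
    preserved unchanged.
    """
    attn = np.clip(attn, 0.0, 1.0)
    return attn * edited + (1.0 - attn) * original
```

With a spatially localized attention map, only the targeted region (e.g. the mouth) is changed, which is how such masking confines manipulation to the intended area.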
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.