POCE: Pose-Controllable Expression Editing
- URL: http://arxiv.org/abs/2304.08938v1
- Date: Tue, 18 Apr 2023 12:26:19 GMT
- Title: POCE: Pose-Controllable Expression Editing
- Authors: Rongliang Wu, Yingchen Yu, Fangneng Zhan, Jiahui Zhang, Shengcai Liao,
Shijian Lu
- Abstract summary: This paper presents POCE, an innovative pose-controllable expression editing network.
It can generate realistic facial expressions and head poses simultaneously with just unpaired training images.
The learned model can generate realistic and high-fidelity facial expressions under various new poses.
- Score: 75.7701103792032
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial expression editing has attracted increasing attention with the advance
of deep neural networks in recent years. However, most existing methods suffer
from compromised editing fidelity and limited usability as they either ignore
pose variations (unrealistic editing) or require paired training data (not easy
to collect) for pose controls. This paper presents POCE, an innovative
pose-controllable expression editing network that can generate realistic facial
expressions and head poses simultaneously with just unpaired training images.
POCE achieves more accessible and realistic pose-controllable expression
editing by mapping face images into UV space, where facial expressions and head
poses can be disentangled and edited separately. POCE has two novel designs.
The first is self-supervised UV completion, which completes UV maps sampled
under different head poses, as these often suffer from self-occlusions and
missing facial texture. The second is weakly-supervised UV editing, which
generates new facial expressions with minimal modification of facial identity;
the synthesized expression can be controlled by either an expression label or
transplanted directly from a reference UV map via feature transfer. Extensive
experiments show that POCE can learn from unpaired face
images effectively, and the learned model can generate realistic and
high-fidelity facial expressions under various new poses.
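To make the pipeline described in the abstract concrete, below is a minimal PyTorch-style sketch of pose-controllable expression editing in UV space. It is an illustration only: the unwrap/render placeholders, module architectures, UV resolution, and label count are assumptions rather than the authors' implementation, and the actual method relies on a fitted 3D face model for UV sampling and rendering.

```python
# Illustrative sketch of a UV-space expression/pose editing pipeline in the
# spirit of POCE. All names, shapes, and the placeholder unwrap/render
# functions are assumptions for illustration, not the paper's implementation.
import torch
import torch.nn as nn

UV_SIZE = 256          # assumed UV texture resolution
NUM_EXPRESSIONS = 8    # assumed number of discrete expression labels


def unwrap_to_uv(face_image):
    """Placeholder for fitting a 3D face model and sampling the facial texture
    into UV space. Returns a partial UV map plus a visibility mask marking
    self-occluded (missing) regions under the observed head pose."""
    batch = face_image.size(0)
    uv_partial = torch.zeros(batch, 3, UV_SIZE, UV_SIZE)
    visibility = torch.ones(batch, 1, UV_SIZE, UV_SIZE)
    return uv_partial, visibility


def render_from_uv(uv_map, yaw, pitch):
    """Placeholder for re-rendering the completed/edited UV texture under a
    new head pose (a real renderer would rasterize the textured 3D mesh)."""
    return uv_map


class UVCompletion(nn.Module):
    """Self-supervised completion network: inpaints occluded UV regions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, uv_partial, visibility):
        x = torch.cat([uv_partial, visibility], dim=1)
        return self.net(x)


class UVExpressionEditor(nn.Module):
    """Weakly-supervised editor: re-synthesizes the UV map under a target
    expression label while preserving identity cues of the input map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + NUM_EXPRESSIONS, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, uv_full, expr_label):
        # Broadcast the one-hot expression label over the spatial dimensions.
        label_map = expr_label[:, :, None, None].expand(-1, -1, UV_SIZE, UV_SIZE)
        return self.net(torch.cat([uv_full, label_map], dim=1))


if __name__ == "__main__":
    face = torch.rand(1, 3, 256, 256)                        # input face image
    target = nn.functional.one_hot(torch.tensor([3]), NUM_EXPRESSIONS).float()

    uv_partial, visibility = unwrap_to_uv(face)               # factor out pose
    uv_full = UVCompletion()(uv_partial, visibility)          # fill occlusions
    uv_edited = UVExpressionEditor()(uv_full, target)         # edit expression
    output = render_from_uv(uv_edited, yaw=30.0, pitch=0.0)   # new head pose
    print(output.shape)
```

In the paper, the target expression can alternatively be transplanted from a reference UV map via feature transfer; the sketch above shows only the label-conditioned path.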
Related papers
- Towards Localized Fine-Grained Control for Facial Expression Generation [54.82883891478555]
Humans, particularly their faces, are central to content generation due to their ability to convey rich expressions and intent.
Current generative models mostly generate flat neutral expressions and characterless smiles without authenticity.
We propose the use of AUs (action units) for facial expression control in face generation.
arXiv Detail & Related papers (2024-07-25T18:29:48Z) - MagicPose: Realistic Human Poses and Facial Expressions Retargeting with Identity-aware Diffusion [22.62170098534097]
We propose MagicPose, a diffusion-based model for 2D human pose and facial expression retargeting.
By leveraging the prior knowledge of image diffusion models, MagicPose generalizes well to unseen human identities and complex poses.
The proposed model is easy to use and can be considered as a plug-in module/extension to Stable Diffusion.
arXiv Detail & Related papers (2023-11-18T10:22:44Z) - AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image
Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z) - DPE: Disentanglement of Pose and Expression for General Video Portrait
Editing [30.1002454931945]
One-shot video-driven talking face generation aims at producing a synthetic talking video by transferring the facial motion from a video to an arbitrary portrait image.
In this paper, we introduce a novel self-supervised disentanglement framework to decouple pose and expression without 3DMMs and paired data.
arXiv Detail & Related papers (2023-01-16T06:39:51Z) - Pixel Sampling for Style Preserving Face Pose Editing [53.14006941396712]
We present a novel two-stage approach that casts the task of face pose manipulation as a face inpainting problem.
By selectively sampling pixels from the input face and slightly adjusting their relative locations, the face editing result faithfully preserves the identity information as well as the image style.
With the 3D facial landmarks as guidance, our method is able to manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, resulting in more flexible face pose editing.
arXiv Detail & Related papers (2021-06-14T11:29:29Z) - PhotoApp: Photorealistic Appearance Editing of Head Portraits [97.23638022484153]
We present an approach for high-quality intuitive editing of the camera viewpoint and scene illumination in a portrait image.
Most editing approaches rely on supervised learning using training data captured with setups such as light and camera stages.
We design a supervised learning problem that operates in the latent space of StyleGAN.
This combines the best of supervised learning and generative adversarial modeling.
arXiv Detail & Related papers (2021-03-13T08:59:49Z) - HeadGAN: One-shot Neural Head Synthesis and Editing [70.30831163311296]
HeadGAN is a system that conditions synthesis on 3D face representations, adapted to the facial geometry of any reference image.
The 3D face representation enables HeadGAN to be further used as an efficient method for compression and reconstruction and a tool for expression and pose editing.
arXiv Detail & Related papers (2020-12-15T12:51:32Z) - Facial UV Map Completion for Pose-invariant Face Recognition: A Novel
Adversarial Approach based on Coupled Attention Residual UNets [3.999563862575646]
We propose a novel generative model called Attention ResCUNet-GAN to improve the UV map completion.
We show that the proposed method yields superior performance compared to other existing methods.
arXiv Detail & Related papers (2020-11-02T11:46:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.