Pixel Sampling for Style Preserving Face Pose Editing
- URL: http://arxiv.org/abs/2106.07310v1
- Date: Mon, 14 Jun 2021 11:29:29 GMT
- Title: Pixel Sampling for Style Preserving Face Pose Editing
- Authors: Xiangnan Yin, Di Huang, Hongyu Yang, Zehua Fu, Yunhong Wang, Liming Chen
- Abstract summary: We present a novel two-stage approach to solve the dilemma, where the task of face pose manipulation is cast into face inpainting.
By selectively sampling pixels from the input face and slightly adjusting their relative locations, the editing result faithfully preserves both the identity information and the image style.
With 3D facial landmarks as guidance, our method is able to manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, resulting in more flexible face pose editing.
- Score: 53.14006941396712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing auto-encoder based face pose editing methods primarily focus on modeling the identity-preserving ability during pose synthesis, but are less able to properly preserve the image style, which refers to color, brightness, saturation, etc. In this paper, we take advantage of the well-known frontal/profile optical illusion and present a novel two-stage approach to solve this dilemma, where the task of face pose manipulation is cast into face inpainting. By selectively sampling pixels from the input face and slightly adjusting their relative locations with the proposed "Pixel Attention Sampling" module, the face editing result keeps both the identity information and the image style faithfully unchanged. By leveraging high-dimensional embedding at the inpainting stage, finer details are generated. Further, with 3D facial landmarks as guidance, our method is able to manipulate face pose in three degrees of freedom, i.e., yaw, pitch, and roll, resulting in more flexible face pose editing than merely controlling the yaw angle, as is usually the case in the current state of the art. Both qualitative and quantitative evaluations validate the superiority of the proposed approach.
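As a rough illustration of the two-stage idea, the sketch below rotates 3D landmarks by the three Euler angles, re-samples input pixels along a predicted grid, and passes the warped result to an inpainting network. This is a minimal PyTorch sketch under assumed interfaces: `flow_net` and `inpaint_net` are hypothetical stand-ins for the paper's Pixel Attention Sampling module and inpainting stage, and the rotation convention is just one common choice, not necessarily the paper's.

```python
import math

import torch
import torch.nn.functional as F


def euler_to_rotation(yaw, pitch, roll):
    """3x3 rotation matrix from yaw/pitch/roll in radians (one common Z-X-Y convention)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    Ry = torch.tensor([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])  # yaw
    Rx = torch.tensor([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])  # pitch
    Rz = torch.tensor([[cr, -sr, 0.0], [sr, cr, 0.0], [0.0, 0.0, 1.0]])  # roll
    return Rz @ Rx @ Ry


def edit_pose(face, landmarks_3d, yaw, pitch, roll, flow_net, inpaint_net):
    """face: (1, 3, H, W) image; landmarks_3d: (N, 3) zero-centered landmarks."""
    # 3-DoF guidance: rotate the 3D landmarks to the target pose.
    R = euler_to_rotation(yaw, pitch, roll)
    target_landmarks = landmarks_3d @ R.T

    # Stage 1: predict a sampling grid from the source/target landmarks and
    # re-sample the input pixels. Because output pixels are copied from the
    # input, color, brightness, and saturation are preserved by construction.
    grid = flow_net(face, landmarks_3d, target_landmarks)  # (1, H, W, 2) in [-1, 1]
    warped = F.grid_sample(face, grid, align_corners=True)

    # Stage 2: inpaint the dis-occluded regions the sampling cannot cover.
    return inpaint_net(warped)
```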
Related papers
- SUPER: Selfie Undistortion and Head Pose Editing with Identity Preservation [37.89326064230339]
SUPER is a novel method for eliminating distortions and adjusting head pose in a close-up face crop.
We perform 3D GAN inversion for a facial image by optimizing camera parameters and face latent code.
We estimate depth from the obtained latent code, create a depth-induced 3D mesh, and render it with updated camera parameters to obtain a warped portrait; the inversion step is sketched below.
arXiv Detail & Related papers (2024-06-18T15:14:14Z)
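A hedged sketch of the 3D GAN inversion step SUPER describes above: camera parameters and a face latent code are optimized jointly so the render matches the input crop. `G`, its `latent_dim` attribute, the (yaw, pitch, radius) camera parameterization, and the plain L1 objective are all illustrative assumptions, not SUPER's actual implementation.

```python
import torch
import torch.nn.functional as F


def invert(G, target, steps=500, lr=1e-2):
    """Jointly optimize a latent code and camera parameters against one image.

    G(latent, camera) is assumed to be a 3D-aware generator returning a
    (1, 3, H, W) render.
    """
    latent = torch.zeros(1, G.latent_dim, requires_grad=True)
    camera = torch.tensor([0.0, 0.0, 2.7], requires_grad=True)  # yaw, pitch, radius
    opt = torch.optim.Adam([latent, camera], lr=lr)
    for _ in range(steps):
        render = G(latent, camera)
        loss = F.l1_loss(render, target)  # real pipelines add perceptual/ID terms
        opt.zero_grad()
        loss.backward()
        opt.step()
    return latent.detach(), camera.detach()
```

With the recovered latent and camera, the depth-to-mesh re-rendering step from the summary can then produce the undistorted portrait.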
- End-to-end Face-swapping via Adaptive Latent Representation Learning [12.364688530047786]
This paper proposes a novel, end-to-end integrated framework for high-resolution, attribute-preserving face swapping.
By integrating facial perceiving and blending into the end-to-end training and testing process, our framework achieves highly realistic face swapping on in-the-wild faces.
arXiv Detail & Related papers (2023-03-07T19:16:20Z)
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation [69.35523133292389]
We propose a framework that explicitly models physical attributes of the face a priori, thus providing disentanglement by design.
Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models.
It achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
arXiv Detail & Related papers (2021-11-01T15:53:36Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct the 3D face from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- FaceController: Controllable Attribute Editing for Face in the Wild [74.56117807309576]
We propose a simple feed-forward network to generate high-fidelity manipulated faces.
By simply employing some existing and easily obtainable prior information, our method can control, transfer, and edit diverse attributes of faces in the wild.
In our method, we decouple identity, expression, pose, and illumination using 3D priors, and separate texture and colors using region-wise style codes, as sketched after this entry.
arXiv Detail & Related papers (2021-02-23T02:47:28Z)
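A minimal sketch of the decoupling idea in FaceController's summary above: a 3DMM fit yields separate coefficient groups that can be recombined across images before decoding with region-wise style codes. `fit_3dmm`, `style_encoder`, and `decoder` are hypothetical stand-ins; the actual networks and coefficient layout differ.

```python
import torch


def swap_attributes(src, ref, fit_3dmm, style_encoder, decoder):
    """Recombine disentangled 3DMM coefficient groups across two images."""
    # 3D priors: a 3DMM fit exposes identity, expression, pose, and
    # illumination as separate coefficient groups (decoupled by construction).
    id_s, exp_s, pose_s, light_s = fit_3dmm(src)
    id_r, exp_r, pose_r, light_r = fit_3dmm(ref)

    # Keep the source identity, borrow the remaining attributes from the
    # reference; any other recombination edits a different attribute.
    coeffs = torch.cat([id_s, exp_r, pose_r, light_r], dim=-1)

    # Region-wise style codes carry texture and color per facial region,
    # kept separate from the geometric coefficients above.
    styles = style_encoder(src)  # e.g. (B, num_regions, style_dim)
    return decoder(coeffs, styles)
```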
- PIE: Portrait Image Embedding for Semantic Control [82.69061225574774]
We present the first approach for embedding real portrait images in the latent space of StyleGAN.
We use StyleRig, a pretrained neural network that maps the control space of a 3D morphable face model to the latent space of the GAN.
An identity energy preservation term allows spatially coherent edits while maintaining facial integrity.
arXiv Detail & Related papers (2020-09-20T17:53:51Z)
- Reference-guided Face Component Editing [51.29105560090321]
We propose a novel framework termed r-FACE (Reference-guided FAce Component Editing) for diverse and controllable face component editing.
Specifically, r-FACE takes an image inpainting model as the backbone, utilizing reference images as conditions for controlling the shape of face components.
To encourage the framework to concentrate on the target face components, an example-guided attention module is designed to fuse attention features with the target face-component features extracted from the reference image; a minimal sketch follows this entry.
arXiv Detail & Related papers (2020-06-03T05:34:54Z)
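A minimal sketch of an example-guided attention block in the spirit of r-FACE's summary above: target-component features attend over reference-image features and are fused back residually. The single-head design, 1x1-convolution projections, and residual fusion are assumptions, not r-FACE's exact module.

```python
import torch
import torch.nn as nn


class ExampleGuidedAttention(nn.Module):
    """Fuse reference-image features into target face-component features."""

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)  # queries from the target components
        self.k = nn.Conv2d(dim, dim, 1)  # keys from the reference image
        self.v = nn.Conv2d(dim, dim, 1)  # values from the reference image

    def forward(self, target_feat, ref_feat):
        b, c, h, w = target_feat.shape
        q = self.q(target_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        k = self.k(ref_feat).flatten(2)                     # (B, C, HW)
        v = self.v(ref_feat).flatten(2).transpose(1, 2)     # (B, HW, C)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)      # (B, HW, HW)
        fused = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return target_feat + fused  # residual fusion keeps the target content
```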
- Exemplar-based Generative Facial Editing [2.272764591035106]
We propose a novel generative approach for exemplar-based facial editing in the form of region inpainting.
Experimental results demonstrate our method can produce diverse and personalized face editing results.
arXiv Detail & Related papers (2020-05-31T09:15:28Z)