Controllable Face Manipulation and UV Map Generation by Self-supervised
Learning
- URL: http://arxiv.org/abs/2209.12050v1
- Date: Sat, 24 Sep 2022 16:49:25 GMT
- Title: Controllable Face Manipulation and UV Map Generation by Self-supervised
Learning
- Authors: Yuanming Li, Jeong-gi Kwak, David Han, Hanseok Ko
- Abstract summary: Recent methods achieve explicit control over 2D images by combining a 2D generative model with a 3DMM.
Due to the lack of realism and clarity in texture reconstruction by 3DMM, there is a domain gap between the synthetic image and the rendered image of 3DMM.
In this study, we propose to explicitly edit the latent space of the pretrained StyleGAN by controlling the parameters of the 3DMM.
- Score: 20.10160338724354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although manipulating facial attributes by Generative Adversarial Networks
(GANs) has been remarkably successful recently, there are still some challenges
in explicit control of features such as pose, expression, lighting, etc. Recent
methods achieve explicit control over 2D images by combining a 2D generative
model with a 3DMM. However, due to the lack of realism and clarity in texture
reconstruction by 3DMM, there is a domain gap between the synthetic image and
the rendered image of 3DMM. Since rendered 3DMM images contain only the facial
region without the background, directly computing the loss between these two
domains is not ideal and the resultant trained model will be biased. In this
study, we propose to explicitly edit the latent space of the pretrained
StyleGAN by controlling the parameters of the 3DMM. To address the domain gap
problem, we propose a novel network called 'Map and edit' and a simple but
effective attribute editing method to avoid direct loss computation between
rendered and synthesized images. Furthermore, our model can accurately
generate multi-view face images while keeping the identity unchanged. As a
by-product, combined with visibility masks, our proposed model can also
generate texture-rich and high-resolution UV facial textures. Our model relies
on a pretrained StyleGAN and is trained in a self-supervised manner without
any manual annotations or datasets.
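Below is a minimal, illustrative PyTorch sketch of the core idea stated above: mapping a change in 3DMM parameters to an offset applied to the latent code of a pretrained StyleGAN. The module name, the parameter dimension, the MLP layout, and the W+ editing convention are assumptions made for illustration; the paper's actual 'Map and edit' network and training losses are not detailed in this listing.

```python
# Hypothetical sketch of 3DMM-driven latent editing on top of a frozen StyleGAN.
# All names, dimensions, and the MLP design are assumptions, not the paper's code.
import torch
import torch.nn as nn

class MapAndEditSketch(nn.Module):
    def __init__(self, p_dim=254, w_dim=512, n_styles=18):
        super().__init__()
        self.n_styles, self.w_dim = n_styles, w_dim
        # Map a 3DMM parameter change (pose, expression, lighting, ...) to a W+ offset.
        self.mapper = nn.Sequential(
            nn.Linear(p_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, n_styles * w_dim),
        )

    def forward(self, w_plus, p_src, p_tgt):
        """w_plus: (B, n_styles, w_dim) latent of the input image.
        p_src / p_tgt: (B, p_dim) 3DMM parameters before / after editing."""
        # Editing with the parameter difference leaves the latent unchanged
        # when source and target 3DMM parameters coincide.
        delta = self.mapper(p_tgt - p_src).view(-1, self.n_styles, self.w_dim)
        return w_plus + delta
```

One way such a pipeline could sidestep the rendered-vs-synthesized domain gap, consistent with the abstract but not spelled out in it, is to feed the edited latent to the frozen StyleGAN generator and supervise with 3DMM parameters re-estimated from the synthesized image rather than with pixel losses against the 3DMM render.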
Related papers
- IFaceUV: Intuitive Motion Facial Image Generation by Identity
Preservation via UV map [5.397942823754509]
IFaceUV is a pipeline that properly combines 2D and 3D information to conduct the facial reenactment task.
The three-dimensional morphable face models (3DMMs) and corresponding UV maps are utilized to intuitively control facial motions and textures.
In our pipeline, we first extract 3DMM parameters and corresponding UV maps from source and target images.
In parallel, we warp the source image according to the 2D flow field obtained from the 2D warping network.
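A minimal PyTorch sketch of the 2D warping step mentioned above is given below: a predicted dense flow field is applied to the source image with grid_sample. The function name and the pixel-offset flow convention are assumptions for illustration; IFaceUV's actual warping network is not described in this summary.

```python
# Hypothetical flow-field warping, assuming the flow is given in pixel offsets.
import torch
import torch.nn.functional as F

def warp_with_flow(source, flow):
    """source: (B, 3, H, W) image; flow: (B, 2, H, W) per-pixel (dx, dy) offsets."""
    b, _, h, w = source.shape
    # Base sampling grid in normalized [-1, 1] coordinates (x first, as grid_sample expects).
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=source.device),
        torch.linspace(-1, 1, w, device=source.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    # Convert pixel offsets to normalized offsets and add them to the base grid.
    offset = torch.stack((flow[:, 0] / (w / 2), flow[:, 1] / (h / 2)), dim=-1)
    return F.grid_sample(source, base + offset, align_corners=True)
```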
arXiv Detail & Related papers (2023-06-08T06:15:13Z) - Single-Shot Implicit Morphable Faces with Consistent Texture
Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z) - CGOF++: Controllable 3D Face Synthesis with Conditional Generative
Occupancy Fields [52.14985242487535]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF++) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method and show more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods.
arXiv Detail & Related papers (2022-11-23T19:02:50Z) - Next3D: Generative Neural Texture Rasterization for 3D-Aware Head
Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
arXiv Detail & Related papers (2022-11-21T06:40:46Z) - NeuralReshaper: Single-image Human-body Retouching with Deep Neural
Networks [50.40798258968408]
We present NeuralReshaper, a novel method for semantic reshaping of human bodies in single images using deep generative networks.
Our approach follows a fit-then-reshape pipeline, which first fits a parametric 3D human model to a source human image.
To deal with the lack of paired training data, we introduce a novel self-supervised strategy to train our network.
arXiv Detail & Related papers (2022-03-20T09:02:13Z) - MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation [69.35523133292389]
We propose a framework that a priori models physical attributes of the face explicitly, thus providing disentanglement by design.
Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models.
It achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
arXiv Detail & Related papers (2021-11-01T15:53:36Z) - Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic renderings.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z) - Cross-Domain and Disentangled Face Manipulation with 3D Guidance [33.43993665841577]
We propose the first method to manipulate faces in arbitrary domains using a human 3DMM.
This is achieved through two major steps, the first of which is a disentangled mapping from 3DMM parameters to the latent space embedding of a pre-trained StyleGAN2.
Experiments and comparisons demonstrate the superiority of our high-quality semantic manipulation method on a variety of face domains.
arXiv Detail & Related papers (2021-04-22T17:59:50Z) - OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image with a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z) - Convolutional Generation of Textured 3D Meshes [34.20939983046376]
We propose a framework that can generate triangle meshes and associated high-resolution texture maps, using only 2D supervision from single-view natural images.
A key contribution of our work is the encoding of the mesh and texture as 2D representations, which are semantically aligned and can be easily modeled by a 2D convolutional GAN.
We demonstrate the efficacy of our method on Pascal3D+ Cars and CUB, both in an unconditional setting and in settings where the model is conditioned on class labels, attributes, and text.
arXiv Detail & Related papers (2020-06-13T15:23:29Z)
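As a rough illustration of the idea in the last entry, the sketch below shows a single 2D convolutional generator emitting two aligned UV-space maps: an RGB texture map and a per-texel geometry (displacement) map for a template mesh. The layer sizes, resolutions, and the displacement-based geometry encoding are assumptions for illustration, not the paper's architecture.

```python
# Hypothetical generator producing aligned texture and geometry maps in UV space.
import torch
import torch.nn as nn

class TexturedMeshGeneratorSketch(nn.Module):
    def __init__(self, z_dim=128, base=256):
        super().__init__()
        self.base = base
        self.fc = nn.Linear(z_dim, base * 4 * 4)
        blocks, ch = [], base
        for _ in range(4):  # upsample 4x4 -> 64x64 UV maps
            blocks += [
                nn.Upsample(scale_factor=2),
                nn.Conv2d(ch, ch // 2, 3, padding=1),
                nn.ReLU(inplace=True),
            ]
            ch //= 2
        self.backbone = nn.Sequential(*blocks)
        self.texture_head = nn.Conv2d(ch, 3, 3, padding=1)   # RGB texture map
        self.geometry_head = nn.Conv2d(ch, 3, 3, padding=1)  # XYZ displacement map

    def forward(self, z):
        x = self.backbone(self.fc(z).view(-1, self.base, 4, 4))
        # Both maps share the same UV layout, so texture and geometry stay aligned.
        return torch.tanh(self.texture_head(x)), self.geometry_head(x)
```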