3D Cartoon Face Generation with Controllable Expressions from a Single GAN Image
- URL: http://arxiv.org/abs/2207.14425v1
- Date: Fri, 29 Jul 2022 01:06:21 GMT
- Title: 3D Cartoon Face Generation with Controllable Expressions from a Single GAN Image
- Authors: Hao Wang, Guosheng Lin, Steven C. H. Hoi, Chunyan Miao
- Abstract summary: We generate 3D cartoon face shapes from a single 2D GAN-generated human face.
We manipulate latent codes to generate images with different poses and lighting, so that we can reconstruct the 3D cartoon face shapes.
- Score: 142.047662926209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we investigate an open research task: generating 3D
cartoon face shapes from a single 2D GAN-generated human face, without 3D
supervision, while also manipulating the facial expressions of the 3D shapes.
To this end, we discover the semantic meanings of the StyleGAN latent space,
so that we can produce face images of various expressions, poses, and lighting
by controlling the latent codes. Specifically, we first finetune the
pretrained StyleGAN face model on cartoon datasets. By feeding the same latent
codes to the face and cartoon generation models, we realize the translation
from 2D human face images to cartoon-styled avatars. We then discover semantic
directions of the GAN latent space to change facial expressions while
preserving the original identity. Since we have no 3D annotations for cartoon
faces, we manipulate the latent codes to generate images with different poses
and lighting, from which we reconstruct the 3D cartoon face shapes. We
validate the efficacy of our method on three cartoon datasets, qualitatively
and quantitatively.
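The pipeline described in the abstract reduces to two generators sharing one latent space plus vector arithmetic on latent codes. Below is a minimal, hedged sketch of those steps: feeding the same latent code to a face model and its cartoon-finetuned copy, then editing an expression via a semantic direction (w_edit = w + alpha * d). The generators here are toy MLP stand-ins, the direction is random rather than discovered, and all names (`face_G`, `cartoon_G`, `make_toy_generator`) are illustrative assumptions, not the authors' released code.

```python
# Minimal, runnable sketch of the latent-code manipulations above.
# Toy stand-ins only -- NOT the authors' implementation.
import torch
import torch.nn as nn

LATENT_DIM = 512  # StyleGAN's usual latent width

def make_toy_generator(seed: int) -> nn.Module:
    """Stand-in for a StyleGAN generator mapping latent codes to images."""
    torch.manual_seed(seed)
    return nn.Sequential(
        nn.Linear(LATENT_DIM, 1024),
        nn.ReLU(),
        nn.Linear(1024, 3 * 64 * 64),  # a flat 64x64 RGB "image"
    )

face_G = make_toy_generator(seed=0)     # stands in for the pretrained face model
cartoon_G = make_toy_generator(seed=1)  # stands in for the cartoon-finetuned copy

# Step 1: the same latent code drives both models, so the cartoon output
# is a stylized counterpart of the human face (2D face -> cartoon avatar).
w = torch.randn(1, LATENT_DIM)
face_img = face_G(w)
cartoon_img = cartoon_G(w)

# Step 2: edit the expression by moving along a semantic latent direction,
#     w_edit = w + alpha * d,
# which changes the attribute while (ideally) preserving identity. Here d
# is random; in the paper it would be a discovered expression direction.
d = torch.randn(1, LATENT_DIM)
d = d / d.norm()
for alpha in (-3.0, 0.0, 3.0):
    edited = cartoon_G(w + alpha * d)

# Step 3 (not shown): pose and lighting directions are varied the same way
# to render multiple views of one identity, providing the multi-image
# signal for reconstructing the 3D cartoon shape without 3D annotations.
```

Note that sharing a latent code across the two models is only meaningful because the cartoon generator is finetuned from the face model, which is generally expected to keep the two latent spaces roughly aligned; a cartoon GAN trained from scratch would not preserve this correspondence.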
Related papers
- DEGAS: Detailed Expressions on Full-Body Gaussian Avatars [13.683836322899953]
We present DEGAS, the first 3D Gaussian Splatting (3DGS)-based modeling method for full-body avatars with rich facial expressions.
We propose to adopt the expression latent space trained solely on 2D portrait images, bridging the gap between 2D talking faces and 3D avatars.
arXiv Detail & Related papers (2024-08-20T06:52:03Z)
- AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z)
- Generating Animatable 3D Cartoon Faces from Single Portraits [51.15618892675337]
We present a novel framework to generate animatable 3D cartoon faces from a single portrait image.
We propose a two-stage reconstruction method to recover the 3D cartoon face with detailed texture.
Finally, we propose a semantic-preserving face rigging method based on manually created templates and deformation transfer.
arXiv Detail & Related papers (2023-07-04T04:12:50Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars [71.00322191446203]
2D generative models often suffer from undesirable artifacts when rendering images from different camera viewpoints.
Recently, 3D-aware GANs extend 2D GANs for explicit disentanglement of camera pose by leveraging 3D scene representations.
We propose an animatable 3D-aware GAN for multiview consistent face animation generation.
arXiv Detail & Related papers (2022-10-12T17:59:56Z)
- Lifting 2D StyleGAN for 3D-Aware Face Generation [52.8152883980813]
We propose a framework, called LiftedGAN, that disentangles and lifts a pre-trained StyleGAN2 for 3D-aware face generation.
Our model is "3D-aware" in the sense that it is able to (1) disentangle the latent space of StyleGAN2 into texture, shape, viewpoint, lighting and (2) generate 3D components for synthetic images.
arXiv Detail & Related papers (2020-11-26T05:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.