AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars
- URL: http://arxiv.org/abs/2210.06465v1
- Date: Wed, 12 Oct 2022 17:59:56 GMT
- Title: AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars
- Authors: Yue Wu, Yu Deng, Jiaolong Yang, Fangyun Wei, Qifeng Chen, Xin Tong
- Abstract summary: 2D generative models often suffer from undesirable artifacts when rendering images from different camera viewpoints.
Recently, 3D-aware GANs extend 2D GANs for explicit disentanglement of camera pose by leveraging 3D scene representations.
We propose an animatable 3D-aware GAN for multiview consistent face animation generation.
- Score: 71.00322191446203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although 2D generative models have made great progress in face image
generation and animation, they often suffer from undesirable artifacts such as
3D inconsistency when rendering images from different camera viewpoints. This
prevents them from synthesizing video animations indistinguishable from real
ones. Recently, 3D-aware GANs extend 2D GANs for explicit disentanglement of
camera pose by leveraging 3D scene representations. These methods preserve
the 3D consistency of the generated images well across different views, yet
they cannot achieve fine-grained control over other attributes, among which
facial expression control is arguably the most useful and desirable for face
animation. In this paper, we propose an animatable 3D-aware GAN for multiview
consistent face animation generation. The key idea is to decompose the 3D
representation of the 3D-aware GAN into a template field and a deformation
field, where the former represents different identities with a canonical
expression, and the latter characterizes expression variations of each
identity. To achieve meaningful control over facial expressions via
deformation, we propose a 3D-level imitative learning scheme between the
generator and a parametric 3D face model during adversarial training of the
3D-aware GAN. This helps our method achieve high-quality animatable face image
generation with strong visual 3D consistency, even though trained with only
unstructured 2D images. Extensive experiments demonstrate our superior
performance over prior works. Project page:
https://yuewuhkust.github.io/AniFaceGAN
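To make the template-field / deformation-field decomposition concrete, below is a minimal PyTorch sketch under stated assumptions. It is not the authors' implementation: every module name, latent dimension, and function (TemplateField, DeformationField, query_animatable_field, the 64-dimensional identity/expression codes) is a hypothetical illustration. The idea it shows is that a point sampled along a camera ray is first warped by an expression-conditioned deformation field into a canonical space, where an identity-conditioned template field predicts density and color.

```python
# Minimal sketch (not the paper's code) of a template-field /
# deformation-field decomposition. All names and dimensions are
# hypothetical illustrations of the idea described in the abstract.
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small coordinate MLP shared by both fields."""
    def __init__(self, in_dim, out_dim, hidden=128, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth - 1):
            layers += [nn.Linear(d, hidden), nn.ReLU(inplace=True)]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class DeformationField(nn.Module):
    """Maps a 3D point plus an expression code to an offset that warps
    the point into the canonical (neutral-expression) space."""
    def __init__(self, expr_dim=64):
        super().__init__()
        self.mlp = MLP(3 + expr_dim, 3)

    def forward(self, x, expr_code):
        expr = expr_code.expand(x.shape[0], -1)
        return self.mlp(torch.cat([x, expr], dim=-1))  # per-point offset

class TemplateField(nn.Module):
    """Canonical radiance field: identity code plus a canonical-space
    point -> (density, color)."""
    def __init__(self, id_dim=64):
        super().__init__()
        self.mlp = MLP(3 + id_dim, 4)  # 1 density + 3 RGB channels

    def forward(self, x_canonical, id_code):
        idc = id_code.expand(x_canonical.shape[0], -1)
        out = self.mlp(torch.cat([x_canonical, idc], dim=-1))
        sigma, rgb = out[..., :1], torch.sigmoid(out[..., 1:])
        return sigma, rgb

def query_animatable_field(x, id_code, expr_code, template, deform):
    """Warp observation-space samples into the canonical template, so the
    same identity can be rendered under different expressions."""
    x_canonical = x + deform(x, expr_code)
    return template(x_canonical, id_code)

# Usage: query 1024 ray samples for one identity under one expression.
template, deform = TemplateField(), DeformationField()
x = torch.randn(1024, 3)  # 3D sample points along camera rays
sigma, rgb = query_animatable_field(
    x, torch.randn(1, 64), torch.randn(1, 64), template, deform)
```

In the paper's scheme, the deformation predicted here would additionally be supervised by a 3D-level imitative loss against a parametric 3D face model during adversarial training; that loss is omitted from this sketch.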
Related papers
- Sculpt3D: Multi-View Consistent Text-to-3D Generation with Sparse 3D Prior [57.986512832738704]
We present a new framework Sculpt3D that equips the current pipeline with explicit injection of 3D priors from retrieved reference objects without re-training the 2D diffusion model.
Specifically, we demonstrate that high-quality and diverse 3D geometry can be guaranteed by keypoint supervision through a sparse ray sampling approach.
These two decoupled designs effectively harness 3D information from reference objects to generate 3D objects while preserving the generation quality of the 2D diffusion model.
arXiv Detail & Related papers (2024-03-14T07:39:59Z)
- AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z)
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate the 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields, either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
arXiv Detail & Related papers (2022-11-21T06:40:46Z)
- Controllable 3D Generative Adversarial Face Model via Disentangling Shape and Appearance [63.13801759915835]
3D face modeling has been an active area of research in computer vision and computer graphics.
This paper proposes a new 3D face generative model that can decouple identity and expression.
arXiv Detail & Related papers (2022-08-30T13:40:48Z)
- Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [40.2714783162419]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF) that effectively enforces the shape of the generated face to commit to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method, which is able to generate high-fidelity face images.
arXiv Detail & Related papers (2022-06-16T17:58:42Z)
- 3D-Aware Semantic-Guided Generative Model for Human Synthesis [67.86621343494998]
This paper proposes a 3D-aware Semantic-Guided Generative Model (3D-SGAN) for human image synthesis.
Our experiments on the DeepFashion dataset show that 3D-SGAN significantly outperforms the most recent baselines.
arXiv Detail & Related papers (2021-12-02T17:10:53Z)
- Lifting 2D StyleGAN for 3D-Aware Face Generation [52.8152883980813]
We propose a framework, called LiftedGAN, that disentangles and lifts a pre-trained StyleGAN2 for 3D-aware face generation.
Our model is "3D-aware" in the sense that it is able to (1) disentangle the latent space of StyleGAN2 into texture, shape, viewpoint, and lighting, and (2) generate 3D components for synthesized images.
arXiv Detail & Related papers (2020-11-26T05:02:09Z)