LumiGAN: Unconditional Generation of Relightable 3D Human Faces
- URL: http://arxiv.org/abs/2304.13153v1
- Date: Tue, 25 Apr 2023 21:03:20 GMT
- Title: LumiGAN: Unconditional Generation of Relightable 3D Human Faces
- Authors: Boyang Deng, Yifan Wang, Gordon Wetzstein
- Abstract summary: We introduce LumiGAN, an unconditional Generative Adversarial Network (GAN) for 3D human faces with a physically based lighting module.
LumiGAN can create realistic shadow effects using an efficient visibility formulation that is learned in a self-supervised manner.
In addition to relightability, we demonstrate significantly improved geometry generation compared to state-of-the-art non-relightable 3D GANs.
- Score: 50.32937196797716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised learning of 3D human faces from unstructured 2D image data is an
active research area. While recent works have achieved an impressive level of
photorealism, they commonly lack control of lighting, which prevents the
generated assets from being deployed in novel environments. To this end, we
introduce LumiGAN, an unconditional Generative Adversarial Network (GAN) for 3D
human faces with a physically based lighting module that enables relighting
under novel illumination at inference time. Unlike prior work, LumiGAN can
create realistic shadow effects using an efficient visibility formulation that
is learned in a self-supervised manner. LumiGAN generates plausible physical
properties for relightable faces, including surface normals, diffuse albedo,
and specular tint without any ground truth data. In addition to relightability,
we demonstrate significantly improved geometry generation compared to
state-of-the-art non-relightable 3D GANs and notably better photorealism than
existing relightable GANs.
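As a rough illustration of the kind of physically based shading such a lighting module performs, the sketch below combines generated albedo, surface normals, specular tint, and per-pixel light visibility under a single directional light. The function name, the Blinn-Phong-style specular lobe, and all parameters are illustrative assumptions; LumiGAN's actual lighting and self-supervised visibility formulation is described in the paper, not reproduced here.

```python
import numpy as np

def relight(albedo, normals, visibility, spec_tint, light_dir, light_rgb):
    """Minimal physically based shading sketch (hypothetical, not LumiGAN's exact module).

    albedo:      (H, W, 3) diffuse albedo in [0, 1]
    normals:     (H, W, 3) unit surface normals
    visibility:  (H, W)    fraction of light reaching each point, in [0, 1]
    spec_tint:   (H, W, 3) specular tint
    light_dir:   (3,)      unit direction toward the light
    light_rgb:   (3,)      light color / intensity
    """
    # Lambertian cosine term: n . l, clamped to the upper hemisphere.
    n_dot_l = np.clip(normals @ light_dir, 0.0, None)                 # (H, W)
    # Diffuse component; visibility attenuates it to produce cast shadows.
    diffuse = albedo * (n_dot_l * visibility)[..., None] * light_rgb
    # Simple Blinn-Phong specular lobe, assuming the view direction is +z.
    view_dir = np.array([0.0, 0.0, 1.0])
    half_vec = light_dir + view_dir
    half_vec = half_vec / np.linalg.norm(half_vec)
    n_dot_h = np.clip(normals @ half_vec, 0.0, None) ** 32            # shininess = 32
    specular = spec_tint * (n_dot_h * visibility)[..., None] * light_rgb
    return np.clip(diffuse + specular, 0.0, 1.0)
```

Relighting at inference time then amounts to re-evaluating this shading with a new `light_dir` / `light_rgb` (or, more generally, a new environment lighting representation) while the generated albedo, normals, visibility, and specular tint stay fixed.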
Related papers
- UltrAvatar: A Realistic Animatable 3D Avatar Diffusion Model with Authenticity Guided Textures [80.047065473698]
We propose a novel 3D avatar generation approach termed UltrAvatar, with enhanced geometric fidelity and superior-quality physically based rendering (PBR) textures free of unwanted lighting.
We demonstrate the effectiveness and robustness of the proposed method, outperforming the state-of-the-art methods by a large margin in the experiments.
arXiv Detail & Related papers (2024-01-20T01:55:17Z)
- FaceLit: Neural 3D Relightable Faces [28.0806453092185]
FaceLit is capable of generating a 3D face that can be rendered at various user-defined lighting conditions and views.
We show state-of-the-art photorealism among 3D aware GANs on FFHQ dataset achieving an FID score of 3.5.
arXiv Detail & Related papers (2023-03-27T17:59:10Z)
- Generative Neural Articulated Radiance Fields [104.9224190002448]
We develop a 3D GAN framework that learns to generate radiance fields of human bodies in a canonical pose and warp them using an explicit deformation field into a desired body pose or facial expression.
We show that our deformation-aware training procedure significantly improves the quality of generated bodies or faces when editing their poses or facial expressions.
arXiv Detail & Related papers (2022-06-28T22:49:42Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- MOST-GAN: 3D Morphable StyleGAN for Disentangled Face Image Manipulation [69.35523133292389]
We propose a framework that a priori models physical attributes of the face explicitly, thus providing disentanglement by design.
Our method, MOST-GAN, integrates the expressive power and photorealism of style-based GANs with the physical disentanglement and flexibility of nonlinear 3D morphable models.
It achieves photorealistic manipulation of portrait images with fully disentangled 3D control over their physical attributes, enabling extreme manipulation of lighting, facial expression, and pose variations up to full profile view.
arXiv Detail & Related papers (2021-11-01T15:53:36Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that models 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)