Towards Practical Capture of High-Fidelity Relightable Avatars
- URL: http://arxiv.org/abs/2309.04247v1
- Date: Fri, 8 Sep 2023 10:26:29 GMT
- Title: Towards Practical Capture of High-Fidelity Relightable Avatars
- Authors: Haotian Yang, Mingwu Zheng, Wanquan Feng, Haibin Huang, Yu-Kun Lai,
Pengfei Wan, Zhongyuan Wang, Chongyang Ma
- Abstract summary: TRAvatar is trained with dynamic image sequences captured in a Light Stage under varying lighting conditions.
It can predict the appearance in real time with a single forward pass, achieving high-quality relighting effects.
Our framework achieves superior performance for photorealistic avatar animation and relighting.
- Score: 60.25823986199208
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel framework, Tracking-free Relightable Avatar
(TRAvatar), for capturing and reconstructing high-fidelity 3D avatars. Compared
to previous methods, TRAvatar works in a more practical and efficient setting.
Specifically, TRAvatar is trained with dynamic image sequences captured in a
Light Stage under varying lighting conditions, enabling realistic relighting
and real-time animation for avatars in diverse scenes. Additionally, TRAvatar
allows for tracking-free avatar capture and obviates the need for accurate
surface tracking under varying illumination conditions. Our contributions are
two-fold: First, we propose a novel network architecture that explicitly builds
on and enforces the linear nature of lighting. Trained on simple group light
captures, TRAvatar can predict the appearance in real time with a single
forward pass, achieving high-quality relighting effects under
illuminations of arbitrary environment maps. Second, we jointly optimize the
facial geometry and relightable appearance from scratch based on image
sequences, where the tracking is implicitly learned. This tracking-free
approach brings robustness for establishing temporal correspondences between
frames under different lighting conditions. Extensive qualitative and
quantitative experiments demonstrate that our framework achieves superior
performance for photorealistic avatar animation and relighting.
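The linearity of light transport is the physical fact the architecture builds on: appearance under any illumination is a weighted sum of appearances under individual basis lights, so an arbitrary environment map can be handled by projecting it onto the light-stage sources. Below is a minimal, hypothetical sketch of this superposition (not the authors' implementation; it assumes per-light basis appearance images are available, whereas TRAvatar bakes the constraint into a single network pass):

```python
import numpy as np

def relight(basis_images: np.ndarray, env_weights: np.ndarray) -> np.ndarray:
    """Relight by linear superposition of per-light basis appearances.

    basis_images: (L, H, W, 3); basis_images[i] is the appearance under
        light-stage source (or light group) i at unit intensity.
    env_weights:  (L,); the target environment map projected onto the
        light-stage sources, i.e. a per-source intensity.

    Light transport is linear in the illumination, so the relit image
    is simply sum_i env_weights[i] * basis_images[i].
    """
    return np.tensordot(env_weights, basis_images, axes=1)

# Toy usage: 4 basis lights, a 2x2 image.
rng = np.random.default_rng(0)
basis = rng.random((4, 2, 2, 3))
weights = np.array([0.5, 0.1, 0.0, 0.4])
assert relight(basis, weights).shape == (2, 2, 3)
```

The same identity underlies one-light-at-a-time (OLAT) relighting; constraining the network to respect it is what lets a single forward pass generalize to arbitrary environment maps.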
Related papers
- URAvatar: Universal Relightable Gaussian Codec Avatars [42.25313535192927]
We present a new approach to creating photorealistic and relightable head avatars from a phone scan with unknown illumination.
The reconstructed avatars can be animated and relit in real time with the global illumination of diverse environments.
arXiv Detail & Related papers (2024-10-31T17:59:56Z)
- Surfel-based Gaussian Inverse Rendering for Fast and Relightable Dynamic Human Reconstruction from Monocular Video [41.677560631206184]
This paper introduces the Surfel-based Gaussian Inverse Avatar (SGIA) method, which enables efficient training and rendering for relightable dynamic human reconstruction.
SGIA advances previous Gaussian Avatar methods by comprehensively modeling Physically-Based Rendering (PBR) properties for clothed human avatars.
Our approach integrates pre-integration and image-based lighting for fast light calculations that surpass the performance of existing implicit-based techniques (a sketch of pre-integrated image-based lighting appears after this list).
arXiv Detail & Related papers (2024-07-21T16:34:03Z)
- Interactive Rendering of Relightable and Animatable Gaussian Avatars [37.73483372890271]
We propose a simple and efficient method to decouple body materials and lighting from multi-view or monocular avatar videos.
Our method can render higher-quality results at a faster speed on both synthetic and real datasets.
arXiv Detail & Related papers (2024-07-15T13:25:07Z)
- Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relit.
For training, our method solely requires a multi-view recording of the human under a known, but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z)
- Relightable Gaussian Codec Avatars [26.255161061306428]
We present Relightable Gaussian Codec Avatars, a method to build high-fidelity relightable head avatars that can be animated to generate novel expressions.
Our geometry model based on 3D Gaussians can capture 3D-consistent sub-millimeter details such as hair strands and pores on dynamic face sequences.
We improve the fidelity of eye reflections and enable explicit gaze control by introducing relightable explicit eye models.
arXiv Detail & Related papers (2023-12-06T18:59:58Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation [76.96499178502759]
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- Neural Radiance Transfer Fields for Relightable Novel-view Synthesis with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
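As referenced in the SGIA entry above, pre-integrated image-based lighting avoids a per-pixel integral over the environment map at render time by convolving the map with the BRDF lobe ahead of time. A minimal, hypothetical sketch of the standard diffuse case follows (generic pre-integration, not SGIA's actual pipeline; the equirectangular layout and Lambertian shading are assumptions):

```python
import numpy as np

def direction_grid(h: int, w: int):
    """Unit directions and per-texel solid angles for an equirectangular map."""
    theta = (np.arange(h) + 0.5) / h * np.pi            # polar angle
    phi = (np.arange(w) + 0.5) / w * 2.0 * np.pi        # azimuth
    t, p = np.meshgrid(theta, phi, indexing="ij")
    dirs = np.stack([np.sin(t) * np.cos(p),
                     np.sin(t) * np.sin(p),
                     np.cos(t)], axis=-1)               # (h, w, 3)
    d_omega = np.sin(t) * (np.pi / h) * (2.0 * np.pi / w)
    return dirs, d_omega

def irradiance(env: np.ndarray, normals: np.ndarray) -> np.ndarray:
    """Pre-integrate: convolve the environment map with a clamped cosine lobe.

    env: (h, w, 3) radiance; normals: (n, 3) unit query normals.
    Returns (n, 3) irradiance E, so diffuse shading is albedo * E / pi.
    """
    dirs, d_omega = direction_grid(*env.shape[:2])
    cos = np.clip(normals @ dirs.reshape(-1, 3).T, 0.0, None)     # (n, h*w)
    return cos @ (env.reshape(-1, 3) * d_omega.reshape(-1, 1))    # (n, 3)

# Sanity check: a constant white environment yields E ~= pi for any normal.
E = irradiance(np.ones((32, 64, 3)), np.array([[0.0, 0.0, 1.0]]))
assert np.allclose(E, np.pi, atol=2e-2)
```

In practice the irradiance is tabulated once into a small map indexed by the normal, which is what makes the per-frame light calculation fast.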