PointAvatar: Deformable Point-based Head Avatars from Videos
- URL: http://arxiv.org/abs/2212.08377v1
- Date: Fri, 16 Dec 2022 10:05:31 GMT
- Title: PointAvatar: Deformable Point-based Head Avatars from Videos
- Authors: Yufeng Zheng, Wang Yifan, Gordon Wetzstein, Michael J. Black, Otmar Hilliges
- Abstract summary: PointAvatar is a deformable point-based representation that disentangles the source color into intrinsic albedo and normal-dependent shading.
We show that our method is able to generate animatable 3D avatars using monocular videos from multiple sources.
- Score: 103.43941945044294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to create realistic, animatable and relightable head avatars from
casual video sequences would open up wide-ranging applications in communication
and entertainment. Current methods either build on explicit 3D morphable meshes
(3DMM) or exploit neural implicit representations. The former are limited by
fixed topology, while the latter are non-trivial to deform and inefficient to
render. Furthermore, existing approaches entangle lighting in the color
estimation, thus they are limited in re-rendering the avatar in new
environments. In contrast, we propose PointAvatar, a deformable point-based
representation that disentangles the source color into intrinsic albedo and
normal-dependent shading. We demonstrate that PointAvatar bridges the gap
between existing mesh- and implicit representations, combining high-quality
geometry and appearance with topological flexibility, ease of deformation and
rendering efficiency. We show that our method is able to generate animatable 3D
avatars using monocular videos from multiple sources including hand-held
smartphones, laptop webcams and internet videos, achieving state-of-the-art
quality in challenging cases where previous methods fail, e.g., thin hair
strands, while being significantly more efficient in training than competing
methods.
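The disentanglement described in the abstract factors each point's observed color into an intrinsic albedo and a normal-dependent shading term. Below is a minimal PyTorch sketch of that factorization; the shading network's size and all names are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ShadingMLP(nn.Module):
    """Predicts a scalar shading value from a point normal (hypothetical sizes)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # shading confined to [0, 1]
        )

    def forward(self, normals: torch.Tensor) -> torch.Tensor:
        return self.net(normals)

def shade_points(albedo: torch.Tensor, normals: torch.Tensor,
                 shading_mlp: ShadingMLP) -> torch.Tensor:
    """Compose final point colors as color = albedo * shading(normal)."""
    shading = shading_mlp(normals)  # (N, 1)
    return albedo * shading         # (N, 3): shading broadcast over RGB

# Toy usage: 1000 points with random albedo and unit normals.
albedo = torch.rand(1000, 3)
normals = nn.functional.normalize(torch.randn(1000, 3), dim=-1)
colors = shade_points(albedo, normals, ShadingMLP())
```

Because the albedo is stored separately from the shading term, relighting amounts to swapping the shading while keeping the learned albedo fixed, which is what makes the avatar re-renderable in new environments.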
Related papers
- MeshAvatar: Learning High-quality Triangular Human Avatars from Multi-view Videos [41.45299653187577]
We present a novel pipeline for learning high-quality triangular human avatars from multi-view videos.
Our method represents the avatar with an explicit triangular mesh extracted from an implicit SDF field (a minimal extraction sketch follows this entry).
We incorporate physics-based rendering to accurately decompose geometry and texture.
arXiv Detail & Related papers (2024-07-11T11:37:51Z)
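The MeshAvatar entry above extracts an explicit triangular mesh from an implicit SDF field. A hedged sketch of that generic step follows, using scikit-image's marching cubes on a toy analytic sphere SDF as a stand-in for a learned network.

```python
import numpy as np
from skimage import measure

def sphere_sdf(pts: np.ndarray, radius: float = 0.5) -> np.ndarray:
    """Signed distance to an origin-centred sphere (toy stand-in for a learned SDF)."""
    return np.linalg.norm(pts, axis=-1) - radius

# Sample the SDF on a regular grid over [-1, 1]^3.
res = 64
lin = np.linspace(-1.0, 1.0, res)
grid = np.stack(np.meshgrid(lin, lin, lin, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid.reshape(-1, 3)).reshape(res, res, res)

# Extract the zero level set as vertices and triangle faces.
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0)
print(verts.shape, faces.shape)  # (V, 3) vertices, (F, 3) triangle indices
```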
- HR Human: Modeling Human Avatars with Triangular Mesh and High-Resolution Textures from Videos [52.23323966700072]
We present a framework for acquiring human avatars with high-resolution, physically based material textures and a triangular mesh from monocular video.
Our method introduces an information-fusion strategy that combines cues from the monocular video to synthesize virtual multi-view images.
Experiments show that our approach outperforms previous representations in fidelity, and the explicit mesh-and-texture output supports deployment on common triangular-mesh renderers.
arXiv Detail & Related papers (2024-05-18T11:49:09Z)
- PSAvatar: A Point-based Shape Model for Real-Time Head Avatar Animation with 3D Gaussian Splatting [17.78639236586134]
PSAvatar is a novel framework for animatable head avatar creation.
It employs 3D Gaussians for fine-detail representation and high-fidelity rendering (a minimal parameter sketch follows this entry).
We show that PSAvatar can reconstruct high-fidelity head avatars for a variety of subjects, and that the avatars can be animated in real time.
arXiv Detail & Related papers (2024-01-23T16:40:47Z)
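PSAvatar (above) represents fine detail with 3D Gaussians. For reference, a 3D Gaussian splatting primitive typically carries a mean, a rotation-plus-scale factorization of its covariance, an opacity, and a color. The sketch below shows these parameters and the standard covariance assembly; names and shapes are assumptions, not PSAvatar's code.

```python
import torch

# Hypothetical per-primitive parameters for N Gaussians.
N = 1000
means = torch.randn(N, 3)            # positions
log_scales = torch.zeros(N, 3)       # per-axis scales, stored in log-space
quats = torch.randn(N, 4)            # rotations as (w, x, y, z) quaternions
opacity = torch.sigmoid(torch.zeros(N, 1))
colors = torch.rand(N, 3)

def quat_to_rotmat(q: torch.Tensor) -> torch.Tensor:
    """Convert quaternions (w, x, y, z) to 3x3 rotation matrices."""
    q = torch.nn.functional.normalize(q, dim=-1)
    w, x, y, z = q.unbind(-1)
    return torch.stack([
        1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y),
        2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x),
        2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y),
    ], dim=-1).reshape(-1, 3, 3)

# Covariance of each Gaussian: Sigma = R S S^T R^T.
R = quat_to_rotmat(quats)                   # (N, 3, 3)
S = torch.diag_embed(log_scales.exp())      # (N, 3, 3)
sigma = R @ S @ S.transpose(-1, -2) @ R.transpose(-1, -2)
```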
- UltrAvatar: A Realistic Animatable 3D Avatar Diffusion Model with Authenticity Guided Textures [80.047065473698]
We propose UltrAvatar, a 3D avatar generation approach with enhanced geometric fidelity and superior-quality physically based rendering (PBR) textures free of unwanted lighting.
Experiments demonstrate the effectiveness and robustness of the proposed method, which outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2024-01-20T01:55:17Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized, animatable 3D head avatars from videos; the resulting avatars are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network (a generic coordinate-MLP sketch follows this entry).
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z)
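HQ3DAvatar's entry above learns a canonical space via an implicit function parameterized by a neural network. The following is a generic sketch of such a coordinate MLP, mapping canonical 3D points to density and color; the architecture and all names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CanonicalField(nn.Module):
    """Implicit function over canonical space: xyz -> (density, RGB)."""
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density + 3 color channels
        )

    def forward(self, xyz: torch.Tensor):
        out = self.net(xyz)
        density = torch.relu(out[..., :1])    # keep density non-negative
        color = torch.sigmoid(out[..., 1:])   # RGB in [0, 1]
        return density, color

# Query a batch of canonical-space points.
field = CanonicalField()
density, color = field(torch.rand(8, 3))
```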
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)