Learning Neural Parametric Head Models
- URL: http://arxiv.org/abs/2212.02761v2
- Date: Fri, 14 Apr 2023 17:47:42 GMT
- Title: Learning Neural Parametric Head Models
- Authors: Simon Giebenhain, Tobias Kirschstein, Markos Georgopoulos, Martin Rünz, Lourdes Agapito, Matthias Nießner
- Abstract summary: We propose a novel 3D morphable model for complete human heads based on hybrid neural fields.
We capture a person's identity in a canonical space as a signed distance field (SDF), and model facial expressions with a neural deformation field.
Our representation achieves high-fidelity local detail by introducing an ensemble of local fields centered around facial anchor points.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel 3D morphable model for complete human heads based on
hybrid neural fields. At the core of our model lies a neural parametric
representation that disentangles identity and expressions in disjoint latent
spaces. To this end, we capture a person's identity in a canonical space as a
signed distance field (SDF), and model facial expressions with a neural
deformation field. In addition, our representation achieves high-fidelity local
detail by introducing an ensemble of local fields centered around facial anchor
points. To facilitate generalization, we train our model on a newly-captured
dataset of over 5200 head scans from 255 different identities using a custom
high-end 3D scanning setup. Our dataset significantly exceeds comparable
existing datasets, both with respect to quality and completeness of geometry,
averaging around 3.5M mesh faces per scan. Finally, we demonstrate that our
approach outperforms state-of-the-art methods in terms of fitting error and
reconstruction quality.
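The pipeline the abstract describes — warp an observed point back into the canonical identity space with an expression-conditioned deformation field, then evaluate an ensemble of local identity SDFs blended around facial anchor points — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the toy `deformation_field` and per-anchor sphere SDF, the Gaussian blending weights, and all parameter shapes are assumptions standing in for the learned MLPs.

```python
import numpy as np

def deformation_field(x, z_exp):
    # Hypothetical expression-conditioned warp: maps an observed point
    # back to the canonical (neutral-expression) space. In the paper this
    # is a learned neural field; here it is a toy latent-dependent offset.
    return x + 0.1 * np.tanh(z_exp[:3])

def local_sdf(x, anchor, z_id):
    # Hypothetical local identity field anchored at a facial landmark:
    # signed distance to a sphere whose radius depends on the identity
    # latent. The paper uses a learned MLP per anchor instead.
    radius = 0.5 + 0.1 * np.tanh(z_id[0])
    return np.linalg.norm(x - anchor) - radius

def head_sdf(x, anchors, z_id, z_exp, sigma=0.3):
    # 1) Undo the expression: warp x into the canonical identity space.
    x_can = deformation_field(x, z_exp)
    # 2) Evaluate the ensemble of local fields and blend them with
    #    normalized Gaussian weights centered at the anchor points.
    dists = np.array([local_sdf(x_can, a, z_id) for a in anchors])
    w = np.exp(-np.sum((anchors - x_can) ** 2, axis=1) / (2 * sigma ** 2))
    w /= w.sum()
    return float(np.dot(w, dists))
```

Points with a negative blended value lie inside the surface; the zero level set is the head geometry for the given identity and expression latents. The key design idea carried over from the abstract is the disentanglement: `z_id` only enters the canonical local fields, while `z_exp` only enters the deformation.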
Related papers
- GeoGen: Geometry-Aware Generative Modeling via Signed Distance Functions [22.077366472693395]
We introduce a new generative approach for synthesizing 3D geometry and images from single-view collections.
Methods that employ volumetric rendering with neural radiance fields inherit a key limitation: the generated geometry is noisy and unconstrained.
We propose GeoGen, a new SDF-based 3D generative model trained in an end-to-end manner.
arXiv Detail & Related papers (2024-06-06T17:00:10Z) - ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
Results achieve an unprecedented level of identity-consistent and high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z) - Pushing Auto-regressive Models for 3D Shape Generation at Capacity and Scalability [118.26563926533517]
Auto-regressive models have achieved impressive results in 2D image generation by modeling joint distributions in grid space.
We extend auto-regressive models to 3D domains, and seek a stronger ability of 3D shape generation by improving auto-regressive models at capacity and scalability simultaneously.
arXiv Detail & Related papers (2024-02-19T15:33:09Z) - Deformable Model-Driven Neural Rendering for High-Fidelity 3D Reconstruction of Human Heads Under Low-View Settings [20.07788905506271]
Reconstructing 3D human heads in low-view settings presents technical challenges.
We propose geometry decomposition and adopt a two-stage, coarse-to-fine training strategy.
Our method outperforms existing neural rendering approaches in terms of reconstruction accuracy and novel view synthesis under low-view settings.
arXiv Detail & Related papers (2023-03-24T08:32:00Z) - Neural Poisson: Indicator Functions for Neural Fields [25.41908065938424]
Implicit neural fields that generate signed distance field (SDF) representations of 3D shapes have shown remarkable progress.
We introduce a new paradigm for neural field representations of 3D scenes.
We show that our approach demonstrates state-of-the-art reconstruction performance on both synthetic and real scanned 3D scene data.
arXiv Detail & Related papers (2022-11-25T17:28:22Z) - Neural Capture of Animatable 3D Human from Monocular Video [38.974181971541846]
We present a novel paradigm of building an animatable 3D human representation from a monocular video input, such that it can be rendered in any unseen poses and views.
Our method is based on a dynamic Neural Radiance Field (NeRF) rigged by a mesh-based parametric 3D human model serving as a geometry proxy.
arXiv Detail & Related papers (2022-08-18T09:20:48Z) - Learned Vertex Descent: A New Direction for 3D Human Model Fitting [64.04726230507258]
We propose a novel optimization-based paradigm for 3D human model fitting on images and scans.
Our approach is able to capture the underlying body of clothed people with very different body shapes, achieving a significant improvement compared to state-of-the-art.
LVD is also applicable to 3D model fitting of humans and hands, for which we show a significant improvement to the SOTA with a much simpler and faster method.
arXiv Detail & Related papers (2022-05-12T17:55:51Z) - Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis [4.597864989500202]
We propose a new method for reconstructing implicit 3D human models from sparse multi-view RGB videos.
Our method defines the neural scene representation on the mesh surface points and signed distances from the surface of a human body mesh.
arXiv Detail & Related papers (2022-01-05T16:25:32Z) - Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z) - Shape My Face: Registering 3D Face Scans by Surface-to-Surface Translation [75.59415852802958]
Shape-My-Face (SMF) is a powerful encoder-decoder architecture based on an improved point cloud encoder, a novel visual attention mechanism, graph convolutional decoders with skip connections, and a specialized mouth model.
Our model provides topologically-sound meshes with minimal supervision, offers faster training time, has orders of magnitude fewer trainable parameters, is more robust to noise, and can generalize to previously unseen datasets.
arXiv Detail & Related papers (2020-12-16T20:02:36Z) - Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.