Learning Locally Editable Virtual Humans
- URL: http://arxiv.org/abs/2305.00121v1
- Date: Fri, 28 Apr 2023 23:06:17 GMT
- Title: Learning Locally Editable Virtual Humans
- Authors: Hsuan-I Ho, Lixin Xue, Jie Song, Otmar Hilliges
- Abstract summary: We propose a novel hybrid representation and end-to-end trainable network architecture to model fully editable neural avatars.
At the core of our work lies a representation that combines the modeling power of neural fields with the ease of use and inherent 3D consistency of skinned meshes.
Our method generates diverse detailed avatars and achieves better model fitting performance compared to state-of-the-art methods.
- Score: 37.95173373011365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel hybrid representation and end-to-end
trainable network architecture to model fully editable and customizable neural
avatars. At the core of our work lies a representation that combines the
modeling power of neural fields with the ease of use and inherent 3D
consistency of skinned meshes. To this end, we construct a trainable feature
codebook to store local geometry and texture features on the vertices of a
deformable body model, thus exploiting its consistent topology under
articulation. This representation is then employed in a generative auto-decoder
architecture that admits fitting to unseen scans and sampling of realistic
avatars with varied appearances and geometries. Furthermore, our representation
allows local editing by swapping local features between 3D assets. To verify
our method for avatar creation and editing, we contribute a new high-quality
dataset, dubbed CustomHumans, for training and evaluation. Our experiments
quantitatively and qualitatively show that our method generates diverse
detailed avatars and achieves better model fitting performance compared to
state-of-the-art methods. Our code and dataset are available at
https://custom-humans.github.io/.
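For intuition, below is a minimal, hedged PyTorch sketch of the representation the abstract describes: a trainable codebook of local features anchored to the vertices of a fixed-topology body mesh, with local editing done by swapping per-vertex features between two subjects. All names, the vertex count, and the feature dimension are illustrative assumptions, not the authors' released implementation (see https://custom-humans.github.io/ for that).

    import torch
    import torch.nn as nn

    NUM_VERTICES = 6890  # SMPL-like template size; an assumption for illustration
    FEAT_DIM = 32        # per-vertex feature size; illustrative choice

    class VertexFeatureCodebook(nn.Module):
        """Trainable per-vertex geometry/texture features for a set of subjects."""
        def __init__(self, num_subjects: int):
            super().__init__()
            # One learnable feature map per subject: (num_vertices, feat_dim).
            self.features = nn.Parameter(
                0.01 * torch.randn(num_subjects, NUM_VERTICES, FEAT_DIM))

        def forward(self, subject_id: int) -> torch.Tensor:
            # Look up the per-vertex features of the requested subject.
            return self.features[subject_id]

    def swap_local_features(src: torch.Tensor, dst: torch.Tensor,
                            region_mask: torch.Tensor) -> torch.Tensor:
        """Copy the features of the masked vertices (e.g. a clothing region) from src onto dst."""
        edited = dst.clone()
        edited[region_mask] = src[region_mask]
        return edited

    # Usage: transfer an (arbitrarily chosen) vertex region from subject 0 to subject 1.
    codebook = VertexFeatureCodebook(num_subjects=2)
    region = torch.zeros(NUM_VERTICES, dtype=torch.bool)
    region[:3000] = True  # placeholder region selection
    edited = swap_local_features(codebook(0), codebook(1), region)

In an auto-decoder setup like the one the abstract describes, such per-subject features would typically be optimized jointly with shared geometry and texture decoders, and fitting to an unseen scan would mean optimizing a new feature map while keeping the decoders fixed.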
Related papers
- NECA: Neural Customizable Human Avatar [36.69012172745299]
We introduce NECA, an approach capable of learning a versatile human representation from monocular or sparse-view videos.
The core of our approach is to represent humans in complementary dual spaces and predict disentangled neural fields of geometry, albedo, shadow, as well as external lighting.
arXiv Detail & Related papers (2024-03-15T14:23:06Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- UVA: Towards Unified Volumetric Avatar for View Synthesis, Pose rendering, Geometry and Texture Editing [83.0396740127043]
We propose a new approach named Unified Volumetric Avatar (UVA) that enables local editing of both geometry and texture.
UVA transforms each observation point to a canonical space using a skinning motion field and represents geometry and texture in separate neural fields (a minimal sketch of this canonical-space mapping appears after this list).
Experiments on multiple human avatars demonstrate that UVA supports both novel view synthesis and novel pose rendering.
arXiv Detail & Related papers (2023-04-14T07:39:49Z)
- One-shot Implicit Animatable Avatars with Model-based Priors [31.385051428938585]
ELICIT is a novel method for learning human-specific neural radiance fields from a single image.
ELICIT outperforms strong baseline methods for avatar creation when only a single image is available.
arXiv Detail & Related papers (2022-12-05T18:24:06Z)
- 3D Neural Sculpting (3DNS): Editing Neural Signed Distance Functions [34.39282814876276]
In this work, we propose the first method for efficient interactive editing of signed distance functions expressed through neural networks.
Inspired by 3D sculpting software for meshes, we use a brush-based framework that is intuitive and can in the future be used by sculptors and digital artists.
arXiv Detail & Related papers (2022-09-28T10:05:16Z)
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from a sparse set of multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- Learning Compositional Radiance Fields of Dynamic Human Heads [13.272666180264485]
We propose a novel compositional 3D representation that combines the best of previous methods to produce both higher-resolution and faster results.
Differentiable volume rendering is employed to compute photo-realistic novel views of the human head and upper body.
Our approach achieves state-of-the-art results for synthesizing novel views of dynamic human heads and the upper body.
arXiv Detail & Related papers (2020-12-17T22:19:27Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present a methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
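A minimal sketch of the canonical-space mapping referenced in the UVA entry above: posed observation points are warped back to a canonical pose with linear-blend-skinning transforms before the neural fields are queried. The function name, tensor shapes, and the einsum-based blending are assumptions for illustration, not code from any of the listed papers.

    import torch

    def to_canonical(points: torch.Tensor,           # (N, 3) points in posed space
                     bone_transforms: torch.Tensor,  # (J, 4, 4) posed-to-canonical, per joint
                     skin_weights: torch.Tensor      # (N, J) blend weights per point
                     ) -> torch.Tensor:
        """Map posed points to canonical space via (inverse) linear blend skinning."""
        # Blend the per-joint transforms with the skinning weights: (N, 4, 4).
        blended = torch.einsum('nj,jab->nab', skin_weights, bone_transforms)
        homo = torch.cat([points, torch.ones(points.shape[0], 1)], dim=-1)  # (N, 4)
        canonical = torch.einsum('nab,nb->na', blended, homo)               # (N, 4)
        return canonical[:, :3]

    # Sanity check: with a single joint and an identity transform, points are unchanged.
    pts = torch.rand(5, 3)
    assert torch.allclose(to_canonical(pts, torch.eye(4)[None], torch.ones(5, 1)), pts)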
This list is automatically generated from the titles and abstracts of the papers on this site.