Real-time Deep Dynamic Characters
- URL: http://arxiv.org/abs/2105.01794v1
- Date: Tue, 4 May 2021 23:28:55 GMT
- Title: Real-time Deep Dynamic Characters
- Authors: Marc Habermann, Lingjie Liu, Weipeng Xu, Michael Zollhoefer, Gerard
Pons-Moll, Christian Theobalt
- Abstract summary: We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, and video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches.
- Score: 95.5592405831368
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a deep videorealistic 3D human character model displaying highly
realistic shape, motion, and dynamic appearance learned in a new weakly
supervised way from multi-view imagery. In contrast to previous work, our
controllable 3D character displays dynamics, e.g., the swing of the skirt,
dependent on skeletal body motion in an efficient data-driven way, without
requiring complex physics simulation. Our character model also features a
learned dynamic texture model that accounts for photo-realistic
motion-dependent appearance details, as well as view-dependent lighting
effects. During training, we do not need to resort to difficult dynamic 3D
capture of the human; instead, we can train our model entirely from multi-view
video in a weakly supervised manner. To this end, we propose a parametric and
differentiable character representation which allows us to model coarse and
fine dynamic deformations, e.g., garment wrinkles, as explicit space-time
coherent mesh geometry that is augmented with high-quality dynamic textures
dependent on motion and viewpoint. As input to the model, only an arbitrary 3D
skeleton motion is required, making it directly compatible with the established
3D animation pipeline. We use a novel graph convolutional network architecture
to enable motion-dependent deformation learning of body and clothing, including
dynamics, and a neural generative dynamic texture model creates corresponding
dynamic texture maps. We show that by merely providing new skeletal motions,
our model creates motion-dependent surface deformations, physically plausible
dynamic clothing deformations, as well as video-realistic surface textures at a
much higher level of detail than previous state-of-the-art approaches, and even
in real-time.
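To make the deformation component described above concrete, the sketch below shows one plausible way a graph convolutional network could map per-vertex motion features on a template mesh to 3D deformation offsets. This is a minimal illustrative assumption in PyTorch, not the authors' architecture: the layer design, feature dimensions (`motion_dim`, `hidden_dim`), the adjacency handling, and the omitted dynamic-texture branch are all hypothetical.

```python
# Hedged sketch: motion-dependent per-vertex deformation via graph convolutions.
# Not the paper's implementation; shapes and layer choices are assumptions.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One mesh graph-convolution layer: mix each vertex with the mean of its neighbors."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        # x:   (V, in_dim) per-vertex features
        # adj: (V, V) row-normalized mesh adjacency matrix
        neighbors = adj @ x
        return torch.relu(self.linear(torch.cat([x, neighbors], dim=-1)))


class DeformationNet(nn.Module):
    """Maps motion features attached to template vertices to per-vertex 3D offsets."""

    def __init__(self, motion_dim, hidden_dim=64, num_layers=3):
        super().__init__()
        dims = [motion_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            [GraphConv(dims[i], dims[i + 1]) for i in range(num_layers)]
        )
        self.head = nn.Linear(hidden_dim, 3)  # (dx, dy, dz) per vertex

    def forward(self, motion_feat, adj):
        x = motion_feat                        # (V, motion_dim), e.g. local joint angles/velocities
        for layer in self.layers:
            x = layer(x, adj)
        return self.head(x)                    # (V, 3) offsets added to the skinned template mesh


# Usage with dummy data: 500 vertices, 32-D motion features per vertex.
if __name__ == "__main__":
    V, D = 500, 32
    adj = torch.eye(V)                         # placeholder for a real row-normalized mesh adjacency
    offsets = DeformationNet(motion_dim=D)(torch.randn(V, D), adj)
    print(offsets.shape)                       # torch.Size([500, 3])
```

In a pipeline like the one described in the abstract, such offsets would be added on top of the skinned, posed template mesh before texturing; the identity adjacency above is only a stand-in for a real mesh graph.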
Related papers
- SurMo: Surface-based 4D Motion Modeling for Dynamic Human Rendering [45.51684124904457]
We propose a new 4D motion paradigm, SurMo, that models the temporal dynamics and human appearances in a unified framework.
Surface-based motion encoding models 4D human motions with an efficient, compact surface-based triplane (a generic triplane lookup is sketched after this list).
Physical motion decoding is designed to encourage physical motion learning.
4D appearance modeling renders the motion triplanes into images via efficient surface-conditioned decoding.
arXiv Detail & Related papers (2024-04-01T16:34:27Z) - Dynamic Appearance Modeling of Clothed 3D Human Avatars using a Single
Camera [8.308263758475938]
We introduce a method for high-quality modeling of clothed 3D human avatars using a video of a person with dynamic movements.
For explicit modeling, a neural network learns to generate point-wise shape residuals and appearance features of a 3D body model.
For implicit modeling, an implicit network combines the appearance and 3D motion features to decode high-fidelity clothed 3D human avatars.
arXiv Detail & Related papers (2023-12-28T06:04:39Z) - GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians [51.46168990249278]
We present an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.
GaussianAvatar is validated on both a public dataset and our collected dataset.
arXiv Detail & Related papers (2023-12-04T18:55:45Z) - HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and
Dynamic Details [66.74088288846491]
HiFace aims at high-fidelity 3D face reconstruction with dynamic and static details.
We exploit several loss functions to jointly learn the coarse shape and fine details with both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-03-20T16:07:02Z) - Learning Motion-Dependent Appearance for High-Fidelity Rendering of
Dynamic Humans from a Single Camera [49.357174195542854]
A key challenge of learning the dynamics of the appearance lies in the requirement of a prohibitively large amount of observations.
We show that our method can generate a temporally coherent video of dynamic humans for unseen body poses and novel views given a single view video.
arXiv Detail & Related papers (2022-03-24T00:22:03Z) - 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z) - Dynamic Neural Garments [45.833166320896716]
We present a solution that takes in body joint motion to directly produce realistic dynamic garment image sequences.
Specifically, given the target joint motion sequence of an avatar, we propose dynamic neural garments to jointly simulate and render plausible dynamic garment appearance.
arXiv Detail & Related papers (2021-02-23T17:21:21Z) - Contact and Human Dynamics from Monocular Video [73.47466545178396]
Existing deep models predict 2D and 3D kinematic poses from video that are approximately accurate, but contain visible errors.
We present a physics-based method for inferring 3D human motion from video sequences that takes initial 2D and 3D pose estimates as input.
arXiv Detail & Related papers (2020-07-22T21:09:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.