Animatable Virtual Humans: Learning pose-dependent human representations in UV space for interactive performance synthesis
- URL: http://arxiv.org/abs/2310.03615v1
- Date: Thu, 5 Oct 2023 15:49:44 GMT
- Title: Animatable Virtual Humans: Learning pose-dependent human representations in UV space for interactive performance synthesis
- Authors: Wieland Morgenstern, Milena T. Bagdasarian, Anna Hilsmann, Peter Eisert
- Abstract summary: We learn pose-dependent appearance and geometry from highly accurate dynamic mesh sequences.
We encode both pose-dependent appearance and geometry in the consistent UV space of the SMPL model.
- Score: 11.604386285817302
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a novel representation of virtual humans for highly realistic
real-time animation and rendering in 3D applications. We learn pose-dependent
appearance and geometry from highly accurate dynamic mesh sequences obtained
from state-of-the-art multi-view video reconstruction. Learning pose-dependent
appearance and geometry from mesh sequences poses significant challenges, as it
requires the network to learn the intricate shape and articulated motion of a
human body. However, statistical body models like SMPL provide valuable a priori knowledge, which we leverage to constrain the dimension of the search space, enabling more efficient and targeted learning, and to define pose dependency. Instead of directly learning absolute pose-dependent geometry,
we learn the difference between the observed geometry and the fitted SMPL
model. This allows us to encode both pose-dependent appearance and geometry in
the consistent UV space of the SMPL model. This approach not only ensures a
high level of realism but also facilitates streamlined processing and rendering
of virtual humans in real-time scenarios.
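To make the residual encoding concrete: the abstract suggests storing the difference between the reconstructed geometry and the fitted SMPL body as a displacement map in SMPL's shared UV atlas. Below is a minimal NumPy sketch of that idea, assuming the observed mesh already shares SMPL's topology and per-vertex UV coordinates; the function name, nearest-texel splatting, and texture resolution are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def encode_residual_uv(observed_verts, smpl_verts, uv_coords, res=256):
    """Bake pose-dependent geometry as a UV-space displacement map.

    observed_verts : (N, 3) reconstructed mesh vertices in SMPL topology
    smpl_verts     : (N, 3) vertices of the fitted SMPL model (same topology)
    uv_coords      : (N, 2) per-vertex UV coordinates in [0, 1] from the SMPL template
    res            : side length of the square output texture (assumed value)
    """
    # Store the *difference* to the fitted body model rather than absolute
    # geometry: the residual is small and varies smoothly with pose.
    residual = observed_verts - smpl_verts  # (N, 3)

    # Splat per-vertex residuals into the shared UV atlas. A real pipeline
    # would rasterize per triangle; nearest-texel averaging keeps this short.
    tex = np.zeros((res, res, 3), dtype=np.float32)
    hits = np.zeros((res, res, 1), dtype=np.float32)
    px = np.clip((uv_coords * (res - 1)).round().astype(int), 0, res - 1)
    for (u, v), r in zip(px, residual):
        tex[v, u] += r
        hits[v, u] += 1.0
    return tex / np.maximum(hits, 1.0)  # averaged displacement per texel
```

Because appearance and these displacement maps live in the same UV parameterization, a single pose-conditioned image-to-image network can plausibly regress both, which is what makes the real-time rendering claim workable.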
Related papers
- PGAHum: Prior-Guided Geometry and Appearance Learning for High-Fidelity Animatable Human Reconstruction [9.231326291897817]
We introduce PGAHum, a prior-guided geometry and appearance learning framework for high-fidelity animatable human reconstruction.
We thoroughly exploit 3D human priors in three key modules of PGAHum to achieve high-quality geometry reconstruction with intricate details and photorealistic view synthesis on unseen poses.
arXiv Detail & Related papers (2024-04-22T04:22:30Z)
- TriHuman: A Real-time and Controllable Tri-plane Representation for Detailed Human Geometry and Appearance Synthesis [76.73338151115253]
TriHuman is a novel human-tailored, deformable, and efficient tri-plane representation.
We non-rigidly warp global ray samples into our undeformed tri-plane texture space.
We show how such a tri-plane feature representation can be conditioned on the skeletal motion to account for dynamic appearance and geometry changes (see the tri-plane lookup sketch after this list).
arXiv Detail & Related papers (2023-12-08T16:40:38Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Learning Motion-Dependent Appearance for High-Fidelity Rendering of Dynamic Humans from a Single Camera [49.357174195542854]
A key challenge of learning the dynamics of the appearance is the prohibitively large number of observations required.
We show that our method can generate a temporally coherent video of dynamic humans for unseen body poses and novel views given a single view video.
arXiv Detail & Related papers (2022-03-24T00:22:03Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
arXiv Detail & Related papers (2021-01-17T02:16:56Z)
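Several of the entries above, TriHuman in particular, query tri-plane feature grids with 3D ray samples warped into an undeformed texture space. The papers' exact architectures are not given here; the following is a generic, assumption-laden sketch of a tri-plane lookup, where the grid shapes, nearest-neighbor sampling, and the pose-conditioning hook are all illustrative rather than any paper's actual design.

```python
import numpy as np

def sample_triplane(plane_xy, plane_xz, plane_yz, points, pose_vec):
    """Gather tri-plane features for 3D query points in [-1, 1]^3.

    plane_* : (R, R, C) learned feature grids on the three axis-aligned planes
    points  : (M, 3) query positions, e.g. ray samples warped into an
              undeformed texture space
    pose_vec: (P,) skeletal pose parameters appended as conditioning (assumed)
    Returns (M, 3*C + P): per-point features ready for a small MLP decoder.
    """
    R = plane_xy.shape[0]
    # Map [-1, 1] coordinates to grid indices (nearest neighbor; real systems
    # would use bilinear interpolation).
    idx = np.clip(((points + 1.0) * 0.5 * (R - 1)).astype(int), 0, R - 1)
    f_xy = plane_xy[idx[:, 1], idx[:, 0]]  # drop z: project onto the XY plane
    f_xz = plane_xz[idx[:, 2], idx[:, 0]]  # drop y: project onto the XZ plane
    f_yz = plane_yz[idx[:, 2], idx[:, 1]]  # drop x: project onto the YZ plane
    pose = np.broadcast_to(pose_vec, (points.shape[0], pose_vec.shape[0]))
    return np.concatenate([f_xy, f_xz, f_yz, pose], axis=-1)
```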