Dynamic Appearance Modeling of Clothed 3D Human Avatars using a Single
Camera
- URL: http://arxiv.org/abs/2312.16842v1
- Date: Thu, 28 Dec 2023 06:04:39 GMT
- Title: Dynamic Appearance Modeling of Clothed 3D Human Avatars using a Single
Camera
- Authors: Hansol Lee, Junuk Cha, Yunhoe Ku, Jae Shin Yoon and Seungryul Baek
- Abstract summary: We introduce a method for high-quality modeling of clothed 3D human avatars using a video of a person with dynamic movements.
For explicit modeling, a neural network learns to generate point-wise shape residuals and appearance features of a 3D body model.
For implicit modeling, an implicit network combines the appearance and 3D motion features to decode high-fidelity clothed 3D human avatars.
- Score: 8.308263758475938
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The appearance of a human in clothing is driven not only by the pose but also
by its temporal context, i.e., motion. However, such context has been largely
neglected by existing monocular human modeling methods, whose neural networks
often struggle to learn from a video of a person with large dynamics due to
motion ambiguity: even for the same pose, there exist numerous geometric
configurations of clothes that depend on the context of motion. In
this paper, we introduce a method for high-quality modeling of clothed 3D human
avatars using a video of a person with dynamic movements. The main challenge
comes from the lack of 3D ground truth data of geometry and its temporal
correspondences. We address this challenge by introducing a novel compositional
human modeling framework that takes advantage of both explicit and implicit
human modeling. For explicit modeling, a neural network learns to generate
point-wise shape residuals and appearance features of a 3D body model by
comparing its 2D renderings with the original images. This explicit model
allows for the reconstruction of discriminative 3D motion features from UV
space by encoding their temporal correspondences. For implicit modeling, an
implicit network combines the appearance and 3D motion features to decode
high-fidelity clothed 3D human avatars with motion-dependent geometry and
texture. The experiments show that our method can generate a large variation of
secondary motion in a physically plausible way.
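The abstract describes a two-stage compositional pipeline: an explicit network predicts per-vertex shape residuals and appearance features on a parametric body model, and an implicit network fuses those appearance features with 3D motion features to decode motion-dependent geometry and texture. The sketch below is not the authors' implementation; it is a minimal PyTorch illustration of that structure, and the module names, feature dimensions, SMPL-style vertex count, and pose-vector size are all assumptions made for the example (the paper gathers appearance and motion features from UV space, which is replaced here by dummy per-query tensors).

```python
# Minimal sketch (not the authors' code) of the compositional explicit + implicit
# framework described in the abstract. All names and dimensions are illustrative.
import torch
import torch.nn as nn

NUM_VERTS = 6890   # assumed SMPL-style body model resolution
FEAT_DIM = 32      # assumed per-vertex appearance feature size
MOTION_DIM = 64    # assumed 3D motion feature size (encoded from UV space in the paper)

class ExplicitModel(nn.Module):
    """Predicts point-wise shape residuals and appearance features of a 3D body model."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3 + 72, 256), nn.ReLU(),   # per-vertex position + pose params (assumed sizes)
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.residual_head = nn.Linear(256, 3)        # per-vertex 3D shape residual
        self.feature_head = nn.Linear(256, FEAT_DIM)  # per-vertex appearance feature

    def forward(self, verts, pose):
        # verts: (B, V, 3); pose: (B, 72), broadcast to every vertex
        pose_tiled = pose[:, None, :].expand(-1, verts.shape[1], -1)
        h = self.backbone(torch.cat([verts, pose_tiled], dim=-1))
        return verts + self.residual_head(h), self.feature_head(h)

class ImplicitModel(nn.Module):
    """Decodes motion-dependent geometry (SDF) and texture (RGB) at query points."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + FEAT_DIM + MOTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1 + 3),   # signed distance + RGB
        )

    def forward(self, query_pts, appearance_feat, motion_feat):
        out = self.mlp(torch.cat([query_pts, appearance_feat, motion_feat], dim=-1))
        return out[..., :1], out[..., 1:]  # (sdf, rgb)

if __name__ == "__main__":
    B, Q = 2, 1024
    explicit, implicit = ExplicitModel(), ImplicitModel()
    verts = torch.randn(B, NUM_VERTS, 3)
    pose = torch.randn(B, 72)
    deformed_verts, app_feat = explicit(verts, pose)
    # Dummy per-query appearance/motion features stand in for the UV-space encoding.
    q = torch.randn(B, Q, 3)
    app_q = torch.randn(B, Q, FEAT_DIM)
    motion_q = torch.randn(B, Q, MOTION_DIM)
    sdf, rgb = implicit(q, app_q, motion_q)
    print(deformed_verts.shape, sdf.shape, rgb.shape)
```

In this reading, the explicit stage supplies temporally corresponded surface features, and the implicit stage conditions the final geometry and texture on both appearance and motion, which is what lets the reconstruction vary with motion rather than pose alone.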
Related papers
- A Survey on 3D Human Avatar Modeling -- From Reconstruction to Generation [20.32107267981782]
3D human modeling, lying at the core of many real-world applications, has attracted significant attention.
This survey aims to provide a comprehensive overview of emerging techniques for 3D human avatar modeling.
arXiv Detail & Related papers (2024-06-06T16:58:00Z) - AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z) - AvatarGen: A 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is an unsupervised method for generating 3D-aware clothed humans with various appearances and controllable geometries.
Our method can generate animatable 3D human avatars with high-quality appearance and geometry modeling.
It is competent for many applications, e.g., single-view reconstruction, re-animation, and text-guided synthesis/editing.
arXiv Detail & Related papers (2022-11-26T15:15:45Z) - AvatarGen: a 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space.
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z) - Learning Motion-Dependent Appearance for High-Fidelity Rendering of
Dynamic Humans from a Single Camera [49.357174195542854]
A key challenge in learning the dynamics of appearance is the prohibitively large number of observations required.
We show that our method can generate a temporally coherent video of dynamic humans for unseen body poses and novel views given a single view video.
arXiv Detail & Related papers (2022-03-24T00:22:03Z) - The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist.
We show that a 3D representation can capture varied topology at high resolution and that it can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z) - Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.