CaPhy: Capturing Physical Properties for Animatable Human Avatars
- URL: http://arxiv.org/abs/2308.05925v1
- Date: Fri, 11 Aug 2023 04:01:13 GMT
- Title: CaPhy: Capturing Physical Properties for Animatable Human Avatars
- Authors: Zhaoqi Su and Liangxiao Hu and Siyou Lin and Hongwen Zhang and
Shengping Zhang and Justus Thies and Yebin Liu
- Abstract summary: CaPhy is a novel method for reconstructing animatable human avatars with realistic dynamic properties for clothing.
We aim to capture the geometric and physical properties of the clothing from real observations.
We combine unsupervised training with physics-based losses and 3D-supervised training using scanned data to reconstruct a dynamic model of clothing.
- Score: 44.95805736197971
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present CaPhy, a novel method for reconstructing animatable human avatars
with realistic dynamic properties for clothing. Specifically, we aim to
capture the geometric and physical properties of the clothing from real
observations. This allows us to apply novel poses to the human avatar with
physically correct deformations and wrinkles of the clothing. To this end, we
combine unsupervised training with physics-based losses and 3D-supervised
training using scanned data to reconstruct a dynamic model of clothing that is
physically realistic and conforms to the human scans. We also optimize the
physical parameters of the underlying physical model from the scans by
introducing gradient constraints of the physics-based losses. In contrast to
previous work on 3D avatar reconstruction, our method is able to generalize to
novel poses with realistic dynamic cloth deformations. Experiments on several
subjects demonstrate that our method can estimate the physical properties of
the garments, resulting in superior quantitative and qualitative results
compared with previous methods.
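To make the training objective concrete, below is a minimal sketch of the kind of loss combination the abstract describes: a differentiable physics-based cloth energy plus a 3D supervision term against scan points, with a physical parameter optimized jointly via gradients. The specific energy (a mass-spring stretching term plus gravity), the one-sided Chamfer term, and all names (physics_loss, supervised_loss, log_stiffness) are illustrative assumptions, not CaPhy's actual formulation.
```python
# Illustrative sketch only: the mass-spring stretching energy and the
# one-sided Chamfer term are assumed stand-ins for CaPhy's actual
# physics-based and 3D-supervised losses, which the paper defines precisely.
import torch

def physics_loss(verts, edges, rest_len, stiffness, mass=1e-2, g=9.81):
    """Simplified cloth energy: Hooke-style edge stretching plus gravity."""
    d = verts[edges[:, 0]] - verts[edges[:, 1]]           # per-edge vectors
    length = (d.pow(2).sum(-1) + 1e-9).sqrt()             # eps avoids NaN grads
    stretch = ((length - rest_len) ** 2).sum()            # spring energy
    gravity = (mass * g * verts[:, 2]).sum()              # potential (z = up)
    return stiffness * stretch + gravity

def supervised_loss(verts, scan_points):
    """One-sided Chamfer distance from predicted vertices to scan points."""
    d2 = torch.cdist(verts, scan_points) ** 2             # (V, P) squared dists
    return d2.min(dim=1).values.mean()

# Toy setup with random geometry; in the paper a pose-conditioned garment
# network would predict the vertices instead of a free tensor.
V, E, P = 200, 400, 500
verts = torch.randn(V, 3, requires_grad=True)             # stand-in for net output
edges = torch.randint(0, V, (E, 2))
rest_len = torch.rand(E) * 0.1
log_stiffness = torch.zeros((), requires_grad=True)       # learnable physical param
scan = torch.randn(P, 3)

opt = torch.optim.Adam([verts, log_stiffness], lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = physics_loss(verts, edges, rest_len, log_stiffness.exp()) \
           + supervised_loss(verts, scan)
    loss.backward()                                       # gradients flow to the
    opt.step()                                            # physical parameter too
```
The paper additionally constrains the physical parameters via gradient constraints of the physics-based losses; in this toy version they simply receive gradients through the combined objective.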
Related papers
- PICA: Physics-Integrated Clothed Avatar [30.277983921620663]
We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physics-accurate dynamics, even for loose clothing.
Our method achieves high-fidelity rendering of human bodies in complex and novel driving poses, significantly outperforming previous methods under the same settings.
arXiv Detail & Related papers (2024-07-07T10:23:21Z)
- PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations [62.14943588289551]
We introduce PhysAvatar, a novel framework that combines inverse rendering with inverse physics to automatically estimate the shape and appearance of a human.
PhysAvatar reconstructs avatars dressed in loose-fitting clothes under motions and lighting conditions not seen in the training data.
arXiv Detail & Related papers (2024-04-05T21:44:57Z)
- Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing [49.96406805006839]
We introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data.
Our key contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations.
arXiv Detail & Related papers (2022-06-30T17:58:20Z)
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- Deep Physics-aware Inference of Cloth Deformation for Monocular Human Performance Capture [84.73946704272113]
We show how integrating physics into the training process improves the learned cloth deformations and allows modeling clothing as a separate piece of geometry.
Our approach leads to a significant improvement over current state-of-the-art methods and is thus a clear step towards realistic monocular capture of the entire deforming surface of a clothed human.
arXiv Detail & Related papers (2020-11-25T16:46:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.