PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations
- URL: http://arxiv.org/abs/2404.04421v2
- Date: Tue, 9 Apr 2024 06:23:35 GMT
- Title: PhysAvatar: Learning the Physics of Dressed 3D Avatars from Visual Observations
- Authors: Yang Zheng, Qingqing Zhao, Guandao Yang, Wang Yifan, Donglai Xiang, Florian Dubost, Dmitry Lagun, Thabo Beeler, Federico Tombari, Leonidas Guibas, Gordon Wetzstein
- Abstract summary: We introduce PhysAvatar, a novel framework that combines inverse rendering with inverse physics to automatically estimate the shape and appearance of a human.
PhysAvatar reconstructs avatars dressed in loose-fitting clothes under motions and lighting conditions not seen in the training data.
- Score: 62.14943588289551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modeling and rendering photorealistic avatars is of crucial importance in many applications. Existing methods that build a 3D avatar from visual observations, however, struggle to reconstruct clothed humans. We introduce PhysAvatar, a novel framework that combines inverse rendering with inverse physics to automatically estimate the shape and appearance of a human from multi-view video data along with the physical parameters of the fabric of their clothes. For this purpose, we adopt a mesh-aligned 4D Gaussian technique for spatio-temporal mesh tracking as well as a physically based inverse renderer to estimate the intrinsic material properties. PhysAvatar integrates a physics simulator to estimate the physical parameters of the garments using gradient-based optimization in a principled manner. These novel capabilities enable PhysAvatar to create high-quality novel-view renderings of avatars dressed in loose-fitting clothes under motions and lighting conditions not seen in the training data. This marks a significant advancement towards modeling photorealistic digital humans using physically based inverse rendering with physics in the loop. Our project website is at: https://qingqing-zhao.github.io/PhysAvatar
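The "physics in the loop" idea described in the abstract, estimating the physical parameters of the garments by gradient-based optimization through a simulator, can be illustrated with a minimal, hypothetical sketch. The PyTorch example below uses a toy mass-spring chain in place of the paper's cloth simulator and a synthetic target trajectory in place of the tracked 4D mesh; the parameter names, loss, and optimizer settings are illustrative assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch: estimating garment physical parameters
# (stiffness, damping) by backpropagating through a differentiable
# simulator. A toy mass-spring chain stands in for the paper's cloth
# simulator; all names and values are illustrative.
import torch


def simulate(x0, v0, springs, rest_len, free, stiffness, damping,
             steps=30, dt=1e-2):
    """Roll out a toy mass-spring system (explicit Euler, unit masses)."""
    gravity = torch.tensor([0.0, -9.8, 0.0])
    x, v = x0, v0
    traj = []
    for _ in range(steps):
        i, j = springs[:, 0], springs[:, 1]
        d = x[j] - x[i]
        length = d.norm(dim=-1, keepdim=True)
        # Hooke's law along each spring, scattered back onto its endpoints.
        f = stiffness * (length - rest_len.unsqueeze(-1)) * d / (length + 1e-8)
        force = torch.zeros_like(x).index_add(0, i, f).index_add(0, j, -f)
        force = force - damping * v + gravity
        v = v + dt * force * free      # pinned vertices (free=0) stay put
        x = x + dt * v
        traj.append(x)
    return torch.stack(traj)


# Toy "cloth": a chain of 4 vertices, vertex 0 pinned, 3 springs.
x0 = torch.tensor([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0],
                   [0.2, 0.0, 0.0], [0.3, 0.0, 0.0]])
v0 = torch.zeros_like(x0)
springs = torch.tensor([[0, 1], [1, 2], [2, 3]])
rest_len = torch.full((3,), 0.1)
free = torch.tensor([[0.0], [1.0], [1.0], [1.0]])

# "Observed" vertex trajectory, playing the role of the tracked 4D mesh
# (here synthesized with known ground-truth parameters).
with torch.no_grad():
    target = simulate(x0, v0, springs, rest_len, free,
                      stiffness=torch.tensor(50.0),
                      damping=torch.tensor(0.5))

# Unknown physical parameters, optimized in log space to stay positive.
log_k = torch.tensor(3.0, requires_grad=True)
log_c = torch.tensor(-2.0, requires_grad=True)
opt = torch.optim.Adam([log_k, log_c], lr=0.05)

for it in range(200):
    pred = simulate(x0, v0, springs, rest_len, free, log_k.exp(), log_c.exp())
    loss = (pred - target).pow(2).mean()   # match the tracked vertices
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"estimated stiffness={log_k.exp().item():.1f}, "
      f"damping={log_c.exp().item():.2f}")
```

The same pattern (simulate, compare against tracked garment geometry, backpropagate to the material parameters) is what the paper applies at scale with a real cloth simulator and mesh tracking from multi-view video.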
Related papers
- SimAvatar: Simulation-Ready Avatars with Layered Hair and Clothing [59.44721317364197]
We introduce SimAvatar, a framework designed to generate simulation-ready clothed 3D human avatars from a text prompt.
Our method is the first to produce highly realistic, fully simulation-ready 3D avatars, surpassing the capabilities of current approaches.
arXiv Detail & Related papers (2024-12-12T18:35:26Z) - PBDyG: Position Based Dynamic Gaussians for Motion-Aware Clothed Human Avatars [18.101742122988707]
This paper introduces a novel clothed human model that can be learned from multiview RGB videos.
Our method realizes "movement-dependent" cloth deformation via physical simulation.
Experiments demonstrate that our method not only accurately reproduces appearance but also enables the reconstruction of avatars wearing highly deformable garments.
arXiv Detail & Related papers (2024-12-05T18:53:06Z) - PICA: Physics-Integrated Clothed Avatar [30.277983921620663]
We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physics-accurate dynamics, even for loose clothing.
Our method achieves high-fidelity rendering of human bodies in complex and novel driving poses, significantly outperforming previous methods under the same settings.
arXiv Detail & Related papers (2024-07-07T10:23:21Z) - GaussianAvatar: Towards Realistic Human Avatar Modeling from a Single Video via Animatable 3D Gaussians [51.46168990249278]
We present an efficient approach to creating realistic human avatars with dynamic 3D appearances from a single video.
GaussianAvatar is validated on both the public dataset and our collected dataset.
arXiv Detail & Related papers (2023-12-04T18:55:45Z) - CaPhy: Capturing Physical Properties for Animatable Human Avatars [44.95805736197971]
CaPhy is a novel method for reconstructing animatable human avatars with realistic dynamic properties for clothing.
We aim for capturing the geometric and physical properties of the clothing from real observations.
We combine unsupervised training with physics-based losses and 3D-supervised training using scanned data to reconstruct a dynamic model of clothing.
arXiv Detail & Related papers (2023-08-11T04:01:13Z) - Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z) - AvatarGen: a 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space.
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z) - Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing [49.96406805006839]
We introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data.
Our key contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations.
arXiv Detail & Related papers (2022-06-30T17:58:20Z) - PhysXNet: A Customizable Approach for Learning Cloth Dynamics on Dressed People [38.23532960427364]
We introduce PhysXNet, a learning-based approach to predict the dynamics of deformable clothes given 3D skeleton motion sequences of humans wearing these clothes.
PhysXNet is able to estimate the geometry of dense cloth meshes in a matter of milliseconds.
A thorough evaluation demonstrates that PhysXNet delivers cloth deformations very close to those computed with the physical engine.
arXiv Detail & Related papers (2021-11-13T21:05:41Z)