AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using
Garment Rigging Model
- URL: http://arxiv.org/abs/2401.15348v1
- Date: Sat, 27 Jan 2024 08:48:18 GMT
- Title: AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using
Garment Rigging Model
- Authors: Beijia Chen, Yuefan Shen, Qing Shuai, Xiaowei Zhou, Kun Zhou, Youyi
Zheng
- Abstract summary: We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate highly from the body and generalizes well to both unseen views and poses.
- Score: 58.035758145894846
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen significant progress in building photo-realistic
animatable avatars from sparse multi-view videos. However, current workflows
struggle to render realistic garment dynamics for loosely dressed characters, as
they predominantly rely on naked body models for human modeling and leave
the garment part un-modeled. This is mainly because the deformations
produced by loose garments are highly non-rigid, and capturing such deformations
often requires dense views as supervision. In this paper, we introduce
AniDress, a novel method for generating animatable human avatars in loose
clothes using very sparse multi-view videos (4-8 in our setting). To allow the
capturing and appearance learning of loose garments in such a situation, we
employ a virtual bone-based garment rigging model obtained from physics-based
simulation data. Such a model allows us to capture and render complex garment
dynamics through a set of low-dimensional bone transformations. Technically, we
develop a novel method for estimating temporally coherent garment dynamics from a
sparse multi-view video. To render realistic appearance for unseen garment
states from these coarse estimates, a pose-driven deformable neural radiance field
conditioned on both body and garment motions is introduced, providing explicit
control of both parts. At test time, new garment poses can be captured from
unseen situations or derived from a physics-based or neural-network-based
simulator to drive unseen garment dynamics. To evaluate our approach, we create
a multi-view dataset that captures loose-dressed performers with diverse
motions. Experiments show that our method is able to render natural garment
dynamics that deviate highly from the body and generalize well to both unseen
views and poses, surpassing the performance of existing methods. The code and
data will be publicly available.
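The abstract does not come with an implementation, but the central idea of driving loose-garment deformation through a small set of virtual bones can be sketched as linear blend skinning. The sketch below is an illustrative assumption, not the authors' code: the function name, array shapes, and toy data are invented for clarity. In the paper, the skinning weights and virtual-bone placements are obtained from physics-based simulation data rather than set by hand.

```python
# Hypothetical sketch (not the authors' code): virtual-bone garment rigging
# via linear blend skinning. Garment deformation is compressed into a small
# set of per-bone rigid transforms instead of per-vertex offsets.
import numpy as np

def skin_garment(rest_verts, skin_weights, bone_rotations, bone_translations):
    """Deform garment vertices with virtual bones (linear blend skinning).

    rest_verts:        (V, 3) garment vertices in the rest pose
    skin_weights:      (V, B) per-vertex weights over B virtual bones (rows sum to 1)
    bone_rotations:    (B, 3, 3) per-bone rotation matrices
    bone_translations: (B, 3) per-bone translations
    """
    # Transform every vertex by every bone: result has shape (B, V, 3).
    per_bone = np.einsum('bij,vj->bvi', bone_rotations, rest_verts) \
        + bone_translations[:, None, :]
    # Blend the per-bone results with the skinning weights: shape (V, 3).
    return np.einsum('vb,bvi->vi', skin_weights, per_bone)

# Toy example: 4 garment vertices, 2 virtual bones.
rest = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [1.0, 1.0, 0.0]])
weights = np.array([[1.0, 0.0],
                    [0.5, 0.5],
                    [0.0, 1.0],
                    [0.5, 0.5]])
rots = np.stack([np.eye(3), np.eye(3)])        # identity rotations
trans = np.array([[0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.1]])            # second bone lifts its vertices
print(skin_garment(rest, weights, rots, trans))
```

Under this reading, the per-bone rotations and translations form the low-dimensional "garment pose" that, together with the body pose, would condition the pose-driven deformable radiance field at render time.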
Related papers
- Garment Animation NeRF with Color Editing [6.357662418254495]
We propose a novel approach to synthesize garment animations from body motion sequences without the need for an explicit garment proxy.
Our approach infers garment dynamic features from body motion, providing a preliminary overview of garment structure.
We demonstrate the generalizability of our method across unseen body motions and camera views, ensuring detailed structural consistency.
arXiv Detail & Related papers (2024-07-29T08:17:05Z) - PICA: Physics-Integrated Clothed Avatar [30.277983921620663]
We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physics-accurate dynamics, even for loose clothing.
Our method achieves high-fidelity rendering of human bodies in complex and novel driving poses, significantly outperforming previous methods under the same settings.
arXiv Detail & Related papers (2024-07-07T10:23:21Z) - High-Quality Animatable Dynamic Garment Reconstruction from Monocular
Videos [51.8323369577494]
We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
arXiv Detail & Related papers (2023-11-02T13:16:27Z) - Dressing Avatars: Deep Photorealistic Appearance for Physically
Simulated Clothing [49.96406805006839]
We introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data.
Our key contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations.
arXiv Detail & Related papers (2022-06-30T17:58:20Z) - Garment Avatars: Realistic Cloth Driving using Pattern Registration [39.936812232884954]
We propose an end-to-end pipeline for building drivable representations for clothing.
A Garment Avatar is an expressive and fully-drivable geometry model for a piece of clothing.
We demonstrate the efficacy of our pipeline on a realistic virtual telepresence application.
arXiv Detail & Related papers (2022-06-07T15:06:55Z) - Learning Motion-Dependent Appearance for High-Fidelity Rendering of
Dynamic Humans from a Single Camera [49.357174195542854]
A key challenge of learning the dynamics of the appearance lies in the requirement of a prohibitively large amount of observations.
We show that our method can generate a temporally coherent video of dynamic humans for unseen body poses and novel views given a single view video.
arXiv Detail & Related papers (2022-03-24T00:22:03Z) - gDNA: Towards Generative Detailed Neural Avatars [94.9804106939663]
We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
arXiv Detail & Related papers (2022-01-11T18:46:38Z) - Dynamic Neural Garments [45.833166320896716]
We present a solution that takes in body joint motion to directly produce realistic dynamic garment image sequences.
Specifically, given the target joint motion sequence of an avatar, we propose dynamic neural garments to jointly simulate and render plausible dynamic garment appearance.
arXiv Detail & Related papers (2021-02-23T17:21:21Z)