Towards Loose-Fitting Garment Animation via Generative Model of
Deformation Decomposition
- URL: http://arxiv.org/abs/2312.14619v1
- Date: Fri, 22 Dec 2023 11:26:51 GMT
- Title: Towards Loose-Fitting Garment Animation via Generative Model of
Deformation Decomposition
- Authors: Yifu Liu, Xiaoxia Li, Zhiling Luo, Wei Zhou
- Abstract summary: We develop a garment generative model based on deformation decomposition to efficiently simulate loose garment deformation without using linear skinning.
Through extensive experiments, we demonstrate that our method outperforms state-of-the-art data-driven alternatives and provide qualitative and quantitative analyses of the results.
- Score: 4.627632792164547
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing data-driven methods for garment animation are usually driven by linear skinning; although effective on tight garments, they do not handle loose-fitting garments with complex deformations well. To address these limitations, we
develop a garment generative model based on deformation decomposition to
efficiently simulate loose garment deformation without directly using linear
skinning. Specifically, we learn a garment generative space with the proposed
generative model, where we decouple the latent representation into unposed
deformed garments and dynamic offsets during the decoding stage. With this explicit decomposition of garment deformations, our generative model can generate complex pose-driven deformations on canonical garment shapes. Furthermore, we
learn to transfer the body motions and previous state of the garment to the
latent space to regenerate dynamic results. In addition, we introduce a detail
enhancement module in an adversarial training setup to learn high-frequency
wrinkles. Through extensive experiments, we demonstrate that our method outperforms state-of-the-art data-driven alternatives and provide qualitative and quantitative analyses of the results.
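The decoding-stage decomposition can be pictured as a two-branch decoder whose outputs are summed. The following PyTorch sketch illustrates that idea under assumed dimensions; it is not the authors' implementation, and the module names, the latent split by chunking, and the vertex count are all placeholders.

```python
import torch
import torch.nn as nn

class DecompositionDecoder(nn.Module):
    """Toy two-branch decoder: one branch regresses an unposed (canonical)
    garment deformation, the other a dynamic offset; their sum gives the
    final per-vertex displacement. All sizes are illustrative."""

    def __init__(self, latent_dim=128, num_verts=5000):
        super().__init__()
        self.num_verts = num_verts
        out_dim = num_verts * 3
        # Branch 1: pose-driven deformation in the canonical (unposed) space.
        self.unposed_branch = nn.Sequential(
            nn.Linear(latent_dim // 2, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )
        # Branch 2: dynamic offsets capturing inertia and free-swinging motion.
        self.dynamic_branch = nn.Sequential(
            nn.Linear(latent_dim // 2, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, z):
        # Decouple the latent code into two sub-codes, one per branch.
        z_unposed, z_dynamic = z.chunk(2, dim=-1)
        d = self.unposed_branch(z_unposed) + self.dynamic_branch(z_dynamic)
        # Total displacement applied on top of the canonical garment shape.
        return d.view(-1, self.num_verts, 3)

decoder = DecompositionDecoder()
z = torch.randn(2, 128)       # batch of latent codes
offsets = decoder(z)          # (2, 5000, 3) per-vertex displacements
```

Splitting the latent code lets each branch specialize: one on static pose-driven shape, the other on motion-dependent offsets.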
Related papers
- Neural Garment Dynamics via Manifold-Aware Transformers [26.01911475040001]
We take a different approach and model the dynamics of a garment by exploiting its local interactions with the underlying human body.
Specifically, as the body moves, we detect local garment-body collisions, which drive the deformation of the garment.
At the core of our approach is a mesh-agnostic garment representation and a manifold-aware transformer network design.
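Read loosely, "detect local garment-body collisions" suggests a per-vertex proximity query against the body surface. A minimal sketch with SciPy's KD-tree follows; the distance-threshold contact test and the radius value are assumptions for illustration, not the paper's actual collision handling.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_contacts(garment_verts, body_verts, radius=0.01):
    """Return, for each garment vertex, the index of the nearest body
    vertex and a boolean mask of vertices within `radius` of the body.
    These near-contact regions would drive local deformation."""
    tree = cKDTree(body_verts)
    dist, nearest = tree.query(garment_verts)
    return nearest, dist < radius

# Toy usage with random point clouds standing in for meshes.
garment = np.random.rand(1000, 3)
body = np.random.rand(2000, 3)
nearest_idx, in_contact = local_contacts(garment, body)
print(in_contact.sum(), "garment vertices near the body")
```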
arXiv Detail & Related papers (2024-05-13T11:05:52Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body and generalizes well to both unseen views and poses.
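A radiance field conditioned on both body and garment motions can be sketched as an MLP whose point samples are concatenated with two motion codes. The snippet below is a schematic reading of that design; the code dimensions and concatenation-based conditioning are assumptions, not AniDress internals.

```python
import torch
import torch.nn as nn

class ConditionedNeRF(nn.Module):
    """Radiance field f(x, body_code, garment_code) -> (density, rgb).
    Separate body/garment motion codes give explicit control of each part."""

    def __init__(self, body_dim=32, garment_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + body_dim + garment_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density + 3 RGB channels
        )

    def forward(self, x, body_code, garment_code):
        h = torch.cat([x, body_code, garment_code], dim=-1)
        out = self.mlp(h)
        density = torch.relu(out[..., :1])     # non-negative density
        rgb = torch.sigmoid(out[..., 1:])      # colors in [0, 1]
        return density, rgb

field = ConditionedNeRF()
x = torch.rand(4096, 3)          # sample points along rays
body = torch.randn(4096, 32)     # body-motion code, repeated per point
garment = torch.randn(4096, 32)  # garment-motion code, repeated per point
density, rgb = field(x, body, garment)
```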
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- Garment Recovery with Shape and Deformation Priors [51.41962835642731]
We propose a method that delivers realistic garment models from real-world images, regardless of garment shape or deformation.
Not only does our approach recover the garment geometry accurately, it also yields models that can be directly used by downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-17T07:06:21Z)
- High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos [51.8323369577494]
We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
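A "learnable garment deformation network" for unseen poses is often realized as a pose-conditioned regressor of per-vertex offsets over a template mesh. The sketch below shows that generic pattern; the SMPL-style 72-dimensional pose vector and the network sizes are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class PoseToOffsets(nn.Module):
    """Map a body pose vector to per-vertex garment offsets added to a
    template mesh. 24 joints x 3 axis-angle params mirrors SMPL-style
    pose vectors (an assumption here)."""

    def __init__(self, pose_dim=72, num_verts=8000):
        super().__init__()
        self.num_verts = num_verts
        self.net = nn.Sequential(
            nn.Linear(pose_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, num_verts * 3),
        )

    def forward(self, pose, template_verts):
        # Predicted offsets deform the shared template for any input pose.
        offsets = self.net(pose).view(-1, self.num_verts, 3)
        return template_verts.unsqueeze(0) + offsets

model = PoseToOffsets()
pose = torch.randn(1, 72)
template = torch.rand(8000, 3)
deformed = model(pose, template)   # (1, 8000, 3) posed garment vertices
```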
arXiv Detail & Related papers (2023-11-02T13:16:27Z)
- SwinGar: Spectrum-Inspired Neural Dynamic Deformation for Free-Swinging Garments [6.821050909555717]
We present a spectrum-inspired learning-based approach for generating clothing deformations with dynamic effects and personalized details.
Our proposed method overcomes the limitations of garment-specific models by providing a unified framework that predicts dynamic behavior for different garments.
We develop a dynamic clothing deformation estimator that integrates frequency-controllable attention mechanisms with long short-term memory.
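One plausible reading of combining "frequency-controllable attention mechanisms with long short-term memory" is an LSTM motion encoder whose state gates a set of frequency bands of deformation. The sketch below invents the band decomposition and softmax gating purely for illustration; none of it is SwinGar's actual architecture.

```python
import torch
import torch.nn as nn

class FrequencyGatedLSTM(nn.Module):
    """Encode a motion sequence with an LSTM, then predict per-band
    gates that weight low- vs. high-frequency deformation components."""

    def __init__(self, motion_dim=72, hidden=128, num_bands=4, num_verts=6000):
        super().__init__()
        self.lstm = nn.LSTM(motion_dim, hidden, batch_first=True)
        self.gates = nn.Linear(hidden, num_bands)                 # one weight per band
        self.bands = nn.Linear(hidden, num_bands * num_verts * 3) # band deformations
        self.num_bands, self.num_verts = num_bands, num_verts

    def forward(self, motion_seq):
        _, (h, _) = self.lstm(motion_seq)          # final hidden state
        h = h[-1]
        w = torch.softmax(self.gates(h), dim=-1)   # frequency gates
        bands = self.bands(h).view(-1, self.num_bands, self.num_verts, 3)
        # Weighted sum over frequency bands -> final deformation.
        return (w[:, :, None, None] * bands).sum(dim=1)

model = FrequencyGatedLSTM()
seq = torch.randn(2, 30, 72)   # 30 frames of body-motion features
deform = model(seq)            # (2, 6000, 3) per-vertex deformation
```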
arXiv Detail & Related papers (2023-08-05T09:09:50Z)
- HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics [84.29846699151288]
Our method is agnostic to body shape and applies to tight-fitting garments as well as loose, free-flowing clothing.
As one key contribution, we propose a hierarchical message-passing scheme that efficiently propagates stiff stretching modes.
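Hierarchical message passing can be illustrated as alternating propagation on a fine garment graph and a pooled coarse graph, so that stiff stretching modes cross the mesh in few hops. The toy code below uses sum pooling and mean aggregation; both choices are assumptions rather than HOOD's actual operators.

```python
import torch

def propagate(x, edges):
    """One round of mean-aggregation message passing.
    x: (V, F) node features; edges: (E, 2) directed index pairs."""
    src, dst = edges[:, 0], edges[:, 1]
    msg_sum = torch.zeros_like(x).index_add_(0, dst, x[src])
    deg = torch.zeros(x.size(0), 1).index_add_(
        0, dst, torch.ones(edges.size(0), 1))
    return x + msg_sum / deg.clamp(min=1)

def hierarchical_step(x_fine, fine_edges, coarse_edges, assign):
    """Fine -> coarse sum pooling, coarse propagation (covers long
    distances cheaply), then broadcast back and refine on the fine graph."""
    num_coarse = int(assign.max()) + 1
    x_coarse = torch.zeros(num_coarse, x_fine.size(1)).index_add_(
        0, assign, x_fine)
    x_coarse = propagate(x_coarse, coarse_edges)
    x_fine = x_fine + x_coarse[assign]   # unpool coarse info to fine nodes
    return propagate(x_fine, fine_edges)

# Toy graph: 6 fine nodes pooled into 2 coarse nodes.
x = torch.randn(6, 8)
fine_edges = torch.tensor([[0, 1], [1, 2], [3, 4], [4, 5]])
coarse_edges = torch.tensor([[0, 1], [1, 0]])
assign = torch.tensor([0, 0, 0, 1, 1, 1])
out = hierarchical_step(x, fine_edges, coarse_edges, assign)
```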
arXiv Detail & Related papers (2022-12-14T14:24:00Z)
- DIG: Draping Implicit Garment over the Human Body [56.68349332089129]
We propose an end-to-end differentiable pipeline that represents garments using implicit surfaces and learns a skinning field conditioned on shape and pose parameters of an articulated body model.
We show that our method, thanks to its end-to-end differentiability, allows body and garment parameters to be recovered jointly from image observations.
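A skinning field of this kind can be sketched as an MLP that predicts linear-blend-skinning weights for arbitrary points, followed by standard LBS. In the sketch below, the conditioning-code dimension and joint count are placeholder assumptions, not DIG's actual parameterization.

```python
import torch
import torch.nn as nn

class SkinningField(nn.Module):
    """Predict per-point linear-blend-skinning weights from position
    plus a shape/pose conditioning code."""

    def __init__(self, cond_dim=16, num_joints=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + cond_dim, 128), nn.ReLU(),
            nn.Linear(128, num_joints),
        )

    def forward(self, pts, cond):
        w = self.net(torch.cat([pts, cond], dim=-1))
        return torch.softmax(w, dim=-1)   # weights sum to 1 per point

def lbs(pts, weights, joint_transforms):
    """Standard LBS: blend per-joint rigid transforms by the weights.
    joint_transforms: (J, 4, 4) homogeneous matrices."""
    homo = torch.cat([pts, torch.ones(pts.size(0), 1)], dim=-1)  # (N, 4)
    blended = torch.einsum("nj,jab->nab", weights, joint_transforms)
    return torch.einsum("nab,nb->na", blended, homo)[:, :3]

field = SkinningField()
pts = torch.rand(1000, 3)
cond = torch.randn(1000, 16)
weights = field(pts, cond)
transforms = torch.eye(4).expand(24, 4, 4)  # identity pose for the demo
posed = lbs(pts, weights, transforms)       # equals pts for identity pose
```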
arXiv Detail & Related papers (2022-09-22T08:13:59Z)
- Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks [63.596602299263935]
We present a learning algorithm that uses bone-driven motion networks to predict the deformation of loose-fitting garment meshes at interactive rates.
We show that our method outperforms state-of-the-art methods in terms of prediction accuracy of mesh deformations by about 20% in RMSE and 10% in Hausdorff distance and STED.
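For context on these metrics, vertex RMSE and the symmetric Hausdorff distance between predicted and ground-truth vertex sets can be computed as below (STED, a spatio-temporal edge metric, is omitted for brevity). This is a generic evaluation sketch, not the paper's code.

```python
import numpy as np
from scipy.spatial import cKDTree

def vertex_rmse(pred, gt):
    """Root-mean-square error over corresponding vertices, both (V, 3)."""
    return np.sqrt(np.mean(np.sum((pred - gt) ** 2, axis=-1)))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    d_ab = cKDTree(b).query(a)[0].max()   # farthest a-point from b
    d_ba = cKDTree(a).query(b)[0].max()   # farthest b-point from a
    return max(d_ab, d_ba)

pred = np.random.rand(5000, 3)
gt = pred + 0.001 * np.random.randn(5000, 3)
print("RMSE:", vertex_rmse(pred, gt), "Hausdorff:", hausdorff(pred, gt))
```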
arXiv Detail & Related papers (2022-05-03T07:54:39Z)
- Detail-aware Deep Clothing Animations Infused with Multi-source Attributes [1.6400484152578603]
This paper presents a novel learning-based clothing deformation method to generate rich and reasonable detailed deformations for garments worn by bodies of various shapes in various animations.
In contrast to existing learning-based methods, which require numerous trained models for different garment topologies or poses, we use a unified framework to produce high fidelity deformations efficiently and easily.
Experimental results show that our proposed deformation method outperforms existing methods in terms of generalization ability and quality of details.
arXiv Detail & Related papers (2021-12-15T08:50:49Z)
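A unified, multi-source-attribute framework of this kind typically embeds each attribute source and concatenates the embeddings into one conditioning code. The sketch below shows that common pattern under assumed dimensions; it is not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultiSourceDeformer(nn.Module):
    """Fuse several attribute embeddings into one code that drives a
    shared deformation decoder, so one model serves many inputs."""

    def __init__(self, num_verts=6000):
        super().__init__()
        self.shape_enc = nn.Linear(10, 32)   # body-shape parameters
        self.pose_enc = nn.Linear(72, 64)    # body-pose parameters
        self.style_enc = nn.Linear(4, 16)    # garment attributes (size, fit, ...)
        self.decoder = nn.Sequential(
            nn.Linear(32 + 64 + 16, 256), nn.ReLU(),
            nn.Linear(256, num_verts * 3),
        )
        self.num_verts = num_verts

    def forward(self, shape, pose, style):
        # Concatenated per-source embeddings form the conditioning code.
        code = torch.cat([self.shape_enc(shape),
                          self.pose_enc(pose),
                          self.style_enc(style)], dim=-1)
        return self.decoder(code).view(-1, self.num_verts, 3)

model = MultiSourceDeformer()
offsets = model(torch.randn(1, 10), torch.randn(1, 72), torch.randn(1, 4))
```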