SwinGar: Spectrum-Inspired Neural Dynamic Deformation for Free-Swinging Garments
- URL: http://arxiv.org/abs/2308.02827v1
- Date: Sat, 5 Aug 2023 09:09:50 GMT
- Title: SwinGar: Spectrum-Inspired Neural Dynamic Deformation for Free-Swinging Garments
- Authors: Tianxing Li, Rui Shi, Qing Zhu, Takashi Kanai
- Abstract summary: We present a spectrum-inspired learning-based approach for generating clothing deformations with dynamic effects and personalized details.
Our proposed method overcomes the limitations of existing approaches by providing a unified framework that predicts dynamic behavior for diverse garments.
We develop a dynamic clothing deformation estimator that integrates frequency-controllable attention mechanisms with long short-term memory.
- Score: 6.821050909555717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our work presents a novel spectrum-inspired learning-based approach for
generating clothing deformations with dynamic effects and personalized details.
Existing methods in the field of clothing animation are limited to either
static behavior or specific network models for individual garments, which
hinders their applicability in real-world scenarios where diverse animated
garments are required. Our proposed method overcomes these limitations by
providing a unified framework that predicts dynamic behavior for different
garments with arbitrary topology and looseness, resulting in versatile and
realistic deformations. First, we observe that a bias towards low frequencies
consistently hampers supervised learning and leads to overly smooth
deformations. To address this issue, we introduce a frequency-control strategy,
formulated from a spectral perspective, that enhances the generation of
high-frequency deformation details. In addition, to make the network highly
generalizable and able to learn diverse clothing deformations effectively, we
propose a spectral descriptor that provides a generalized description of global
shape information. Building on these strategies, we develop a
dynamic clothing deformation estimator that integrates frequency-controllable
attention mechanisms with long short-term memory. The estimator takes as input
expressive features from garments and human bodies, allowing it to
automatically output continuous deformations for diverse clothing types,
independent of mesh topology or vertex count. Finally, we present a neural
collision handling method to further enhance the realism of garments. Our
experimental results demonstrate the effectiveness of our approach on a variety
of free-swinging garments and its superiority over state-of-the-art methods.
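The abstract describes the frequency-control strategy only at a high level. As a hedged sketch of the general idea, the snippet below reweights a supervised reconstruction loss in a spectral basis so that high-frequency deformation components are not drowned out by dominant low-frequency errors. The use of mesh-Laplacian eigenvectors as the basis, the function name, and the weighting schedule are all assumptions for illustration, not the paper's implementation.

```python
import torch

def spectral_weighted_loss(pred, target, basis, alpha=2.0):
    """Hypothetical frequency-weighted loss (illustration only).

    pred, target: (V, 3) predicted / ground-truth vertex offsets.
    basis:        (V, K) orthonormal eigenvectors of the mesh
                  Laplacian, ordered from low to high frequency.
    alpha:        strength of the high-frequency emphasis (assumed).
    """
    # Project the residual into the spectral domain: (K, 3).
    coeff_err = basis.T @ (pred - target)
    # Weights grow with band index so high-frequency detail is not
    # swamped by the dominant low-frequency error terms.
    k = torch.arange(basis.shape[1], dtype=pred.dtype, device=pred.device)
    weights = 1.0 + alpha * k / max(basis.shape[1] - 1, 1)
    return (weights[:, None] * coeff_err.pow(2)).mean()
```

The first few eigenvalues of the same Laplacian are a classical, topology-independent summary of global mesh shape, loosely analogous in spirit to the spectral descriptor the abstract mentions.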
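The estimator itself is described as frequency-controllable attention combined with long short-term memory, taking garment and body features as input and working independently of mesh topology or vertex count. Below is a minimal per-vertex sketch of such a module; the wiring and dimensions are assumed, and plain multi-head attention stands in for the paper's frequency-controllable variant.

```python
import torch
import torch.nn as nn

class DeformationEstimatorSketch(nn.Module):
    """Assumed wiring of an attention + LSTM estimator; plain
    multi-head attention replaces the paper's frequency-controllable
    attention, which is not specified in the abstract."""

    def __init__(self, feat_dim=64, heads=4):
        super().__init__()
        # Attention mixes information across garment vertices and is
        # agnostic to mesh topology and vertex count.
        self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)
        # The LSTM carries deformation state across animation frames,
        # producing continuous dynamic (rather than static) behavior.
        self.lstm = nn.LSTM(feat_dim, feat_dim, batch_first=True)
        self.head = nn.Linear(feat_dim, 3)  # per-vertex displacement

    def forward(self, feats, state=None):
        # feats: (V, F) garment + body features for one frame.
        x = feats.unsqueeze(0)                     # (1, V, F)
        x, _ = self.attn(x, x, x)                  # mix across vertices
        x = x.squeeze(0).unsqueeze(1)              # (V, 1, F): one time step
        x, state = self.lstm(x, state)             # recur frame to frame
        return self.head(x.squeeze(1)), state      # (V, 3) offsets
```

Calling the module frame by frame while threading state through successive calls yields temporally coherent per-vertex offsets for any vertex count, provided the garment's vertex set stays fixed within a sequence.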
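Neural collision handling is likewise mentioned without detail. As a generic stand-in (not the paper's neural method), the sketch below projects penetrating garment vertices back outside the body along the gradient of a signed distance field; body_sdf and body_grad are hypothetical callables.

```python
import torch

def resolve_collisions(garment_verts, body_sdf, body_grad, eps=2e-3):
    """Classical SDF projection as a rough baseline for collision
    handling (assumption; the paper uses a learned method).

    garment_verts: (V, 3) predicted garment vertex positions.
    body_sdf:      callable, (V, 3) points -> (V,) signed distances
                   (negative inside the body).
    body_grad:     callable, (V, 3) points -> (V, 3) unit gradients
                   of the signed distance field.
    eps:           small clearance kept between cloth and skin.
    """
    d = body_sdf(garment_verts)                 # (V,) distances to body
    inside = d < eps                            # penetrating or too close
    push = (eps - d).clamp(min=0.0)[:, None]    # how far to move outward
    n = body_grad(garment_verts)                # outward directions
    return torch.where(inside[:, None], garment_verts + push * n, garment_verts)
```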
Related papers
- AnyFit: Controllable Virtual Try-on for Any Combination of Attire Across Any Scenario [50.62711489896909]
AnyFit surpasses all baselines on high-resolution benchmarks and real-world data by a large margin.
Its strong performance on high-fidelity virtual try-on, in any scenario and from any image, paves a new path for future research within the fashion community.
arXiv Detail & Related papers (2024-05-28T13:33:08Z)
- Neural Garment Dynamics via Manifold-Aware Transformers [26.01911475040001]
We take a different approach and model the dynamics of a garment by exploiting its local interactions with the underlying human body.
Specifically, as the body moves, we detect local garment-body collisions, which drive the deformation of the garment.
At the core of our approach is a mesh-agnostic garment representation and a manifold-aware transformer network design.
arXiv Detail & Related papers (2024-05-13T11:05:52Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method is able to render natural garment dynamics that deviate significantly from the body and to generalize well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- Towards Loose-Fitting Garment Animation via Generative Model of Deformation Decomposition [4.627632792164547]
We develop a garment generative model based on deformation decomposition to efficiently simulate loose garment deformation without using linear skinning.
We demonstrate that our method outperforms state-of-the-art data-driven alternatives through extensive experiments, with qualitative and quantitative analysis of the results.
arXiv Detail & Related papers (2023-12-22T11:26:51Z)
- High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos [51.8323369577494]
We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
arXiv Detail & Related papers (2023-11-02T13:16:27Z)
- Towards Multi-Layered 3D Garments Animation [135.77656965678196]
Existing approaches mostly focus on single-layered garments driven by only human bodies and struggle to handle general scenarios.
We propose a novel data-driven method, called LayersNet, to model garment-level animations as particle-wise interactions in a micro physics system.
Our experiments show that LayersNet achieves superior performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-05-17T17:53:04Z)
- HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics [84.29846699151288]
Our method is agnostic to body shape and applies to tight-fitting garments as well as loose, free-flowing clothing.
As one key contribution, we propose a hierarchical message-passing scheme that efficiently propagates stiff stretching modes.
arXiv Detail & Related papers (2022-12-14T14:24:00Z)
- Detail-aware Deep Clothing Animations Infused with Multi-source Attributes [1.6400484152578603]
This paper presents a novel learning-based clothing deformation method that generates rich, plausible, and detailed deformations for garments worn by bodies of various shapes in various animations.
In contrast to existing learning-based methods, which require numerous trained models for different garment topologies or poses, we use a unified framework to produce high-fidelity deformations efficiently and easily.
Experimental results show that our proposed method outperforms existing methods in terms of generalization ability and quality of details.
arXiv Detail & Related papers (2021-12-15T08:50:49Z)
- Towards Scalable Unpaired Virtual Try-On via Patch-Routed Spatially-Adaptive GAN [66.3650689395967]
We propose a texture-preserving end-to-end network, the PAtch-routed SpaTially-Adaptive GAN (PASTA-GAN), that facilitates real-world unpaired virtual try-on.
To disentangle the style and spatial information of each garment, PASTA-GAN consists of an innovative patch-routed disentanglement module.
arXiv Detail & Related papers (2021-11-20T08:36:12Z)