HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics
- URL: http://arxiv.org/abs/2212.07242v3
- Date: Fri, 16 Jun 2023 09:01:34 GMT
- Title: HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics
- Authors: Artur Grigorev, Bernhard Thomaszewski, Michael J. Black, Otmar
Hilliges
- Abstract summary: Our method is agnostic to body shape and applies to tight-fitting garments as well as loose, free-flowing clothing.
As one key contribution, we propose a hierarchical message-passing scheme that efficiently propagates stiff stretching modes.
- Score: 84.29846699151288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method that leverages graph neural networks, multi-level message
passing, and unsupervised training to enable real-time prediction of realistic
clothing dynamics. Whereas existing methods based on linear blend skinning must
be trained for specific garments, our method is agnostic to body shape and
applies to tight-fitting garments as well as loose, free-flowing clothing. Our
method furthermore handles changes in topology (e.g., garments with buttons or
zippers) and material properties at inference time. As one key contribution, we
propose a hierarchical message-passing scheme that efficiently propagates stiff
stretching modes while preserving local detail. We empirically show that our
method outperforms strong baselines quantitatively and that its results are
perceived as more realistic than state-of-the-art methods.
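To make the hierarchical message-passing idea concrete, below is a minimal NumPy sketch of a two-level scheme. It is an illustrative simplification, not the paper's architecture (HOOD uses learned message functions and multiple graph levels); `message_pass`, `hierarchical_step`, the mean pooling, and the residual updates are all assumptions made for exposition.

```python
import numpy as np

def message_pass(x, edges):
    """One round of mean-aggregation message passing over undirected edges.

    x:     (N, F) node features
    edges: (E, 2) index pairs
    """
    agg = np.zeros_like(x)
    deg = np.zeros((x.shape[0], 1))
    for i, j in edges:
        agg[i] += x[j]; agg[j] += x[i]
        deg[i] += 1;    deg[j] += 1
    return x + agg / np.maximum(deg, 1)  # residual update

def hierarchical_step(x_fine, fine_edges, coarse_edges, pool):
    """Two-level scheme: pool fine nodes onto a coarse graph, pass messages
    there (long-range signal travels farther per step), then broadcast back
    and refine locally on the fine graph.

    pool: (N,) integer array mapping each fine node to its coarse node.
    """
    n_coarse = pool.max() + 1
    # average-pool fine features onto coarse nodes
    x_coarse = np.zeros((n_coarse, x_fine.shape[1]))
    counts = np.bincount(pool, minlength=n_coarse)[:, None]
    np.add.at(x_coarse, pool, x_fine)
    x_coarse /= np.maximum(counts, 1)
    # coarse-level propagation covers long distances cheaply
    x_coarse = message_pass(x_coarse, coarse_edges)
    # broadcast coarse context back and refine local detail
    x_fine = x_fine + x_coarse[pool]
    return message_pass(x_fine, fine_edges)
```

Each coarse round moves information across many fine edges at once, which is why stiff stretching modes, whose influence is effectively global, can settle in far fewer rounds than fine-only message passing would need.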
Related papers
- Neural Garment Dynamics via Manifold-Aware Transformers [26.01911475040001]
We take a different approach and model the dynamics of a garment by exploiting its local interactions with the underlying human body.
Specifically, as the body moves, we detect local garment-body collisions, which drive the deformation of the garment.
At the core of our approach is a mesh-agnostic garment representation and a manifold-aware transformer network design.
arXiv Detail & Related papers (2024-05-13T11:05:52Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
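The summary above does not specify how the garment-body collisions are detected; a common nearest-neighbor-plus-normal test, sketched here with entirely hypothetical names, conveys the kind of contact signal that could drive the deformation:

```python
import numpy as np

def garment_body_collisions(garment_v, body_v, body_n, eps=5e-3):
    """For each garment vertex, find the nearest body vertex and test
    whether the garment lies on or inside the body surface (a collision).

    garment_v: (G, 3) garment vertex positions
    body_v:    (B, 3) body vertex positions
    body_n:    (B, 3) outward unit normals at the body vertices
    Returns (nearest_idx, depth, colliding); depth > -eps flags contact.
    """
    # brute-force pairwise distances; fine for a sketch, use a KD-tree at scale
    d2 = ((garment_v[:, None, :] - body_v[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)                      # (G,)
    offset = garment_v - body_v[nearest]             # (G, 3)
    # signed distance along the body normal: negative = inside the body
    signed = (offset * body_n[nearest]).sum(-1)      # (G,)
    depth = -signed                                  # penetration depth
    return nearest, depth, depth > -eps
```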
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body, and it generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- Towards Loose-Fitting Garment Animation via Generative Model of Deformation Decomposition [4.627632792164547]
We develop a garment generative model based on deformation decomposition to efficiently simulate loose garment deformation without using linear skinning.
We demonstrate through extensive experiments that our method outperforms state-of-the-art data-driven alternatives, with qualitative and quantitative analysis of the results.
arXiv Detail & Related papers (2023-12-22T11:26:51Z)
- GAPS: Geometry-Aware, Physics-Based, Self-Supervised Neural Garment Draping [8.60320342646772]
Recent neural, physics-based models of garment deformation produce faster and more visually appealing results.
The formulation uses material-specific parameters to control garment inextensibility.
We propose a geometry-aware garment skinning method by defining a body-garment closeness measure.
arXiv Detail & Related papers (2023-12-03T19:21:53Z)
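GAPS's precise closeness measure is not given in the summary above; as one plausible reading, garment vertices could borrow body skinning weights in proportion to a Gaussian closeness score. The sketch below (all names and the `sigma` bandwidth are assumptions) illustrates that idea:

```python
import numpy as np

def closeness_weights(garment_v, body_v, body_skin_w, sigma=0.05):
    """Diffuse body skinning weights onto garment vertices, weighted by a
    Gaussian closeness measure so that only nearby body regions influence
    each garment vertex.

    garment_v:   (G, 3) garment vertices
    body_v:      (B, 3) body vertices
    body_skin_w: (B, J) body skinning weights over J joints (rows sum to 1)
    """
    d2 = ((garment_v[:, None, :] - body_v[None, :, :]) ** 2).sum(-1)  # (G, B)
    closeness = np.exp(-d2 / (2 * sigma ** 2))                        # (G, B)
    closeness /= closeness.sum(axis=1, keepdims=True)
    w = closeness @ body_skin_w                                       # (G, J)
    return w / w.sum(axis=1, keepdims=True)  # renormalise for numerical safety
```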
- Garment Recovery with Shape and Deformation Priors [51.41962835642731]
We propose a method that delivers realistic garment models from real-world images, regardless of garment shape or deformation.
Not only does our approach recover the garment geometry accurately, it also yields models that can be directly used by downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-17T07:06:21Z)
- SwinGar: Spectrum-Inspired Neural Dynamic Deformation for Free-Swinging Garments [6.821050909555717]
We present a spectrum-inspired learning-based approach for generating clothing deformations with dynamic effects and personalized details.
Our method provides a unified framework that predicts dynamic behavior across different garments, overcoming the limitations of garment-specific models.
We develop a dynamic clothing deformation estimator that integrates frequency-controllable attention mechanisms with long short-term memory.
arXiv Detail & Related papers (2023-08-05T09:09:50Z)
- DIG: Draping Implicit Garment over the Human Body [56.68349332089129]
We propose an end-to-end differentiable pipeline that represents garments using implicit surfaces and learns a skinning field conditioned on shape and pose parameters of an articulated body model.
We show that, thanks to its end-to-end differentiability, our method can recover body and garment parameters jointly from image observations.
arXiv Detail & Related papers (2022-09-22T08:13:59Z)
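As a rough illustration of draping an implicit garment with a skinning field, as in the DIG summary above, the sketch below evaluates a canonical-space SDF at un-posed query points via inverted blended bone transforms. This is a simplification: DIG learns its skinning field, and the posed-to-canonical correspondence is handled more carefully than a single matrix inverse; every name here is hypothetical.

```python
import numpy as np

def query_posed_garment(points, sdf_canonical, skin_weights, joint_T):
    """Evaluate a canonically-defined implicit garment at points given in
    posed space: un-pose each query point with the inverse of its blended
    bone transform, then read the canonical SDF.

    points:        (P, 3) query points in posed space
    sdf_canonical: callable (P, 3) -> (P,) signed distances, canonical space
    skin_weights:  callable (P, 3) -> (P, J) skinning field at the points
    joint_T:       (J, 4, 4) bone transforms (canonical -> posed)
    """
    w = skin_weights(points)                       # (P, J)
    T = np.einsum('pj,jab->pab', w, joint_T)       # blended transforms
    T_inv = np.linalg.inv(T)                       # posed -> canonical
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    canonical = np.einsum('pab,pb->pa', T_inv, homo)[:, :3]
    return sdf_canonical(canonical)
```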
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all-type clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach achieves better performance than state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- Powerpropagation: A sparsity inducing weight reparameterisation [65.85142037667065]
We introduce Powerpropagation, a new weight parameterisation for neural networks that leads to inherently sparse models.
Models trained in this manner perform comparably to standard training, but their weight distribution has markedly higher density at zero, allowing more parameters to be pruned safely.
Here, we combine Powerpropagation with a traditional weight-pruning technique as well as recent state-of-the-art sparse-to-sparse algorithms, showing superior performance on the ImageNet benchmark.
arXiv Detail & Related papers (2021-10-01T10:03:57Z)
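For context, Powerpropagation reparameterises each weight as w = theta * |theta|^(alpha - 1). The toy sketch below shows the forward map and its gradient; the |theta|^(alpha - 1) factor in the gradient is what keeps small parameters small, producing the zero-peaked weight distribution that prunes safely. (The function names and the alpha value are illustrative; for alpha = 1 it reduces to ordinary training.)

```python
import numpy as np

def powerprop_weight(theta, alpha=2.0):
    """Powerpropagation reparameterisation: the effective weight is
    w = theta * |theta|**(alpha - 1), so w keeps theta's sign while
    small parameters are shrunk towards zero."""
    return theta * np.abs(theta) ** (alpha - 1)

def powerprop_grad(theta, grad_w, alpha=2.0):
    """Chain rule through the reparameterisation:
    dL/dtheta = dL/dw * dw/dtheta = grad_w * alpha * |theta|**(alpha - 1).
    Near-zero parameters receive near-zero gradient, so they stay near
    zero -- the 'rich get richer' dynamic behind the sparsity."""
    return grad_w * alpha * np.abs(theta) ** (alpha - 1)

theta = np.array([-1.5, -0.1, 0.02, 0.8])
print(powerprop_weight(theta))  # small entries shrink: [-2.25 -0.01 0.0004 0.64]
```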