Detail-aware Deep Clothing Animations Infused with Multi-source
Attributes
- URL: http://arxiv.org/abs/2112.07974v1
- Date: Wed, 15 Dec 2021 08:50:49 GMT
- Title: Detail-aware Deep Clothing Animations Infused with Multi-source
Attributes
- Authors: Tianxing Li, Rui Shi, Takashi Kanai
- Abstract summary: This paper presents a novel learning-based clothing deformation method to generate rich and reasonable detailed deformations for garments worn by bodies of various shapes in various animations.
In contrast to existing learning-based methods, which require numerous trained models for different garment topologies or poses, we use a unified framework to produce high fidelity deformations efficiently and easily.
Experiment results show that our proposed deformation method achieves better performance than existing methods in terms of generalization ability and quality of details.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel learning-based clothing deformation method to
generate rich and reasonable detailed deformations for garments worn by bodies
of various shapes in various animations. In contrast to existing learning-based
methods, which require numerous trained models for different garment topologies
or poses and are unable to easily realize rich details, we use a unified
framework to produce high fidelity deformations efficiently and easily. To
address the challenging issue of predicting deformations influenced by
multi-source attributes, we propose three strategies from novel perspectives.
Specifically, we first found that the fit between the garment and the body has
an important impact on the degree of folds. We then designed an attribute
parser to generate detail-aware encodings and infused them into the graph
neural network, thereby enhancing the discrimination of details under diverse
attributes. Furthermore, to achieve better convergence and avoid overly smooth
deformations, we proposed output reconstruction to mitigate the complexity of
the learning task. Experiment results show that our proposed deformation method
achieves better performance over existing methods in terms of generalization
ability and quality of details.
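The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the attribute parser is shown as a toy MLP that maps multi-source attributes (e.g. body shape, pose, garment-body fit) to a detail-aware encoding, and "infusing" is shown as broadcasting that encoding onto every vertex before one mean-aggregation message-passing step on the garment mesh graph. All function names, layer sizes, and the aggregation scheme here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def attribute_parser(attrs, W1, W2):
    """Toy MLP: maps concatenated multi-source attributes
    (body shape, pose, garment-body fit) to a detail-aware encoding."""
    h = np.maximum(attrs @ W1, 0.0)  # ReLU hidden layer
    return h @ W2                    # detail-aware encoding vector

def gnn_layer(vertex_feats, adjacency, encoding, W_msg):
    """One message-passing step on the garment mesh graph, with the
    detail-aware encoding broadcast ("infused") into every vertex."""
    n = vertex_feats.shape[0]
    infused = np.concatenate(
        [vertex_feats, np.tile(encoding, (n, 1))], axis=1)
    # Mean-aggregate features from neighbouring vertices.
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    agg = (adjacency @ infused) / deg
    return np.maximum(agg @ W_msg, 0.0)

# Hypothetical sizes: 4 mesh vertices, 3-dim vertex features,
# 5-dim multi-source attributes, 2-dim detail-aware encoding.
attrs = rng.normal(size=5)
W1, W2 = rng.normal(size=(5, 8)), rng.normal(size=(8, 2))
enc = attribute_parser(attrs, W1, W2)

verts = rng.normal(size=(4, 3))
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
W_msg = rng.normal(size=(3 + 2, 6))
out = gnn_layer(verts, adj, enc, W_msg)
print(out.shape)  # updated per-vertex features
```

In the paper's actual framework the network would additionally use the proposed output reconstruction to ease the learning task; that component is omitted here.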
Related papers
- Deep ContourFlow: Advancing Active Contours with Deep Learning [3.9948520633731026]
We present a framework for both unsupervised and one-shot approaches for image segmentation.
It is capable of capturing complex object boundaries without the need for extensive labeled training data.
Such capability is particularly needed in histology, a field facing a significant shortage of annotations.
arXiv Detail & Related papers (2024-07-15T13:12:34Z) - Towards Loose-Fitting Garment Animation via Generative Model of
Deformation Decomposition [4.627632792164547]
We develop a garment generative model based on deformation decomposition to efficiently simulate loose garment deformation without using linear skinning.
We demonstrate our method outperforms state-of-the-art data-driven alternatives through extensive experiments and show qualitative and quantitative analysis of results.
arXiv Detail & Related papers (2023-12-22T11:26:51Z) - CLAP: Isolating Content from Style through Contrastive Learning with Augmented Prompts [11.752632557524969]
We propose contrastive learning with data augmentation to disentangle content features from the original representations.
Our experiments across diverse datasets demonstrate significant improvements in zero-shot and few-shot classification tasks.
arXiv Detail & Related papers (2023-11-28T03:00:59Z) - High-Quality Animatable Dynamic Garment Reconstruction from Monocular
Videos [51.8323369577494]
We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
arXiv Detail & Related papers (2023-11-02T13:16:27Z) - Semantic-Aware Implicit Template Learning via Part Deformation
Consistency [18.63665468429503]
We propose a semantic-aware implicit template learning framework to enable semantically plausible deformation.
By leveraging semantic prior from a self-supervised feature extractor, we suggest local conditioning with novel semantic-aware deformation code.
Our experiments demonstrate the superiority of the proposed method over baselines in various tasks.
arXiv Detail & Related papers (2023-08-23T05:02:17Z) - SwinGar: Spectrum-Inspired Neural Dynamic Deformation for Free-Swinging
Garments [6.821050909555717]
We present a spectrum-inspired learning-based approach for generating clothing deformations with dynamic effects and personalized details.
Our proposed method overcomes limitations by providing a unified framework that predicts dynamic behavior for different garments.
We develop a dynamic clothing deformation estimator that integrates frequency-controllable attention mechanisms with long short-term memory.
arXiv Detail & Related papers (2023-08-05T09:09:50Z) - HOOD: Hierarchical Graphs for Generalized Modelling of Clothing Dynamics [84.29846699151288]
Our method is agnostic to body shape and applies to tight-fitting garments as well as loose, free-flowing clothing.
As one key contribution, we propose a hierarchical message-passing scheme that efficiently propagates stiff stretching modes.
arXiv Detail & Related papers (2022-12-14T14:24:00Z) - Multiscale Mesh Deformation Component Analysis with Attention-based
Autoencoders [49.62443496989065]
We propose a novel method to extract multiscale deformation components automatically with a stacked attention-based autoencoder.
The attention mechanism is designed to learn to softly weight multi-scale deformation components in active deformation regions.
With our method, the user can edit shapes in a coarse-to-fine fashion which facilitates effective modeling of new shapes.
arXiv Detail & Related papers (2020-12-04T08:30:57Z) - Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z) - Learning Deformable Image Registration from Optimization: Perspective,
Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
arXiv Detail & Related papers (2020-04-30T03:23:45Z) - Fine-grained Image-to-Image Transformation towards Visual Recognition [102.51124181873101]
We aim at transforming an image with a fine-grained category to synthesize new images that preserve the identity of the input image.
We adopt a model based on generative adversarial networks to disentangle the identity related and unrelated factors of an image.
Experiments on the CompCars and Multi-PIE datasets demonstrate that our model preserves the identity of the generated images much better than the state-of-the-art image-to-image transformation models.
arXiv Detail & Related papers (2020-01-12T05:26:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.