Towards Multi-Layered 3D Garments Animation
- URL: http://arxiv.org/abs/2305.10418v1
- Date: Wed, 17 May 2023 17:53:04 GMT
- Title: Towards Multi-Layered 3D Garments Animation
- Authors: Yidi Shao, Chen Change Loy, Bo Dai
- Abstract summary: Existing approaches mostly focus on single-layered garments driven only by human bodies and struggle to handle general scenarios.
We propose a novel data-driven method, called LayersNet, to model garment-level animations as particle-wise interactions in a micro physics system.
Our experiments show that LayersNet achieves superior performance both quantitatively and qualitatively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mimicking realistic dynamics in 3D garment animations is a challenging task
due to the complex nature of multi-layered garments and the variety of
external forces involved. Existing approaches mostly focus on single-layered
garments driven only by human bodies and struggle to handle general
scenarios. In this
paper, we propose a novel data-driven method, called LayersNet, to model
garment-level animations as particle-wise interactions in a micro physics
system. We improve simulation efficiency by representing garments as
patch-level particles in a two-level structural hierarchy. Moreover, we
introduce a novel Rotation Equivalent Transformation that leverages the
rotation invariance and additivity of physics systems to better model
external forces. To verify the effectiveness of our approach and bridge the
gap between
experimental environments and real-world scenarios, we introduce a new
challenging dataset, D-LAYERS, containing 700K frames of dynamics of 4,900
different combinations of multi-layered garments driven by both human bodies
and randomly sampled wind. Our experiments show that LayersNet achieves
superior performance both quantitatively and qualitatively. We will make the
dataset and code publicly available at
https://mmlab-ntu.github.io/project/layersnet/index.html.
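The abstract leaves the patch construction unspecified, so the following is only a minimal Python sketch of the two-level idea: garment vertices are grouped into patches (here with a plain k-means over rest positions, a hypothetical stand-in for LayersNet's actual patch scheme), and each patch is summarized by one coarse particle state.
```python
import numpy as np

def build_patch_hierarchy(rest_vertices, num_patches, iters=10, seed=0):
    """Group garment vertices into patches and summarize each patch as one
    coarse particle (its centroid). A hypothetical stand-in for LayersNet's
    unspecified patch construction."""
    rng = np.random.default_rng(seed)
    centers = rest_vertices[rng.choice(len(rest_vertices), num_patches, replace=False)]
    for _ in range(iters):
        # Assign every vertex to its nearest patch center.
        d2 = ((rest_vertices[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # Move each center to the mean of its assigned vertices.
        for k in range(num_patches):
            members = rest_vertices[labels == k]
            if len(members) > 0:
                centers[k] = members.mean(axis=0)
    return labels, centers

def patch_states(vertices, velocities, labels, num_patches):
    """Aggregate vertex positions/velocities into per-patch particle states."""
    pos = np.zeros((num_patches, 3))
    vel = np.zeros((num_patches, 3))
    for k in range(num_patches):
        mask = labels == k
        if mask.any():  # a patch can end up empty after k-means
            pos[k] = vertices[mask].mean(axis=0)
            vel[k] = velocities[mask].mean(axis=0)
    return pos, vel
```
In LayersNet proper, a learned interaction network would advance these patch particles and a decoder would restore vertex-level detail; neither component is reproduced here.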
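Likewise, the Rotation Equivalent Transformation is described only through its two physical priors, rotation invariance and additivity. The sketch below shows one common way to encode such priors, under that assumption (`net` is a hypothetical per-vertex response predictor, not LayersNet's actual architecture): rotate the scene into a canonical frame where the external force points along +z, query the network there, rotate the prediction back, and sum the responses to individual forces.
```python
import numpy as np

def rotation_to_z(direction):
    """Rotation matrix mapping `direction` onto the +z axis (Rodrigues form)."""
    d = direction / np.linalg.norm(direction)
    z = np.array([0.0, 0.0, 1.0])
    c = float(d @ z)
    if np.isclose(c, -1.0):  # d anti-parallel to z: rotate pi about the x axis
        return np.diag([1.0, -1.0, -1.0])
    v = np.cross(d, z)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

def equivariant_response(net, positions, force):
    """Query `net` in a canonical frame where `force` points along +z,
    then rotate the predicted per-vertex displacements back to world frame."""
    R = rotation_to_z(force)
    canonical = net(positions @ R.T, np.linalg.norm(force))
    return canonical @ R

def additive_response(net, positions, forces):
    """Additivity prior: approximate the response to several external forces
    (e.g. gravity plus sampled wind) by summing per-force responses."""
    return sum(equivariant_response(net, positions, f) for f in forces)
```
Because the network only ever sees the force magnitude and canonically rotated positions, its prediction transforms consistently when the force direction is realigned (up to the residual rotation about the force axis, which a full treatment would also fix); the final `sum` encodes the additivity prior.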
Related papers
- PICA: Physics-Integrated Clothed Avatar
We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physics-accurate dynamics, even for loose clothing.
Our method achieves high-fidelity rendering of human bodies in complex and novel driving poses, significantly outperforming previous methods under the same settings.
arXiv Detail & Related papers (2024-07-07T10:23:21Z)
- Scaling Up Dynamic Human-Scene Interaction Modeling
TRUMANS is the most comprehensive motion-captured HSI dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- NVFi: Neural Velocity Fields for 3D Physics Learning from Dynamic Videos
We propose to simultaneously learn the geometry, appearance, and physical velocity of 3D scenes only from video frames.
We conduct extensive experiments on multiple datasets, demonstrating the superior performance of our method over all baselines.
arXiv Detail & Related papers (2023-12-11T14:07:31Z)
- MoDi: Unconditional Motion Synthesis from Diverse Data
We present MoDi, an unconditional generative model that synthesizes diverse motions.
Our model is trained in a completely unsupervised setting from a diverse, unstructured and unlabeled motion dataset.
We show that despite the lack of any structure in the dataset, the latent space can be semantically clustered.
arXiv Detail & Related papers (2022-06-16T09:06:25Z)
- Real-time Deep Dynamic Characters
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state-of-the-art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)
- Hindsight for Foresight: Unsupervised Structured Dynamics Models from Physical Interaction
A key challenge for an agent learning to interact with the world is to reason about the physical properties of objects.
We propose a novel approach for modeling the dynamics of a robot's interactions directly from unlabeled 3D point clouds and images.
arXiv Detail & Related papers (2020-08-02T11:04:49Z)
- ThreeDWorld: A Platform for Interactive Multi-Modal Physical Simulation
ThreeDWorld (TDW) is a platform for interactive multi-modal physical simulation.
TDW enables simulation of high-fidelity sensory data and physical interactions between mobile agents and objects in rich 3D environments.
We present initial experiments enabled by TDW in emerging research directions in computer vision, machine learning, and cognitive science.
arXiv Detail & Related papers (2020-07-09T17:33:27Z)