N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks
- URL: http://arxiv.org/abs/2112.06397v1
- Date: Mon, 13 Dec 2021 03:13:11 GMT
- Title: N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks
- Authors: Yudi Li and Min Tang and Yun Yang and Zi Huang and Ruofeng Tong and
Shuangcai Yang and Yao Li and Dinesh Manocha
- Abstract summary: We present a novel mesh-based learning approach (N-Cloth) for plausible 3D cloth deformation prediction.
We use graph convolution to transform the cloth and object meshes into a latent space to reduce the non-linearity in the mesh space.
Our approach can handle complex cloth meshes with up to $100$K triangles and scenes with various objects corresponding to SMPL humans, Non-SMPL humans, or rigid bodies.
- Score: 69.94313958962165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel mesh-based learning approach (N-Cloth) for plausible 3D
cloth deformation prediction. Our approach is general and can handle cloth or
obstacles represented by triangle meshes with arbitrary topology. We use graph
convolution to transform the cloth and object meshes into a latent space to
reduce the non-linearity in the mesh space. Our network can predict the target
3D cloth mesh deformation based on the state of the initial cloth mesh template
and the target obstacle mesh. Our approach can handle complex cloth meshes with
up to $100$K triangles and scenes with various objects corresponding to SMPL
humans, Non-SMPL humans, or rigid bodies. In practice, our approach
demonstrates good temporal coherence between successive input frames and can be
used to generate plausible cloth simulation at $30-45$ fps on an NVIDIA GeForce
RTX 3090 GPU. We highlight its benefits over prior learning-based methods and
physically-based cloth simulators.
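The abstract's core mechanism is a graph convolution that maps the cloth and obstacle triangle meshes into a latent space. Below is a minimal PyTorch sketch of one such layer, assuming mean aggregation over one-ring neighbors; the class name, feature sizes, and aggregation rule are illustrative assumptions, not the released N-Cloth architecture.

import torch
import torch.nn as nn

class MeshGraphConv(nn.Module):
    # One graph-convolution layer on a triangle mesh: transform each vertex's
    # own features and the mean of its one-ring neighbors' features.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, edges):
        # x: (V, in_dim) per-vertex features; edges: (E, 2) long tensor of
        # directed vertex pairs extracted from the mesh.
        src, dst = edges[:, 0], edges[:, 1]
        neigh = torch.zeros_like(x).index_add_(0, dst, x[src])
        deg = torch.zeros(x.shape[0], 1).index_add_(0, dst, torch.ones(len(dst), 1))
        return torch.relu(self.w_self(x) + self.w_neigh(neigh / deg.clamp(min=1)))

# Hypothetical usage: encode positions + normals into a 64-d latent per vertex.
# layer = MeshGraphConv(6, 64); z = layer(vertex_features, edge_index)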
Related papers
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges 3D Gaussians and meshes for modeling and rendering dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
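One way to read "bridges" is at composition time: per pixel, fragments from rasterized mesh triangles and splatted 3D Gaussians can be merged into a single depth-ordered alpha-composite. The sketch below is an assumption about that merge step, not GauMesh's actual renderer.

def composite(fragments):
    # fragments: list of (depth, rgb, alpha) tuples for one pixel, coming from
    # either primitive type; composite front to back in depth order.
    rgb, transmittance = [0.0, 0.0, 0.0], 1.0
    for _, color, alpha in sorted(fragments, key=lambda f: f[0]):
        rgb = [c + transmittance * alpha * ci for c, ci in zip(rgb, color)]
        transmittance *= 1.0 - alpha
    return rgb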
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- Mesh Neural Cellular Automata [62.101063045659906]
We propose Mesh Neural Cellular Automata (MeshNCA), a method that directly synthesizes dynamic textures on 3D meshes without requiring any UV maps.
Trained only on an Icosphere mesh, MeshNCA shows remarkable test-time generalization and can synthesize textures on unseen meshes in real time.
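A cellular-automaton update on a mesh typically means: every vertex carries a state vector, perceives its one-ring neighbors, and applies a small shared network as the local update rule. A hedged PyTorch sketch under those assumptions (not the authors' code); iterating the step yields the dynamic texture, with a few state channels decoded as color.

import torch
import torch.nn as nn

class MeshNCAStep(nn.Module):
    def __init__(self, state_dim=16, hidden=64):
        super().__init__()
        self.rule = nn.Sequential(nn.Linear(2 * state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, state_dim))

    def forward(self, state, edges):
        # state: (V, state_dim); edges: (E, 2) long tensor of vertex pairs.
        src, dst = edges[:, 0], edges[:, 1]
        neigh = torch.zeros_like(state).index_add_(0, dst, state[src])
        deg = torch.zeros(state.shape[0], 1).index_add_(0, dst, torch.ones(len(dst), 1))
        perception = torch.cat([state, neigh / deg.clamp(min=1)], dim=1)
        return state + self.rule(perception)  # residual local rule, applied repeatedly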
arXiv Detail & Related papers (2023-11-06T01:54:37Z)
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
To register 2D pixels across different frames, we establish correspondences between canonical feature embeddings that encode 3D points within the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
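NeuDBS builds on classical dual quaternion blend skinning (DQB), which blends bone transforms as unit dual quaternions rather than matrices and so avoids the collapsing artifacts of linear blend skinning. A minimal NumPy sketch of plain DQB follows; the neural components of NeuDBS are not reproduced here.

import numpy as np

def qmul(a, b):
    # Hamilton product of quaternions stored as (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dqb(points, weights, rotations, translations):
    # rotations: (B, 4) unit quaternions; translations: (B, 3); weights: (V, B).
    conj = np.array([1.0, -1.0, -1.0, -1.0])
    duals = np.array([0.5 * qmul(np.concatenate([[0.0], t]), r)
                      for r, t in zip(rotations, translations)])
    out = np.empty_like(points)
    for i, (p, w) in enumerate(zip(points, weights)):
        qr, qd = w @ rotations, w @ duals      # blend bone transforms per vertex
        n = np.linalg.norm(qr)
        qr, qd = qr / n, qd / n                # renormalize the blended transform
        t = 2.0 * qmul(qd, qr * conj)[1:]      # recover translation from dual part
        out[i] = qmul(qmul(qr, np.concatenate([[0.0], p])), qr * conj)[1:] + t
    return out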
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
- SAGA: Spectral Adversarial Geometric Attack on 3D Meshes [13.84270434088512]
A triangular mesh is one of the most popular 3D data representations.
We propose a novel framework for a geometric adversarial attack on a 3D mesh autoencoder.
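In this setting, "spectral" usually means confining the perturbation to low-frequency eigenvectors of the mesh Laplacian so the adversarial offset stays smooth and hard to spot. A NumPy sketch under that assumption (not the authors' code); the spectral coefficients would then be optimized against the autoencoder's objective.

import numpy as np

def laplacian_basis(num_verts, edges, k=20):
    # Combinatorial graph Laplacian L = D - A; its smallest-eigenvalue
    # eigenvectors are the smoothest functions on the mesh.
    A = np.zeros((num_verts, num_verts))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    _, vecs = np.linalg.eigh(L)
    return vecs[:, :k]

def perturb(verts, basis, coeffs):
    # coeffs: (k, 3) free parameters of the attack; the offset is band-limited.
    return verts + basis @ coeffs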
arXiv Detail & Related papers (2022-11-24T19:29:04Z)
- Surface-Aligned Neural Radiance Fields for Controllable 3D Human Synthesis [4.597864989500202]
We propose a new method for reconstructing implicit 3D human models from sparse multi-view RGB videos.
Our method defines the neural scene representation on the mesh surface points and signed distances from the surface of a human body mesh.
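Concretely, a query point can be re-expressed as its nearest point on the body mesh plus a signed height along the surface normal, and the radiance field then conditions on that pair instead of the raw coordinate. The sketch below uses a brute-force nearest-vertex projection purely for illustration; the paper's surface projection is more careful.

import numpy as np

def surface_align(x, verts, normals):
    # x: (3,) query point; verts/normals: (V, 3) body mesh vertices and normals.
    i = int(np.argmin(np.linalg.norm(verts - x, axis=1)))
    h = float(np.dot(x - verts[i], normals[i]))  # signed distance along the normal
    return verts[i], h                           # (anchor on the surface, height)

# The radiance MLP is then queried as f(anchor, h, view_dir) rather than f(x, view_dir).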
arXiv Detail & Related papers (2022-01-05T16:25:32Z)
- Mesh Draping: Parametrization-Free Neural Mesh Transfer [92.55503085245304]
Mesh Draping is a neural method for transferring existing mesh structure from one shape to another.
We show that by leveraging gradually increasing frequencies to guide the neural optimization, we are able to achieve stable and high quality mesh transfer.
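"Gradually increasing frequencies" is consistent with a progressive positional encoding, in which high-frequency Fourier features are masked early in the optimization and faded in over time, so coarse alignment settles before fine detail is fit. A NumPy sketch under that assumption:

import numpy as np

def progressive_encoding(x, num_bands, t):
    # x: (N, 3) points; t in [0, 1] is optimization progress. Band b fades in
    # once t passes b / num_bands, releasing one octave at a time.
    feats = []
    for b in range(num_bands):
        alpha = np.clip(t * num_bands - b, 0.0, 1.0)
        feats += [alpha * np.sin(2.0**b * np.pi * x),
                  alpha * np.cos(2.0**b * np.pi * x)]
    return np.concatenate(feats, axis=1)  # (N, 6 * num_bands)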
arXiv Detail & Related papers (2021-10-11T17:24:52Z)
- A Deep Emulator for Secondary Motion of 3D Characters [24.308088194689415]
We present a learning-based approach to enhance skinning-based animations of 3D characters with vivid secondary motion effects.
We design a neural network that encodes each local patch of a character simulation mesh.
Being a local method, our network generalizes to arbitrarily shaped 3D character meshes at test time.
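The local-patch formulation can be sketched as: encode each vertex from its own dynamic state plus its one-ring neighbors' states, and let a shared MLP predict a per-vertex secondary-motion offset; sharing weights across patches is what makes the network independent of global mesh shape. Feature sizes and the padding scheme below are assumptions, not the authors' network.

import torch
import torch.nn as nn

class PatchEmulator(nn.Module):
    def __init__(self, feat_dim=6, max_neighbors=8, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim * (1 + max_neighbors), hidden),
                                 nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, feats, neighbor_idx):
        # feats: (V, feat_dim) per-vertex state; neighbor_idx: (V, max_neighbors)
        # one-ring indices, padded with the vertex's own index when smaller.
        patch = torch.cat([feats, feats[neighbor_idx].flatten(1)], dim=1)
        return self.mlp(patch)  # (V, 3) secondary-motion displacement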
arXiv Detail & Related papers (2021-03-01T19:13:35Z)
- Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
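"In any direction" distinguishes this from scalar displacement along the normal: the output head predicts an unconstrained 3D offset per vertex. A tiny sketch of such a head, with an assumed local-feature width of 32:

import torch
import torch.nn as nn

# verts: (V, 3); local_feats: (V, 32) features gathered from each vertex's
# neighborhood on the reference model (feature extraction not shown).
displace = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 3))
# new_verts = verts + displace(local_feats)  # offsets are free 3D vectors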
arXiv Detail & Related papers (2020-06-30T19:36:38Z)
- MeshWalker: Deep Mesh Understanding by Random Walks [19.594977587417247]
We look at the most popular representation of 3D shapes in computer graphics - a triangular mesh - and ask how it can be utilized within deep learning.
This paper proposes a very different approach, termed MeshWalker, to learn the shape directly from a given mesh.
We show that our approach achieves state-of-the-art results for two fundamental shape analysis tasks.
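A mesh random walk can be sketched as: start at a random vertex, repeatedly hop to a random neighbor, and record each step's 3D displacement as a translation-invariant token for a recurrent classifier; many walks are aggregated at inference. The sketch below simplifies (MeshWalker also tries to avoid revisiting vertices):

import random
import numpy as np

def random_walk(verts, adjacency, length):
    # verts: (V, 3) array; adjacency: dict mapping a vertex id to its neighbors.
    v = random.randrange(len(verts))
    steps = []
    for _ in range(length):
        nxt = random.choice(adjacency[v])
        steps.append(verts[nxt] - verts[v])  # relative step, invariant to translation
        v = nxt
    return np.stack(steps)  # (length, 3) sequence fed to the RNN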
arXiv Detail & Related papers (2020-06-09T15:35:41Z)