CTSN: Predicting Cloth Deformation for Skeleton-based Characters with a
Two-stream Skinning Network
- URL: http://arxiv.org/abs/2305.18808v1
- Date: Tue, 30 May 2023 07:48:47 GMT
- Title: CTSN: Predicting Cloth Deformation for Skeleton-based Characters with a
Two-stream Skinning Network
- Authors: Yudi Li and Min Tang and Yun Yang and Ruofeng Tong and Shuangcai Yang
and Yao Li and Bailin An and Qilong Kou
- Abstract summary: We present a novel learning method to predict the cloth deformation for skeleton-based characters with a two-stream network.
Characters processed in our approach are not limited to humans; they can also be other skeleton-based representations of non-human targets such as fish or pets.
- Score: 11.10457709073575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel learning method to predict the cloth deformation for
skeleton-based characters with a two-stream network. The characters processed
in our approach are not limited to humans; they can also be other
skeleton-based representations of non-human targets such as fish or pets. We
use a novel network architecture that consists of skeleton-based and
mesh-based residual networks, which learn the coarse and wrinkle features as
the overall residual from
the template cloth mesh. Our network is used to predict the deformation for
loose or tight-fitting clothing or dresses. We keep the memory footprint of
our network low, thereby reducing storage and computational requirements. In
practice, predicting a single cloth mesh for a
skeleton-based character takes about 7 milliseconds on an NVIDIA GeForce RTX
3090 GPU. Compared with prior methods, our network can generate fine
deformation results with details and wrinkles.
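The abstract describes the two streams only at a high level. As a rough
illustration of the idea (not the authors' implementation: the module names,
layer sizes, and plain-MLP streams below are all assumptions), one stream maps
the skeleton pose to a coarse per-vertex displacement, the other maps the
coarsely deformed mesh to a fine wrinkle displacement, and both are added to
the template cloth mesh as the overall residual:

```python
# Hypothetical PyTorch sketch of a two-stream residual skinning network.
# Both streams are plain MLPs purely to illustrate the data flow; the
# actual CTSN uses skeleton-based and mesh-based residual networks.
import torch
import torch.nn as nn

class TwoStreamClothSketch(nn.Module):
    def __init__(self, num_joints: int, num_verts: int, hidden: int = 256):
        super().__init__()
        # Stream 1: skeleton pose -> coarse per-vertex displacement.
        self.skeleton_stream = nn.Sequential(
            nn.Linear(num_joints * 3, hidden), nn.ReLU(),
            nn.Linear(hidden, num_verts * 3),
        )
        # Stream 2: coarsely deformed cloth -> fine wrinkle displacement.
        self.mesh_stream = nn.Sequential(
            nn.Linear(num_verts * 3, hidden), nn.ReLU(),
            nn.Linear(hidden, num_verts * 3),
        )

    def forward(self, pose: torch.Tensor, template: torch.Tensor):
        # pose: (B, num_joints, 3) axis-angle joint rotations.
        # template: (num_verts, 3) rest-pose template cloth mesh.
        b = pose.shape[0]
        coarse = self.skeleton_stream(pose.flatten(1)).view(b, -1, 3)
        coarse_mesh = template.unsqueeze(0) + coarse
        wrinkle = self.mesh_stream(coarse_mesh.flatten(1)).view(b, -1, 3)
        # Predicted cloth = template + overall residual (coarse + wrinkle).
        return coarse_mesh + wrinkle
```

Predicting a residual from the template rather than absolute vertex positions
keeps the network output small and centered, which is one plausible reason a
compact network can still recover fine wrinkles.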
Related papers
- Neural Metamorphosis [72.88137795439407]
This paper introduces a new learning paradigm termed Neural Metamorphosis (NeuMeta), which aims to build self-morphable neural networks.
NeuMeta directly learns the continuous weight manifold of neural networks.
It sustains full-size performance even at a 75% compression rate.
arXiv Detail & Related papers (2024-10-10T14:49:58Z) - SkinningNet: Two-Stream Graph Convolutional Neural Network for Skinning
Prediction of Synthetic Characters [0.8629912408966145]
SkinningNet is an end-to-end two-stream graph neural network architecture that computes skinning weights from an input mesh and its associated skeleton (a sketch of how such weights are applied appears after this list).
The proposed method extracts this information in an end-to-end learnable fashion by jointly learning the best relationship between mesh vertices and skeleton joints.
arXiv Detail & Related papers (2022-03-09T14:26:10Z) - N-Cloth: Predicting 3D Cloth Deformation with Mesh-Based Networks [69.94313958962165]
We present a novel mesh-based learning approach (N-Cloth) for plausible 3D cloth deformation prediction.
We use graph convolution to transform the cloth and object meshes into a latent space to reduce the non-linearity in the mesh space.
Our approach can handle complex cloth meshes with up to 100K triangles and scenes with various objects corresponding to SMPL humans, non-SMPL humans, or rigid bodies.
arXiv Detail & Related papers (2021-12-13T03:13:11Z) - Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z) - Point-Cloud Deep Learning of Porous Media for Permeability Prediction [0.0]
We propose a novel deep learning framework for predicting permeability of porous media from their digital images.
We model the boundary between solid matrix and pore spaces as point clouds and feed them as inputs to a neural network based on the PointNet architecture.
arXiv Detail & Related papers (2021-07-18T22:59:21Z) - Learning Skeletal Articulations with Neural Blend Shapes [57.879030623284216]
We develop a neural technique for articulating 3D characters using enveloping with a pre-defined skeletal structure.
Our framework learns to rig and skin characters with the same articulation structure.
We propose neural blend shapes which improve the deformation quality in the joint regions.
arXiv Detail & Related papers (2021-05-06T05:58:13Z) - GarNet++: Improving Fast and Accurate Static3D Cloth Draping by
Curvature Loss [89.96698250086064]
We introduce a two-stream deep network model that produces a visually plausible draping of a template cloth on virtual 3D bodies.
Our network learns to mimic a Physics-Based Simulation (PBS) method while requiring two orders of magnitude less computation time.
We validate our framework on four garment types for various body shapes and poses.
arXiv Detail & Related papers (2020-07-20T13:40:15Z) - TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape
and Garment Style [43.99803542307155]
We present TailorNet, a neural model which predicts clothing deformation in 3D as a function of three factors: pose, shape and style.
Our hypothesis is that (even non-linear) combinations of examples smooth out high-frequency components such as fine wrinkles.
Several experiments demonstrate TailorNet produces more realistic results than prior work, and even generates temporally coherent deformations.
arXiv Detail & Related papers (2020-03-10T08:49:51Z) - Neural Human Video Rendering by Learning Dynamic Textures and
Rendering-to-Video Translation [99.64565200170897]
We propose a novel human video synthesis method by explicitly disentangling the learning of time-coherent fine-scale details from the embedding of the human in 2D screen space.
We show several applications of our approach, such as human reenactment and novel view synthesis from monocular video, where we show significant improvement over the state of the art both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-01-14T18:06:27Z)