DeePSD: Automatic Deep Skinning And Pose Space Deformation For 3D Garment Animation
- URL: http://arxiv.org/abs/2009.02715v2
- Date: Wed, 7 Apr 2021 09:19:24 GMT
- Title: DeePSD: Automatic Deep Skinning And Pose Space Deformation For 3D Garment Animation
- Authors: Hugo Bertiche, Meysam Madadi, Emilio Tylson and Sergio Escalera
- Abstract summary: We present a novel solution to the garment animation problem through deep learning.
Our contribution allows animating any template outfit with arbitrary topology and geometric complexity.
- Score: 36.853993692722035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel solution to the garment animation problem through deep
learning. Our contribution allows animating any template outfit with arbitrary
topology and geometric complexity. Recent works develop models for garment
editing, resizing and animation at the same time by leveraging the support body
model (encoding garments as body homotopies). This leads to complex engineering
solutions that suffer from scalability, applicability and compatibility issues.
By limiting our scope to garment animation only, we are able to propose a simple
model that can animate any outfit, independently of its topology, vertex order
or connectivity. Our proposed architecture maps outfits to animated 3D models
in the standard format for 3D animation (blend weight and blend shape
matrices), automatically providing compatibility with any graphics engine.
We also propose a methodology that complements supervised learning with
unsupervised, physically based learning that implicitly solves collisions and
enhances cloth quality.
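The output format named above (blend weights plus blend shape matrices) is the standard skinning pipeline: pose space deformation (PSD) first adds pose-dependent corrective offsets to the rest-pose template, then linear blend skinning (LBS) poses the result using the per-vertex blend weights. The following is a minimal NumPy sketch of how such predicted quantities would be consumed at animation time; the function name, tensor layouts and the flattened pose feature vector are illustrative assumptions, not the paper's exact interface.

```python
import numpy as np

def animate_garment(template, blend_weights, blend_shapes, pose, joint_transforms):
    """Apply pose space deformation, then linear blend skinning.

    template:         (V, 3) rest-pose garment vertices
    blend_weights:    (V, J) per-vertex skinning weights (rows sum to 1)
    blend_shapes:     (V, 3, P) corrective-offset basis; hypothetical layout
    pose:             (P,) pose feature vector, e.g. flattened joint rotations
    joint_transforms: (J, 4, 4) world transforms of the J skeleton joints
    """
    # PSD: add pose-dependent corrective offsets to the rest-pose template.
    corrected = template + np.einsum('vcp,p->vc', blend_shapes, pose)

    # LBS: each vertex is moved by a weighted blend of joint transforms.
    homo = np.concatenate([corrected, np.ones((len(corrected), 1))], axis=1)  # (V, 4)
    vertex_T = np.einsum('vj,jab->vab', blend_weights, joint_transforms)      # (V, 4, 4)
    return np.einsum('vab,vb->va', vertex_T, homo)[:, :3]
```

Because both predicted quantities plug into this standard pipeline, any engine that implements skinning can replay the animation without running the network per frame, which is what makes the output compatible with any graphics engine.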
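The unsupervised, physically based part of the training can be pictured as self-supervised loss terms evaluated on the predicted cloth. The PyTorch sketch below shows two terms commonly used for this purpose, an edge-strain penalty and a garment-body collision penalty; the specific losses, the margin value and the precomputed nearest-body-point inputs are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def physics_losses(verts, edges, rest_edge_len, body_points, body_normals, eps=2e-3):
    """Illustrative unsupervised cloth losses for one predicted frame.

    verts:         (V, 3) predicted garment vertices
    edges:         (E, 2) long tensor of garment mesh edge indices
    rest_edge_len: (E,)   edge lengths of the rest-pose template
    body_points:   (V, 3) nearest body-surface point per garment vertex
    body_normals:  (V, 3) outward body normal at each nearest point
    eps:           margin keeping the cloth slightly outside the body
    """
    # Strain term: penalize edge lengths deviating from the template,
    # discouraging unrealistic stretching or compression.
    edge_vec = verts[edges[:, 0]] - verts[edges[:, 1]]
    loss_edge = ((edge_vec.norm(dim=1) - rest_edge_len) ** 2).mean()

    # Collision term: penalize vertices whose signed distance along the
    # body normal falls below the margin, i.e. vertices inside the body.
    signed_dist = ((verts - body_points) * body_normals).sum(dim=1)
    loss_collision = torch.relu(eps - signed_dist).mean()

    return loss_edge, loss_collision
```

Minimizing the collision term pushes garment vertices to stay at least eps outside the body surface, which is one way an unsupervised objective can implicitly solve collisions without ground-truth supervision.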
Related papers
- DiffusedWrinkles: A Diffusion-Based Model for Data-Driven Garment Animation [10.9550231281676]
We present a data-driven method for learning to generate animations of 3D garments using a 2D image diffusion model.
Our approach is able to synthesize high-quality 3D animations for a wide variety of garments and body shapes.
arXiv Detail & Related papers (2025-03-24T06:08:26Z)
- Animating the Uncaptured: Humanoid Mesh Animation with Video Diffusion Models [71.78723353724493]
Animation of humanoid characters is essential in various graphics applications.
We propose an approach to synthesize 4D animated sequences of input static 3D humanoid meshes.
arXiv Detail & Related papers (2025-03-20T10:00:22Z)
- Make-It-Animatable: An Efficient Framework for Authoring Animation-Ready 3D Characters [86.13319549186959]
We present Make-It-Animatable, a novel data-driven method to make any 3D humanoid model ready for character animation in less than one second.
Our framework generates high-quality blend weights, bones, and pose transformations.
Compared to existing methods, our approach demonstrates significant improvements in both quality and speed.
arXiv Detail & Related papers (2024-11-27T10:18:06Z)
- Towards High-Quality 3D Motion Transfer with Realistic Apparel Animation [69.36162784152584]
We present a novel method aiming for high-quality motion transfer with realistic apparel animation.
We propose a data-driven pipeline that learns to disentangle body and apparel deformations via two neural deformation modules.
Our method produces results with superior quality for various types of apparel.
arXiv Detail & Related papers (2024-07-15T22:17:35Z)
- MotionDreamer: Exploring Semantic Video Diffusion features for Zero-Shot 3D Mesh Animation [10.263762787854862]
We propose a technique for automatic re-animation of various 3D shapes based on a motion prior extracted from a video diffusion model.
We leverage an explicit mesh-based representation compatible with existing computer-graphics pipelines.
Our time-efficient zero-shot method achieves superior performance in re-animating a diverse set of 3D shapes.
arXiv Detail & Related papers (2024-05-30T15:30:38Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method is able to render natural garment dynamics that deviate highly from the body, and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- Ponymation: Learning Articulated 3D Animal Motions from Unlabeled Online Videos [47.97168047776216]
We introduce a new method for learning a generative model of articulated 3D animal motions from raw, unlabeled online videos.
Our model learns purely from a collection of unlabeled web video clips, leveraging semantic correspondences distilled from self-supervised image features.
arXiv Detail & Related papers (2023-12-21T06:44:18Z)
- High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos [51.8323369577494]
We propose the first method to recover high-quality animatable dynamic garments from monocular videos without depending on scanned data.
To generate reasonable deformations for various unseen poses, we propose a learnable garment deformation network.
We show that our method can reconstruct high-quality dynamic garments with coherent surface details, which can be easily animated under unseen poses.
arXiv Detail & Related papers (2023-11-02T13:16:27Z)
- PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match real-world behaviour, and generalizes to unseen body motions extracted from a motion capture dataset.
arXiv Detail & Related papers (2022-10-26T21:15:54Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist.
We show that a 3D representation can capture varied topology at high resolution and that it can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
- Going beyond Free Viewpoint: Creating Animatable Volumetric Video of Human Performances [7.7824496657259665]
We present an end-to-end pipeline for the creation of high-quality animatable volumetric video content of human performances.
Semantic enrichment and geometric animation ability are achieved by establishing temporal consistency in the 3D data.
For pose editing, we exploit the captured data as much as possible and kinematically deform the captured frames to fit a desired pose.
arXiv Detail & Related papers (2020-09-02T09:46:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.