Dynamic Neural Garments
- URL: http://arxiv.org/abs/2102.11811v1
- Date: Tue, 23 Feb 2021 17:21:21 GMT
- Title: Dynamic Neural Garments
- Authors: Meng Zhang, Duygu Ceylan, Tuanfeng Wang, Niloy J. Mitra
- Abstract summary: We present a solution that takes in body joint motion to directly produce realistic dynamic garment image sequences.
Specifically, given the target joint motion sequence of an avatar, we propose dynamic neural garments to jointly simulate and render plausible dynamic garment appearance.
- Score: 45.833166320896716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A vital task of the wider digital human effort is the creation of realistic
garments on digital avatars, both in the form of characteristic fold patterns
and wrinkles in static frames as well as richness of garment dynamics under
avatars' motion. The existing workflow of modeling, simulation, and rendering
closely replicates the physics behind real garments, but it is tedious and
requires repeating most of the workflow whenever the characters' motion,
camera angle, or garment size changes. Although data-driven solutions exist, they
either focus on static scenarios or only handle dynamics of tight garments. We
present a solution that, at test time, takes in body joint motion to directly
produce realistic dynamic garment image sequences. Specifically, given the
target joint motion sequence of an avatar, we propose dynamic neural garments
to jointly simulate and render plausible dynamic garment appearance from an
unseen viewpoint. Technically, our solution generates a coarse garment proxy
sequence, learns deep dynamic features attached to this template, and neurally
renders the features to produce appearance changes such as folds, wrinkles, and
silhouettes. We demonstrate generalization behavior to both unseen motion and
unseen camera views. Further, our network can be fine-tuned to adapt to new
body shapes and/or background images. We also provide comparisons against
existing neural rendering and image sequence translation approaches, and report
clear quantitative improvements.
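To make the pipeline described above concrete, here is a minimal sketch in PyTorch, assuming hypothetical module and tensor names (CoarseProxyPredictor, DynamicFeatureEncoder, NeuralRenderer, feat_dim, a velocity-based dynamics cue, and a stand-in rasterization step); it only illustrates the three stages the abstract names and is not the authors' implementation.
```python
# A minimal sketch, not the authors' code: (1) predict a coarse garment proxy
# sequence from body joint motion, (2) attach learned dynamic features to that
# template, and (3) neurally render the features into garment images.
# All module, parameter, and tensor names are hypothetical placeholders.
import torch
import torch.nn as nn


class CoarseProxyPredictor(nn.Module):
    """Maps a window of body-joint poses to coarse garment template vertices."""

    def __init__(self, num_joints: int, num_verts: int, hidden: int = 256):
        super().__init__()
        self.gru = nn.GRU(input_size=num_joints * 3, hidden_size=hidden, batch_first=True)
        self.to_verts = nn.Linear(hidden, num_verts * 3)

    def forward(self, joint_motion: torch.Tensor) -> torch.Tensor:
        # joint_motion: (batch, time, num_joints * 3)
        feats, _ = self.gru(joint_motion)
        verts = self.to_verts(feats)                    # (batch, time, num_verts * 3)
        return verts.view(*verts.shape[:2], -1, 3)      # (batch, time, num_verts, 3)


class DynamicFeatureEncoder(nn.Module):
    """Attaches motion-dependent deep features to each proxy vertex."""

    def __init__(self, num_verts: int, feat_dim: int = 16):
        super().__init__()
        # Static per-vertex features modulated by a simple dynamics cue (velocity).
        self.static_feats = nn.Parameter(torch.randn(num_verts, feat_dim))
        self.motion_mlp = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )

    def forward(self, proxy_verts: torch.Tensor) -> torch.Tensor:
        # proxy_verts: (batch, time, num_verts, 3)
        velocity = proxy_verts[:, 1:] - proxy_verts[:, :-1]
        velocity = torch.cat([velocity[:, :1], velocity], dim=1)  # pad first frame
        return self.static_feats + self.motion_mlp(velocity)      # (batch, time, num_verts, feat_dim)


class NeuralRenderer(nn.Module):
    """Decodes a screen-space feature image into an RGB garment image.

    A full system would first rasterize or splat the per-vertex features into
    the target camera view to obtain the feature image; that step is omitted here.
    """

    def __init__(self, feat_dim: int = 16):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_dim, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feature_image: torch.Tensor) -> torch.Tensor:
        # feature_image: (batch, feat_dim, H, W) -> (batch, 3, H, W)
        return self.decoder(feature_image)


if __name__ == "__main__":
    B, T, J, V = 2, 30, 24, 500
    motion = torch.randn(B, T, J * 3)
    proxy = CoarseProxyPredictor(J, V)(motion)      # coarse proxy sequence
    feats = DynamicFeatureEncoder(V)(proxy)         # dynamic features on the template
    # Stand-in feature image; rasterization from (proxy, feats) is omitted.
    image = NeuralRenderer()(torch.randn(B, 16, 64, 64))
    print(proxy.shape, feats.shape, image.shape)
```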
Related papers
- Garment Animation NeRF with Color Editing [6.357662418254495]
We propose a novel approach to synthesize garment animations from body motion sequences without the need for an explicit garment proxy.
Our approach infers garment dynamic features from body motion, providing a preliminary overview of garment structure.
We demonstrate the generalizability of our method across unseen body motions and camera views, ensuring detailed structural consistency.
arXiv Detail & Related papers (2024-07-29T08:17:05Z)
- PICA: Physics-Integrated Clothed Avatar [30.277983921620663]
We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physics-accurate dynamics, even for loose clothing.
Our method achieves high-fidelity rendering of human bodies in complex and novel driving poses, significantly outperforming previous methods under the same settings.
arXiv Detail & Related papers (2024-07-07T10:23:21Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method is able to render natural garment dynamics that deviate strongly from the body, and to generalize well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- Motion Guided Deep Dynamic 3D Garments [45.711340917768766]
We focus on motion guided dynamic 3D garments, especially for loose garments.
In a data-driven setup, we first learn a generative space of plausible garment geometries.
We show improvements over multiple state-of-the-art alternatives.
arXiv Detail & Related papers (2022-09-23T07:17:46Z)
- Drivable Volumetric Avatars using Texel-Aligned Features [52.89305658071045]
Photorealistic telepresence requires both high-fidelity body modeling and faithful driving to enable dynamically synthesized appearance.
We propose an end-to-end framework that addresses two core challenges in modeling and driving full-body avatars of real people.
arXiv Detail & Related papers (2022-07-20T09:28:16Z)
- Dressing Avatars: Deep Photorealistic Appearance for Physically Simulated Clothing [49.96406805006839]
We introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data.
Our key contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations.
arXiv Detail & Related papers (2022-06-30T17:58:20Z)
- Learning Motion-Dependent Appearance for High-Fidelity Rendering of Dynamic Humans from a Single Camera [49.357174195542854]
A key challenge of learning the dynamics of the appearance lies in the requirement of a prohibitively large amount of observations.
We show that our method can generate a temporally coherent video of dynamic humans for unseen body poses and novel views given a single view video.
arXiv Detail & Related papers (2022-03-24T00:22:03Z)
- Real-time Deep Dynamic Characters [95.5592405831368]
We propose a deep videorealistic 3D human character model displaying highly realistic shape, motion, and dynamic appearance.
We use a novel graph convolutional network architecture to enable motion-dependent deformation learning of body and clothing.
We show that our model creates motion-dependent surface deformations, physically plausible dynamic clothing deformations, as well as video-realistic surface textures at a much higher level of detail than previous state of the art approaches.
arXiv Detail & Related papers (2021-05-04T23:28:55Z)