Dressing Avatars: Deep Photorealistic Appearance for Physically
Simulated Clothing
- URL: http://arxiv.org/abs/2206.15470v1
- Date: Thu, 30 Jun 2022 17:58:20 GMT
- Title: Dressing Avatars: Deep Photorealistic Appearance for Physically
Simulated Clothing
- Authors: Donglai Xiang, Timur Bagautdinov, Tuur Stuyck, Fabian Prada, Javier
Romero, Weipeng Xu, Shunsuke Saito, Jingfan Guo, Breannan Smith, Takaaki
Shiratori, Yaser Sheikh, Jessica Hodgins, Chenglei Wu
- Abstract summary: We introduce pose-driven avatars with explicit modeling of clothing that exhibit both realistic clothing dynamics and photorealistic appearance learned from real-world data.
Our key contribution is a physically-inspired appearance network, capable of generating photorealistic appearance with view-dependent and dynamic shadowing effects even for unseen body-clothing configurations.
- Score: 49.96406805006839
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite recent progress in developing animatable full-body avatars, realistic
modeling of clothing - one of the core aspects of human self-expression -
remains an open challenge. State-of-the-art physical simulation methods can
generate realistically behaving clothing geometry at interactive rates. Modeling
photorealistic appearance, however, usually requires physically-based rendering
which is too expensive for interactive applications. On the other hand,
data-driven deep appearance models are capable of efficiently producing
realistic appearance, but struggle at synthesizing geometry of highly dynamic
clothing and handling challenging body-clothing configurations. To this end, we
introduce pose-driven avatars with explicit modeling of clothing that exhibit
both realistic clothing dynamics and photorealistic appearance learned from
real-world data. The key idea is to introduce a neural clothing appearance
model that operates on top of explicit geometry: at train time we use
high-fidelity tracking, whereas at animation time we rely on physically
simulated geometry. Our key contribution is a physically-inspired appearance
network, capable of generating photorealistic appearance with view-dependent
and dynamic shadowing effects even for unseen body-clothing configurations. We
conduct a thorough evaluation of our model and demonstrate diverse animation
results on several subjects and different types of clothing. Unlike previous
work on photorealistic full-body avatars, our approach can produce much richer
dynamics and more realistic deformations even for loose clothing. We also
demonstrate that our formulation naturally allows clothing to be used with
avatars of different people while staying fully animatable, thus enabling, for
the first time, photorealistic avatars with novel clothing.
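The dataflow described above (explicit clothing geometry, tracked at train time and physically simulated at animation time, driving a learned view-dependent appearance model) can be summarized in a minimal PyTorch-style sketch. All module names, feature sizes, and pose/view encodings below are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch (assumed names/shapes): a neural appearance model conditioned
# on explicit clothing geometry. At train time the geometry comes from
# high-fidelity tracking; at animation time from a physics simulator.
import torch
import torch.nn as nn

class ClothingAppearanceNet(nn.Module):
    """Hypothetical sketch: maps per-vertex clothing geometry, body pose,
    and view direction to a view-dependent RGB texture with shadowing baked in."""

    def __init__(self, num_verts: int, tex_res: int = 64):
        super().__init__()
        # Encode per-vertex clothing geometry (positions + normals -> a global code).
        self.geom_encoder = nn.Sequential(
            nn.Linear(num_verts * 6, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
        )
        # Decode a UV texture conditioned on the geometry code, pose, and view direction.
        self.tex_decoder = nn.Sequential(
            nn.Linear(256 + 72 + 3, 512), nn.ReLU(),
            nn.Linear(512, tex_res * tex_res * 3),
        )
        self.tex_res = tex_res

    def forward(self, cloth_verts, cloth_normals, body_pose, view_dir):
        # cloth_verts, cloth_normals: (B, V, 3); body_pose: (B, 72); view_dir: (B, 3)
        geom = torch.cat([cloth_verts, cloth_normals], dim=-1).flatten(1)
        code = self.geom_encoder(geom)
        tex = self.tex_decoder(torch.cat([code, body_pose, view_dir], dim=-1))
        # Because geometry and pose are inputs, view-dependent shading and
        # dynamic shadowing can be absorbed into this predicted texture.
        return tex.view(-1, 3, self.tex_res, self.tex_res)

# Train time: feed vertices from high-fidelity clothing tracking.
# Animation time: feed vertices from a physics simulator instead; the
# appearance network interface stays identical.
```

The point of the sketch is the interface: because the appearance model is conditioned on the geometry itself, swapping tracked meshes for simulated ones at animation time requires no change to the network.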
Related papers
- PICA: Physics-Integrated Clothed Avatar [30.277983921620663]
We introduce PICA, a novel representation for high-fidelity animatable clothed human avatars with physics-accurate dynamics, even for loose clothing.
Our method achieves high-fidelity rendering of human bodies in complex and novel driving poses, significantly outperforming previous methods under the same settings.
arXiv Detail & Related papers (2024-07-07T10:23:21Z) - AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using
Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method is able to render natural garment dynamics that deviate highly from the body and generalizes well to both unseen views and poses.
arXiv Detail & Related papers (2024-01-27T08:48:18Z) - CaPhy: Capturing Physical Properties for Animatable Human Avatars [44.95805736197971]
CaPhy is a novel method for reconstructing animatable human avatars with realistic dynamic properties for clothing.
We aim for capturing the geometric and physical properties of the clothing from real observations.
We combine unsupervised training with physics-based losses and 3D-supervised training using scanned data to reconstruct a dynamic model of clothing (a generic sketch of such a combined objective follows the related-papers list below).
arXiv Detail & Related papers (2023-08-11T04:01:13Z) - Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z) - Garment Avatars: Realistic Cloth Driving using Pattern Registration [39.936812232884954]
We propose an end-to-end pipeline for building drivable representations for clothing.
A Garment Avatar is an expressive and fully-drivable geometry model for a piece of clothing.
We demonstrate the efficacy of our pipeline on a realistic virtual telepresence application.
arXiv Detail & Related papers (2022-06-07T15:06:55Z) - HSPACE: Synthetic Parametric Humans Animated in Complex Environments [67.8628917474705]
We build a large-scale photo-realistic dataset, Human-SPACE, of animated humans placed in complex indoor and outdoor environments.
We combine a hundred diverse individuals of varying ages, gender, proportions, and ethnicity, with hundreds of motions and scenes, in order to generate an initial dataset of over 1 million frames.
Assets are generated automatically, at scale, and are compatible with existing real time rendering and game engines.
arXiv Detail & Related papers (2021-12-23T22:27:55Z) - The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Currently it requires an artist to create 3D human avatars with realistic clothing that can move naturally.
We show that a 3D representation can capture varied topology at high resolution and that it can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z) - Dynamic Neural Garments [45.833166320896716]
We present a solution that takes in body joint motion to directly produce realistic dynamic garment image sequences.
Specifically, given the target joint motion sequence of an avatar, we propose dynamic neural garments to jointly simulate and render plausible dynamic garment appearance.
arXiv Detail & Related papers (2021-02-23T17:21:21Z) - Style and Pose Control for Image Synthesis of Humans from a Single
Monocular View [78.6284090004218]
StylePoseGAN extends a non-controllable generator to accept conditioning of pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
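As a companion to the CaPhy entry above, the following generic sketch shows one way to combine an unsupervised physics-based energy with 3D supervision from scanned data in a single training objective. The stretch energy and Chamfer term are simplified placeholders under assumed tensor shapes, not CaPhy's actual losses.

```python
# Generic sketch: weighted sum of a physics-based (unsupervised) energy and
# a 3D supervision term from scans. Shapes: pred_verts (B, V, 3),
# edges (E, 2) long tensor, rest_lengths (E,), scan_points (B, N, 3).
import torch

def stretch_energy(pred_verts, edges, rest_lengths, k_stretch=1.0):
    # Penalize deviation of predicted edge lengths from their rest lengths.
    v0, v1 = pred_verts[:, edges[:, 0]], pred_verts[:, edges[:, 1]]
    lengths = (v0 - v1).norm(dim=-1)
    return k_stretch * ((lengths - rest_lengths) ** 2).mean()

def chamfer_distance(pred_verts, scan_points):
    # Symmetric nearest-neighbor distance between prediction and scan.
    d = torch.cdist(pred_verts, scan_points)          # (B, V, N)
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()

def total_loss(pred_verts, edges, rest_lengths, scan_points, w_phys=0.1):
    # 3D supervision from scans plus an unsupervised physics prior.
    return chamfer_distance(pred_verts, scan_points) + \
           w_phys * stretch_energy(pred_verts, edges, rest_lengths)
```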