gDNA: Towards Generative Detailed Neural Avatars
- URL: http://arxiv.org/abs/2201.04123v1
- Date: Tue, 11 Jan 2022 18:46:38 GMT
- Title: gDNA: Towards Generative Detailed Neural Avatars
- Authors: Xu Chen, Tianjian Jiang, Jie Song, Jinlong Yang, Michael J. Black,
Andreas Geiger, Otmar Hilliges
- Abstract summary: We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
- Score: 94.9804106939663
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To make 3D human avatars widely available, we must be able to generate a
variety of 3D virtual humans with varied identities and shapes in arbitrary
poses. This task is challenging due to the diversity of clothed body shapes,
their complex articulations, and the resulting rich, yet stochastic geometric
detail in clothing. Hence, current methods to represent 3D people do not
provide a full generative model of people in clothing. In this paper, we
propose a novel method that learns to generate detailed 3D shapes of people in
a variety of garments with corresponding skinning weights. Specifically, we
devise a multi-subject forward skinning module that is learned from only a few
posed, un-rigged scans per subject. To capture the stochastic nature of
high-frequency details in garments, we leverage an adversarial loss formulation
that encourages the model to capture the underlying statistics. We provide
empirical evidence that this leads to realistic generation of local details
such as wrinkles. We show that our model is able to generate natural human
avatars wearing diverse and detailed clothing. Furthermore, we show that our
method can be used on the task of fitting human models to raw scans,
outperforming the previous state-of-the-art.
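The abstract's central mechanism is forward skinning: points on a canonical-pose surface are deformed into a target pose by blending per-bone rigid transforms with learned, per-point skinning weights. The following is a minimal sketch of that idea, assuming a SMPL-like skeleton and a plain MLP weight predictor; the names (SkinWeightNet, forward_skin) and the architecture are hypothetical illustrations, not the paper's implementation.
```python
# Minimal sketch of forward linear blend skinning (LBS) with learned
# skinning weights, in the spirit of the multi-subject forward skinning
# module described in the abstract. Hypothetical names and architecture.
import torch
import torch.nn as nn

N_BONES = 24  # assume a SMPL-like skeleton with 24 joints

class SkinWeightNet(nn.Module):
    """Maps a canonical-space point to a distribution over bones."""
    def __init__(self, n_bones: int = N_BONES):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_bones),
        )

    def forward(self, x_canonical: torch.Tensor) -> torch.Tensor:
        # Softmax keeps the weights positive and summing to one.
        return torch.softmax(self.mlp(x_canonical), dim=-1)

def forward_skin(x_canonical: torch.Tensor,
                 bone_transforms: torch.Tensor,
                 weight_net: SkinWeightNet) -> torch.Tensor:
    """Deform canonical points into the posed space.

    x_canonical:     (N, 3) points on the canonical-pose surface.
    bone_transforms: (B, 4, 4) rigid transforms, canonical -> posed.
    """
    w = weight_net(x_canonical)                                     # (N, B)
    ones = torch.ones_like(x_canonical[:, :1])
    x_h = torch.cat([x_canonical, ones], dim=-1)                    # (N, 4)
    # Blend the bone transforms per point, then apply once.
    blended = torch.einsum("nb,bij->nij", w, bone_transforms)       # (N, 4, 4)
    x_posed = torch.einsum("nij,nj->ni", blended, x_h)[:, :3]
    return x_posed

# Usage: identity bone transforms must leave the points unchanged.
pts = torch.randn(1024, 3)
T = torch.eye(4).expand(N_BONES, 4, 4).clone()
assert torch.allclose(forward_skin(pts, T, SkinWeightNet()), pts, atol=1e-5)
```
The adversarial loss mentioned in the abstract, which pushes the generated surface detail (e.g. wrinkles) toward realistic statistics, would be trained on top of such a module; that component is omitted from this sketch.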
Related papers
- Design2Cloth: 3D Cloth Generation from 2D Masks [34.80461276448817]
We propose Design2Cloth, a high-fidelity 3D generative model trained on a real-world dataset from more than 2000 subject scans.
In a series of qualitative and quantitative experiments, we show that Design2Cloth outperforms current state-of-the-art cloth generative models by a large margin.
arXiv Detail & Related papers (2024-04-03T12:32:13Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the volumetric rendering enables us to optimize SCARF directly from monocular videos (see the sketch after this list).
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Currently, it requires an artist to create 3D human avatars with realistic clothing that can move naturally.
We show that a 3D point-based representation can capture varied topology at high resolution and that it can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
- Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as attaining the ground truth data is impossible, i.e. images of people wearing the different 3D clothing template models at the exact same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching with the prescribed body-to-cloth contact points and the clothing silhouette to fit onto the unlabeled silhouette.
arXiv Detail & Related papers (2021-01-29T20:50:34Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
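As referenced in the SCARF entry above, one way to integrate a body mesh into volumetric rendering is to march each camera ray only up to the rasterized mesh depth and composite the mesh colour behind the accumulated clothing radiance. The sketch below illustrates that compositing step with a toy stand-in for the radiance field; it is a heavily simplified, hypothetical illustration, not SCARF's actual implementation.
```python
# Minimal sketch of mesh-aware volume rendering: ray samples are taken
# only up to the depth of the rasterized body mesh, and the mesh colour
# receives whatever transmittance is left. Hypothetical simplification.
import numpy as np

def radiance_field(x: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Stand-in for a NeRF MLP: returns (density, rgb) per 3D point."""
    density = np.exp(-np.linalg.norm(x, axis=-1))        # toy density
    rgb = np.full(x.shape[:-1] + (3,), 0.5)              # toy grey colour
    return density, rgb

def render_ray(origin, direction, mesh_depth, mesh_rgb, n_samples=64):
    """Composite the clothing field in front of the body mesh surface."""
    # Sample strictly between the camera and the mesh intersection.
    t = np.linspace(1e-3, mesh_depth, n_samples)
    pts = origin + t[:, None] * direction
    sigma, rgb = radiance_field(pts)

    delta = np.diff(t, append=t[-1] + (t[-1] - t[-2]))   # segment lengths
    alpha = 1.0 - np.exp(-sigma * delta)                 # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha                              # rendering weights

    color = (weights[:, None] * rgb).sum(axis=0)
    # Remaining transmittance hits the opaque body mesh behind the samples.
    residual = trans[-1] * (1.0 - alpha[-1])
    return color + residual * mesh_rgb

# Usage: one ray towards a body point 2 units away, skin-coloured mesh.
c = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]),
               mesh_depth=2.0, mesh_rgb=np.array([0.8, 0.6, 0.5]))
print(c)  # clothing radiance composited over the mesh colour
```
The appeal of such a hybrid is that the radiance field only has to explain what lies in front of the body surface, which is one reason these models can plausibly be optimized from monocular video.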
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.