The Power of Points for Modeling Humans in Clothing
- URL: http://arxiv.org/abs/2109.01137v1
- Date: Thu, 2 Sep 2021 17:58:45 GMT
- Title: The Power of Points for Modeling Humans in Clothing
- Authors: Qianli Ma and Jinlong Yang and Siyu Tang and Michael J. Black
- Abstract summary: Currently it requires an artist to create 3D human avatars with realistic clothing that can move naturally.
We show that a 3D representation can capture varied topology at high resolution and that can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
- Score: 60.00557674969284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, creating 3D human avatars with realistic clothing that can
move naturally requires an artist. Despite progress on 3D scanning and modeling
of human bodies, there is still no technology that can easily turn a static
scan into an animatable avatar. Automating the creation of such avatars would
enable many applications in games, social networking, animation, and AR/VR to
name a few. The key problem is one of representation. Standard 3D meshes are
widely used in modeling the minimally-clothed body but do not readily capture
the complex topology of clothing. Recent interest has shifted to implicit
surface models for this task but they are computationally heavy and lack
compatibility with existing 3D tools. What is needed is a 3D representation
that can capture varied topology at high resolution and that can be learned
from data. We argue that this representation has been with us all along -- the
point cloud. Point clouds have properties of both implicit and explicit
representations that we exploit to model 3D garment geometry on a human body.
We train a neural network with a novel local clothing geometric feature to
represent the shape of different outfits. The network is trained from 3D point
clouds of many types of clothing, on many bodies, in many poses, and learns to
model pose-dependent clothing deformations. The geometry feature can be
optimized to fit a previously unseen scan of a person in clothing, enabling the
scan to be reposed realistically. Our model demonstrates superior quantitative
and qualitative results in both multi-outfit modeling and unseen outfit
animation. The code is available for research purposes.
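The abstract describes the mechanism at a high level: a network conditioned on the posed body and a local clothing geometry feature produces the clothed point cloud, and that feature can later be optimized to fit an unseen scan. Below is a minimal, illustrative PyTorch sketch of that idea; the class name ClothingDecoder, the tensor shapes, the Chamfer-style objective, and all hyperparameters are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class ClothingDecoder(nn.Module):
    """Illustrative stand-in for the trained multi-outfit network: maps a
    posed body surface point plus its local clothing geometry feature to a
    per-point displacement."""
    def __init__(self, feat_dim=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),            # per-point displacement
        )

    def forward(self, body_points, geom_feat):
        # body_points: (B, N, 3) samples on the posed, minimally-clothed body
        # geom_feat:   (B, N, F) local clothing geometry features
        disp = self.mlp(torch.cat([body_points, geom_feat], dim=-1))
        return body_points + disp            # clothed point cloud, (B, N, 3)

# Fitting an unseen scan: freeze the decoder and optimize only the geometry
# feature against a Chamfer-style distance to the scan.
decoder = ClothingDecoder()
for p in decoder.parameters():
    p.requires_grad_(False)

body_points = torch.rand(1, 2048, 3)         # posed body samples (dummy data)
scan = torch.rand(1, 2048, 3)                # unseen clothed scan (dummy data)
geom_feat = torch.zeros(1, 2048, 64, requires_grad=True)
opt = torch.optim.Adam([geom_feat], lr=1e-2)

for _ in range(100):
    pred = decoder(body_points, geom_feat)
    d = torch.cdist(pred, scan)              # (1, N, N) pairwise distances
    loss = d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch, re-posing the fitted scan would amount to feeding body points for a new pose while keeping the optimized geom_feat fixed, mirroring the abstract's claim that the geometry feature can be fit to an unseen scan and then animated.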
Related papers
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing [97.83361232792214]
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework (see the sketch after this list).
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
arXiv Detail & Related papers (2024-11-06T20:42:13Z) - Dynamic Appearance Modeling of Clothed 3D Human Avatars using a Single Camera [8.308263758475938]
We introduce a method for high-quality modeling of clothed 3D human avatars using a video of a person with dynamic movements.
For explicit modeling, a neural network learns to generate point-wise shape residuals and appearance features of a 3D body model.
For implicit modeling, an implicit network combines the appearance and 3D motion features to decode high-fidelity clothed 3D human avatars.
arXiv Detail & Related papers (2023-12-28T06:04:39Z) - Realistic, Animatable Human Reconstructions for Virtual Fit-On [0.7649716717097428]
We present an end-to-end virtual try-on pipeline that can fit different clothes on a personalized 3D human model.
Our main idea is to construct an animatable 3D human model and try on different clothes in a 3D virtual environment.
arXiv Detail & Related papers (2022-10-16T13:36:24Z) - Capturing and Animation of Body and Clothing from Monocular Video [105.87228128022804]
We present SCARF, a hybrid model combining a mesh-based body with a neural radiance field.
Integrating the mesh into the rendering enables us to optimize SCARF directly from monocular videos.
We demonstrate that SCARF reconstructs clothing with higher visual quality than existing methods, that the clothing deforms with changing body pose and body shape, and that clothing can be successfully transferred between avatars of different subjects.
arXiv Detail & Related papers (2022-10-04T19:34:05Z) - AvatarGen: a 3D Generative Model for Animatable Human Avatars [108.11137221845352]
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space.
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z) - gDNA: Towards Generative Detailed Neural Avatars [94.9804106939663]
We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
arXiv Detail & Related papers (2022-01-11T18:46:38Z) - Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as attaining ground-truth data is impossible, i.e., images of people wearing different 3D clothing template models at the exact same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching the prescribed body-to-cloth contact points and fitting the clothing onto the unlabeled silhouette.
arXiv Detail & Related papers (2021-01-29T20:50:34Z)
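The PocoLoco entry above casts avatar clothing deformation as conditional point-cloud generation inside the denoising diffusion framework. Purely as an illustration of that general formulation (not the paper's architecture, noise schedule, or training recipe), the sketch below shows one DDPM-style training step for a pose-conditioned point-cloud denoiser; PoseConditionedDenoiser, the pose dimensionality, and the cosine schedule are all assumptions.

```python
import math
import torch
import torch.nn as nn

class PoseConditionedDenoiser(nn.Module):
    """Illustrative denoiser: predicts the noise that was added to a clothed
    point cloud, conditioned on body pose and the diffusion timestep."""
    def __init__(self, pose_dim=72, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + pose_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, noisy_points, pose, t):
        # noisy_points: (B, N, 3); pose: (B, pose_dim); t: (B,) in [0, 1]
        B, N, _ = noisy_points.shape
        cond = torch.cat([pose, t[:, None]], dim=-1)       # (B, pose_dim + 1)
        cond = cond[:, None, :].expand(B, N, -1)           # broadcast to points
        return self.net(torch.cat([noisy_points, cond], dim=-1))

# One DDPM-style training step: diffuse the clean clothed point cloud with a
# cosine schedule, then regress the injected noise given the pose condition.
model = PoseConditionedDenoiser()
clean = torch.rand(4, 1024, 3)               # clothed point clouds (dummy data)
pose = torch.rand(4, 72)                     # body pose parameters (dummy data)
t = torch.rand(4)                            # continuous timestep in [0, 1]
alpha_bar = torch.cos(t * math.pi / 2) ** 2  # signal level at timestep t
noise = torch.randn_like(clean)
noisy = (alpha_bar.sqrt()[:, None, None] * clean
         + (1.0 - alpha_bar).sqrt()[:, None, None] * noise)
loss = ((model(noisy, pose, t) - noise) ** 2).mean()
loss.backward()
```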