Point-Based Modeling of Human Clothing
- URL: http://arxiv.org/abs/2104.08230v1
- Date: Fri, 16 Apr 2021 17:12:33 GMT
- Title: Point-Based Modeling of Human Clothing
- Authors: Ilya Zakharkin, Kirill Mazur, Artur Grigoriev, Victor Lempitsky
- Abstract summary: We learn a deep model that can predict point clouds of various outfits, for various human poses and for various human body shapes.
Using the learned model, we can infer the geometry of new outfits from as little as a single image.
We complement our geometric model with appearance modeling that uses the point cloud geometry as a geometric scaffolding.
- Score: 1.7842332554022693
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose a new approach to human clothing modeling based on point clouds.
Within this approach, we learn a deep model that can predict point clouds of
various outfits, for various human poses and for various human body shapes.
Notably, outfits of various types and topologies can be handled by the same
model. Using the learned model, we can infer geometry of new outfits from as
little as a single image, and perform outfit retargeting to new bodies in new
poses. We complement our geometric model with appearance modeling that uses the
point cloud geometry as a geometric scaffolding, and employs neural point-based
graphics to capture outfit appearance from videos and to re-render the captured
outfits. We validate both geometric modeling and appearance modeling aspects of
the proposed approach against recently proposed methods, and establish the
viability of point-based clothing modeling.
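To make the described pipeline concrete, below is a minimal PyTorch sketch of a pose- and shape-conditioned point-cloud predictor. The architecture, the latent outfit code, and all dimensions are illustrative assumptions, not the authors' actual draping model.

```python
# Hypothetical sketch: predict an outfit point cloud from SMPL-style
# pose/shape parameters plus a latent outfit code. Sizes are assumptions.
import torch
import torch.nn as nn

class OutfitPointCloudNet(nn.Module):
    def __init__(self, pose_dim=72, shape_dim=10, outfit_dim=64,
                 n_points=8192, hidden=512):
        super().__init__()
        self.n_points = n_points
        self.mlp = nn.Sequential(
            nn.Linear(pose_dim + shape_dim + outfit_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_points * 3),  # one xyz per point
        )

    def forward(self, pose, shape, outfit_code):
        # pose: (B, 72) axis-angle body pose, shape: (B, 10) betas,
        # outfit_code: (B, 64) latent describing the garment.
        z = torch.cat([pose, shape, outfit_code], dim=-1)
        return self.mlp(z).view(-1, self.n_points, 3)

net = OutfitPointCloudNet()
points = net(torch.zeros(1, 72), torch.zeros(1, 10), torch.zeros(1, 64))
print(points.shape)  # torch.Size([1, 8192, 3])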
Related papers
- PocoLoco: A Point Cloud Diffusion Model of Human Shape in Loose Clothing [97.83361232792214]
PocoLoco is the first template-free, point-based, pose-conditioned generative model for 3D humans in loose clothing.
We formulate avatar clothing deformation as a conditional point-cloud generation task within the denoising diffusion framework.
We release a dataset of two subjects performing various poses in loose clothing with a total of 75K point clouds.
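A hedged sketch of what one pose-conditioned denoising-diffusion training step on point clouds can look like, in the spirit of this summary; the denoiser stand-in, noise schedule, and dimensions are assumptions, not PocoLoco's actual architecture.

```python
# Illustrative DDPM-style training step for pose-conditioned point clouds.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(  # stand-in for a real point-cloud denoiser
    nn.Linear(3 + 72 + 1, 256), nn.SiLU(), nn.Linear(256, 3)
)

def diffusion_loss(x0, pose):
    # x0: (B, N, 3) clean garment points, pose: (B, 72) body pose.
    B, N, _ = x0.shape
    t = torch.randint(0, T, (B,))
    a = alpha_bar[t].view(B, 1, 1)
    eps = torch.randn_like(x0)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * eps  # forward noising
    cond = torch.cat([xt,
                      pose[:, None, :].expand(B, N, 72),
                      t.view(B, 1, 1).expand(B, N, 1).float() / T], dim=-1)
    return ((denoiser(cond) - eps) ** 2).mean()  # predict the noise

loss = diffusion_loss(torch.randn(2, 1024, 3), torch.zeros(2, 72))
```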
arXiv Detail & Related papers (2024-11-06T20:42:13Z)
- Neural-ABC: Neural Parametric Models for Articulated Body with Clothes [29.04941764336255]
We introduce Neural-ABC, a novel model that can represent clothed human bodies with disentangled latent spaces for identity, clothing, shape, and pose.
Our model excels at disentangling clothing and identity across different shapes and poses while preserving the style of the clothing.
Compared to other state-of-the-art parametric models, Neural-ABC demonstrates powerful advantages in the reconstruction of clothed human bodies.
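A toy sketch of the disentanglement idea: an occupancy decoder that takes separate identity, clothing, shape, and pose codes, so swapping one code changes only that factor. The decoder and all code sizes are illustrative assumptions, not Neural-ABC's design.

```python
# Hypothetical decoder with disentangled latent codes.
import torch
import torch.nn as nn

class DisentangledOccupancy(nn.Module):
    def __init__(self, d_id=64, d_cloth=64, d_shape=16, d_pose=72):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + d_id + d_cloth + d_shape + d_pose, 256),
            nn.SiLU(),
            nn.Linear(256, 1),  # occupancy logit for a query point
        )

    def forward(self, xyz, z_id, z_cloth, z_shape, z_pose):
        # xyz: (B, N, 3) query points; each z_*: (B, d_*).
        # Swapping z_cloth alone re-dresses the same identity.
        B, N, _ = xyz.shape
        cond = torch.cat([z_id, z_cloth, z_shape, z_pose], dim=-1)
        h = torch.cat([xyz, cond[:, None, :].expand(B, N, -1)], dim=-1)
        return self.net(h).squeeze(-1)
```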
arXiv Detail & Related papers (2024-04-06T16:29:10Z)
- AniDress: Animatable Loose-Dressed Avatar from Sparse Views Using Garment Rigging Model [58.035758145894846]
We introduce AniDress, a novel method for generating animatable human avatars in loose clothes using very sparse multi-view videos.
A pose-driven deformable neural radiance field conditioned on both body and garment motions is introduced, providing explicit control of both parts.
Our method renders natural garment dynamics that deviate strongly from the body, and generalizes well to both unseen views and poses.
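A rough sketch of a pose-driven deformable radiance field conditioned on both body and garment motion, following the summary above; both MLPs and the 32-dimensional garment-motion code are assumptions, not AniDress's actual networks.

```python
# Illustrative deformable radiance field with body + garment conditioning.
import torch
import torch.nn as nn

deform = nn.Sequential(nn.Linear(3 + 72 + 32, 128), nn.SiLU(),
                       nn.Linear(128, 3))   # offset into canonical space
radiance = nn.Sequential(nn.Linear(3, 128), nn.SiLU(),
                         nn.Linear(128, 4)) # (rgb, density) in canonical space

def query(x, body_pose, garment_motion):
    # x: (N, 3) sample points; body_pose: (72,);
    # garment_motion: (32,), e.g. rigging-model coefficients.
    cond = torch.cat([body_pose, garment_motion]).expand(x.shape[0], -1)
    x_canonical = x + deform(torch.cat([x, cond], dim=-1))
    return radiance(x_canonical)  # per-point (r, g, b, sigma)

out = query(torch.randn(4096, 3), torch.zeros(72), torch.zeros(32))
```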
arXiv Detail & Related papers (2024-01-27T08:48:18Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work targets using a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show that our method is more robust than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist.
We show that a 3D representation can capture varied topology at high resolution and can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
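A loose sketch of the point-displacement idea in this summary: predict the clothed surface as per-point offsets from the posed body, conditioned on a local clothing feature. The shared MLP and feature size are assumptions, not the paper's exact design.

```python
# Illustrative per-point garment displacement from local features.
import torch
import torch.nn as nn

displace = nn.Sequential(nn.Linear(3 + 64, 128), nn.SiLU(),
                         nn.Linear(128, 3))

def drape(body_points, local_feat):
    # body_points: (B, N, 3) points on the posed body surface;
    # local_feat: (B, N, 64) learned local clothing features.
    offsets = displace(torch.cat([body_points, local_feat], dim=-1))
    return body_points + offsets  # clothed point cloud

pts = drape(torch.randn(1, 8192, 3), torch.zeros(1, 8192, 64))
```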
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
- imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose [42.4185273307021]
We present imGHUM, the first holistic generative model of 3D human shape and articulated pose.
We model the full human body implicitly, as the zero-level-set of a function, without the use of an explicit template mesh.
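A minimal sketch of a template-free implicit body of this kind: a signed-distance network conditioned on shape and pose, whose zero-level-set is the surface. Layer sizes and code dimensions are illustrative assumptions.

```python
# Illustrative shape/pose-conditioned signed-distance function.
import torch
import torch.nn as nn

sdf = nn.Sequential(nn.Linear(3 + 16 + 72, 256), nn.SiLU(),
                    nn.Linear(256, 1))

def signed_distance(xyz, shape_code, pose):
    # xyz: (N, 3); shape_code: (16,); pose: (72,). The body surface
    # is the set {x : signed_distance(x) == 0}; no template mesh.
    cond = torch.cat([shape_code, pose]).expand(xyz.shape[0], -1)
    return sdf(torch.cat([xyz, cond], dim=-1)).squeeze(-1)

d = signed_distance(torch.randn(1000, 3), torch.zeros(16), torch.zeros(72))
inside = d < 0  # negative distances are inside the body
```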
arXiv Detail & Related papers (2021-08-24T17:08:28Z)
- Neural-GIF: Neural Generalized Implicit Functions for Animating People in Clothing [49.32522765356914]
We learn to animate people in clothing as a function of the body pose.
We learn to map every point in space to a canonical space, where a learned deformation field models non-rigid effects.
Neural-GIF can be trained on raw 3D scans and reconstructs detailed complex surface geometry and deformations.
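A sketch of the summarized recipe: map each query point to canonical space, add a learned non-rigid deformation, then evaluate a canonical implicit function. All three networks here are illustrative stand-ins, not Neural-GIF's actual design.

```python
# Illustrative canonicalize-deform-evaluate pipeline.
import torch
import torch.nn as nn

to_canonical = nn.Sequential(nn.Linear(3 + 72, 128), nn.SiLU(),
                             nn.Linear(128, 3))   # pose-dependent mapping
deformation = nn.Sequential(nn.Linear(3 + 72, 128), nn.SiLU(),
                            nn.Linear(128, 3))    # non-rigid offsets
canonical_sdf = nn.Sequential(nn.Linear(3, 128), nn.SiLU(),
                              nn.Linear(128, 1))

def evaluate(x, pose):
    # x: (N, 3) points in posed space; pose: (72,) body pose.
    p = pose.expand(x.shape[0], -1)
    xc = to_canonical(torch.cat([x, p], dim=-1))       # canonicalize
    xc = xc + deformation(torch.cat([xc, p], dim=-1))  # wrinkles etc.
    return canonical_sdf(xc).squeeze(-1)
```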
arXiv Detail & Related papers (2021-08-19T17:25:16Z)
- SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate that SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction from images of dressed people.
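A hedged sketch of how a generative clothing model of this kind can be fit to a scan: freeze the decoder and optimize a latent clothing code so that scan points lie on the implicit surface. The decoder and loss below are assumptions, not SMPLicit's exact formulation.

```python
# Illustrative latent-code fitting of an implicit clothing model to a scan.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(3 + 32, 128), nn.SiLU(),
                        nn.Linear(128, 1))  # unsigned distance to garment
for p in decoder.parameters():
    p.requires_grad_(False)

scan_points = torch.randn(2048, 3)       # stand-in for real scan data
z = torch.zeros(32, requires_grad=True)  # latent clothing code
opt = torch.optim.Adam([z], lr=1e-2)

for _ in range(200):
    inp = torch.cat([scan_points, z.expand(2048, -1)], dim=-1)
    loss = decoder(inp).abs().mean()  # scan points should lie on the surface
    opt.zero_grad()
    loss.backward()
    opt.step()
```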
arXiv Detail & Related papers (2021-03-11T18:57:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.