Garment4D: Garment Reconstruction from Point Cloud Sequences
- URL: http://arxiv.org/abs/2112.04159v1
- Date: Wed, 8 Dec 2021 08:15:20 GMT
- Title: Garment4D: Garment Reconstruction from Point Cloud Sequences
- Authors: Fangzhou Hong, Liang Pan, Zhongang Cai, Ziwei Liu
- Abstract summary: Learning to reconstruct 3D garments is important for dressing 3D human bodies of different shapes in different poses.
Previous works typically rely on 2D images as input, which, however, suffer from scale and pose ambiguities.
We propose a principled framework, Garment4D, that uses 3D point cloud sequences of dressed humans for garment reconstruction.
- Score: 12.86951061306046
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning to reconstruct 3D garments is important for dressing 3D human bodies
of different shapes in different poses. Previous works typically rely on 2D
images as input, which, however, suffer from scale and pose ambiguities. To
circumvent the problems caused by 2D images, we propose a principled framework,
Garment4D, that uses 3D point cloud sequences of dressed humans for garment
reconstruction. Garment4D has three dedicated steps: sequential garments
registration, canonical garment estimation, and posed garment reconstruction.
The main challenges are two-fold: 1) effective 3D feature learning for fine
details, and 2) capture of garment dynamics caused by the interaction between
garments and the human body, especially for loose garments like skirts. To
unravel these problems, we introduce a novel Proposal-Guided Hierarchical
Feature Network and Iterative Graph Convolution Network, which integrate both
high-level semantic features and low-level geometric features for fine-detail
reconstruction. Furthermore, we propose a Temporal Transformer to capture smooth
garment motions. Unlike non-parametric methods, the garment meshes reconstructed
by our method are separable from the human body and are strongly interpretable,
which is desirable for downstream tasks. As the first attempt at this task, we
demonstrate high-quality reconstruction results both qualitatively and
quantitatively through extensive experiments. Code is available
at https://github.com/hongfz16/Garment4D.
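As a rough illustration of the components named above, the following PyTorch-style sketch combines an iterative graph-convolution refinement of template garment vertices with a Temporal Transformer over per-frame garment features. It is only a minimal sketch of the general pattern, not the released Garment4D implementation; all module names, feature dimensions, and the toy adjacency matrix are assumptions made for illustration.

```python
# Minimal, illustrative sketch (not the authors' code) of two ideas from the
# abstract: iterative graph-convolution refinement of garment vertices and a
# temporal Transformer over per-frame garment features. Names are hypothetical.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """One graph-convolution layer: aggregate neighbour features through a
    row-normalized adjacency matrix, then apply a shared linear map."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (T, V, in_dim) per-frame vertex features; adj: (V, V) adjacency
        return torch.relu(self.linear(adj @ x))


class IterativeGarmentRefiner(nn.Module):
    """Iteratively predicts per-vertex offsets for a template garment mesh,
    after smoothing per-frame global features with a temporal Transformer."""

    def __init__(self, feat_dim=64, steps=3):
        super().__init__()
        self.steps = steps
        self.gcn = GraphConv(3 + feat_dim, feat_dim)
        self.offset_head = nn.Linear(feat_dim, 3)
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, verts, point_feat, adj):
        # verts: (T, V, 3) template garment vertices per frame
        # point_feat: (T, V, F) features pooled from the input point clouds
        # adj: (V, V) row-normalized mesh adjacency
        global_feat = point_feat.mean(dim=1, keepdim=True)      # (T, 1, F)
        smoothed = self.temporal(global_feat.transpose(0, 1))   # (1, T, F)
        point_feat = point_feat + smoothed.transpose(0, 1)      # broadcast over V
        for _ in range(self.steps):
            h = self.gcn(torch.cat([verts, point_feat], dim=-1), adj)
            verts = verts + self.offset_head(h)                  # refine vertices
        return verts


if __name__ == "__main__":
    T, V, F = 4, 32, 64
    model = IterativeGarmentRefiner(feat_dim=F)
    out = model(torch.randn(T, V, 3), torch.randn(T, V, F), torch.eye(V))
    print(out.shape)  # torch.Size([4, 32, 3])
```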
Related papers
- LayGA: Layered Gaussian Avatars for Animatable Clothing Transfer [40.372917698238204]
We present Layered Gaussian Avatars (LayGA), a new representation that formulates body and clothing as two separate layers.
Our representation is built upon the Gaussian map-based avatar for its excellent ability to represent garment details.
In the single-layer reconstruction stage, we propose a series of geometric constraints to reconstruct smooth surfaces.
In the multi-layer fitting stage, we train two separate models to represent body and clothing and utilize the reconstructed clothing geometries as 3D supervision.
arXiv Detail & Related papers (2024-05-12T16:11:28Z) - DI-Net : Decomposed Implicit Garment Transfer Network for Digital
Clothed 3D Human [75.45488434002898]
Existing 2D virtual try-on methods cannot be directly extended to 3D since they lack the ability to perceive the depth of each pixel.
We propose a Decomposed Implicit garment transfer network (DI-Net), which can effortlessly reconstruct a 3D human mesh with the new try-on result.
arXiv Detail & Related papers (2023-11-28T14:28:41Z) - Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing [54.29207348918216]
Cloth2Body needs to address new and emerging challenges raised by the partial observation of the input and the high diversity of the output.
We propose an end-to-end framework that can accurately estimate 3D body mesh parameterized by pose and shape from a 2D clothing image.
As shown by experimental results, the proposed framework achieves state-of-the-art performance and can effectively recover natural and diverse 3D body meshes from 2D images.
arXiv Detail & Related papers (2023-09-28T06:18:38Z) - ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns [57.176642106425895]
We introduce a garment representation model that addresses limitations of current approaches.
It is faster and yields higher quality reconstructions than purely implicit surface representations.
It supports rapid editing of garment shapes and texture by modifying individual 2D panels.
arXiv Detail & Related papers (2023-05-23T14:23:48Z) - DrapeNet: Garment Generation and Self-Supervised Draping [95.0315186890655]
We rely on self-supervision to train a single network to drape multiple garments.
This is achieved by predicting a 3D deformation field conditioned on the latent codes of a generative network (see the sketch after this list).
Our pipeline can generate and drape previously unseen garments of any topology.
arXiv Detail & Related papers (2022-11-21T09:13:53Z) - PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match real-world behaviour and generalizes to unseen body motions extracted from a motion capture dataset.
arXiv Detail & Related papers (2022-10-26T21:15:54Z) - 3D Magic Mirror: Clothing Reconstruction from a Single Image via a
Causal Perspective [96.65476492200648]
This research aims to study a self-supervised 3D clothing reconstruction method.
It recovers the geometric shape and texture of human clothing from a single 2D image.
arXiv Detail & Related papers (2022-04-27T17:46:55Z) - Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction
from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)