Neural 3D Clothes Retargeting from a Single Image
- URL: http://arxiv.org/abs/2102.00062v1
- Date: Fri, 29 Jan 2021 20:50:34 GMT
- Title: Neural 3D Clothes Retargeting from a Single Image
- Authors: Jae Shin Yoon, Kihwan Kim, Jan Kautz, and Hyun Soo Park
- Abstract summary: We present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed as attaining the ground truth data is impossible, i.e., images of people wearing the different 3D clothing template models at the exact same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching with the prescribed body-to-cloth contact points and the clothing silhouette to fit onto unlabeled real images.
- Score: 91.5030622330039
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a method of clothes retargeting; generating the
potential poses and deformations of a given 3D clothing template model to fit
onto a person in a single RGB image. The problem is fundamentally ill-posed as
attaining the ground truth data is impossible, i.e., images of people wearing
the different 3D clothing template model at the exact same pose. We address this
challenge by utilizing large-scale synthetic data generated from physical
simulation, allowing us to map 2D dense body pose to 3D clothing deformation.
With the simulated data, we propose a semi-supervised learning framework that
validates the physical plausibility of the 3D deformation by matching with the
prescribed body-to-cloth contact points and clothing silhouette to fit onto the
unlabeled real images. A new neural clothes retargeting network (CRNet) is
designed to integrate the semi-supervised retargeting task in an end-to-end
fashion. In our evaluation, we show that our method can predict the realistic
3D pose and deformation field needed for retargeting clothes models in
real-world examples.
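The abstract describes validating a predicted 3D deformation by matching it against prescribed body-to-cloth contact points and the clothing silhouette observed in an unlabeled image. As a rough illustration only (this is not the paper's actual formulation; all function names, shapes, and weights here are hypothetical), such a combined semi-supervised objective might be sketched as:

```python
import numpy as np

def contact_loss(pred_cloth_pts, body_pts, contact_pairs):
    """Penalize distance between prescribed body-to-cloth contact pairs.
    contact_pairs: list of (cloth_idx, body_idx) correspondences."""
    diffs = [pred_cloth_pts[c] - body_pts[b] for c, b in contact_pairs]
    return float(np.mean([np.sum(d * d) for d in diffs]))

def silhouette_loss(pred_mask, image_mask):
    """Soft IoU-style mismatch between the rendered clothing silhouette
    and the silhouette extracted from the unlabeled image (HxW, in [0,1])."""
    inter = np.sum(pred_mask * image_mask)
    union = np.sum(pred_mask + image_mask - pred_mask * image_mask)
    return float(1.0 - inter / max(union, 1e-8))

def semi_supervised_loss(pred_cloth_pts, body_pts, contact_pairs,
                         pred_mask, image_mask,
                         w_contact=1.0, w_sil=1.0):
    """Weighted sum of the two plausibility terms (weights are placeholders)."""
    return (w_contact * contact_loss(pred_cloth_pts, body_pts, contact_pairs)
            + w_sil * silhouette_loss(pred_mask, image_mask))
```

In the paper's setting these terms would supervise a network on unlabeled real images, complementing the fully supervised losses available on the synthetic, physically simulated data.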
Related papers
- Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing [54.29207348918216]
Cloth2Body needs to address new and emerging challenges raised by the partial observation of the input and the high diversity of the output.
We propose an end-to-end framework that can accurately estimate 3D body mesh parameterized by pose and shape from a 2D clothing image.
As shown by experimental results, the proposed framework achieves state-of-the-art performance and can effectively recover natural and diverse 3D body meshes from 2D images.
arXiv Detail & Related papers (2023-09-28T06:18:38Z)
- PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match real-world behaviour, and generalizes to unseen body motions extracted from a motion capture dataset.
arXiv Detail & Related papers (2022-10-26T21:15:54Z)
- Realistic, Animatable Human Reconstructions for Virtual Fit-On [0.7649716717097428]
We present an end-to-end virtual try-on pipeline that can fit different clothes on a personalized 3-D human model.
Our main idea is to construct an animatable 3-D human model and try on different clothes in a 3-D virtual environment.
arXiv Detail & Related papers (2022-10-16T13:36:24Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Currently, creating 3D human avatars with realistic clothing that moves naturally requires an artist.
We show that a 3D representation can capture varied topology at high resolution and that it can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use the deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z) - SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
- Deep Fashion3D: A Dataset and Benchmark for 3D Garment Reconstruction from Single Images [50.34202789543989]
Deep Fashion3D is the largest collection to date of 3D garment models.
It provides rich annotations including 3D feature lines, 3D body pose, and the corresponding multi-view real images.
A novel adaptable template is proposed to enable the learning of all types of clothing in a single network.
arXiv Detail & Related papers (2020-03-28T09:20:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.