BCNet: Learning Body and Cloth Shape from A Single Image
- URL: http://arxiv.org/abs/2004.00214v2
- Date: Mon, 3 Aug 2020 10:03:24 GMT
- Title: BCNet: Learning Body and Cloth Shape from A Single Image
- Authors: Boyi Jiang, Juyong Zhang, Yang Hong, Jinhao Luo, Ligang Liu and Hujun Bao
- Abstract summary: We propose a layered garment representation on top of SMPL and, as a novel design, make the garment's skinning weights independent of the body mesh.
Compared with existing methods, our method can support more garment categories and recover more accurate geometry.
- Score: 56.486796244320125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the problem of automatically reconstructing
garment and body shapes from a single near-front-view RGB image. To this end, we
propose a layered garment representation on top of SMPL and, as a novel design,
make the skinning weights of the garment independent of the body mesh, which
significantly improves the expressive power of our garment model. Compared with
existing methods, our method supports more garment categories and recovers more
accurate geometry. To train our model, we construct two large-scale datasets
with ground-truth body and garment geometries as well as paired color images.
Compared with single-mesh or non-parametric representations, our separate meshes
allow more flexible control and make applications such as re-posing, garment
transfer, and garment texture mapping possible. Code and some data are available
at https://github.com/jby1993/BCNet.
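The core design described in the abstract, a garment layer deformed with its own skinning weights rather than weights copied from the underlying SMPL body mesh, can be illustrated with a minimal linear-blend-skinning sketch. This is only an illustration of the data flow under standard SMPL-style LBS assumptions, not BCNet's actual implementation; the variable names and the random toy inputs are hypothetical placeholders.

```python
import numpy as np

def linear_blend_skinning(rest_vertices, skin_weights, joint_transforms):
    """Pose a mesh with linear blend skinning (LBS).

    rest_vertices:    (V, 3) vertex positions in the rest pose
    skin_weights:     (V, J) per-vertex skinning weights, rows sum to 1
    joint_transforms: (J, 4, 4) rigid transforms of the J skeleton joints
    """
    num_verts = rest_vertices.shape[0]
    homo = np.concatenate([rest_vertices, np.ones((num_verts, 1))], axis=1)  # (V, 4)
    # Blend the per-joint rigid transforms with each vertex's own weights.
    blended = np.einsum('vj,jrc->vrc', skin_weights, joint_transforms)       # (V, 4, 4)
    posed = np.einsum('vrc,vc->vr', blended, homo)                           # (V, 4)
    return posed[:, :3]

# Toy example: 24 joints as in SMPL, identity joint transforms, random meshes.
num_joints = 24
joint_transforms = np.tile(np.eye(4), (num_joints, 1, 1))

body_verts = np.random.rand(6890, 3)    # SMPL body mesh has 6890 vertices
body_weights = np.random.dirichlet(np.ones(num_joints), size=6890)

garment_verts = np.random.rand(5000, 3)  # garment mesh resolution is arbitrary here
garment_weights = np.random.dirichlet(np.ones(num_joints), size=5000)

# Body and garment are posed as separate layers; the garment uses its own
# skinning weights (learned in BCNet) instead of weights borrowed from the
# nearest vertices of the body mesh.
body_posed = linear_blend_skinning(body_verts, body_weights, joint_transforms)
garment_posed = linear_blend_skinning(garment_verts, garment_weights, joint_transforms)
```

In SMPL itself the joint transforms are derived from pose and shape parameters; the identity transforms above only stand in for that step.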
Related papers
- SPnet: Estimating Garment Sewing Patterns from a Single Image [10.604555099281173]
This paper presents a novel method for reconstructing 3D garment models from a single image of a posed user.
By inferring the garment's fundamental shape through sewing patterns estimated from the image, we can generate 3D garments that adaptively deform to arbitrary poses.
arXiv Detail & Related papers (2023-12-26T09:51:25Z)
- DrapeNet: Garment Generation and Self-Supervised Draping [95.0315186890655]
We rely on self-supervision to train a single network to drape multiple garments.
This is achieved by predicting a 3D deformation field conditioned on the latent codes of a generative network.
Our pipeline can generate and drape previously unseen garments of any topology.
arXiv Detail & Related papers (2022-11-21T09:13:53Z)
- Neural Point-based Shape Modeling of Humans in Challenging Clothing [75.75870953766935]
Parametric 3D body models like SMPL only represent minimally-clothed people and are hard to extend to clothing.
We extend point-based methods with a coarse stage that replaces canonicalization with a learned pose-independent "coarse shape".
The approach works well for garments that both conform to, and deviate from, the body.
arXiv Detail & Related papers (2022-09-14T17:59:17Z)
- Clothes-Changing Person Re-identification with RGB Modality Only [102.44387094119165]
We propose a Clothes-based Adversarial Loss (CAL) to mine clothes-irrelevant features from the original RGB images.
Videos contain richer appearance and additional temporal information, which can be used to model proper spatiotemporal patterns.
arXiv Detail & Related papers (2022-04-14T11:38:28Z)
- Arbitrary Virtual Try-On Network: Characteristics Preservation and Trade-off between Body and Clothing [85.74977256940855]
We propose an Arbitrary Virtual Try-On Network (AVTON) for all types of clothes.
AVTON can synthesize realistic try-on images by preserving and trading off characteristics of the target clothes and the reference person.
Our approach achieves better performance than state-of-the-art virtual try-on methods.
arXiv Detail & Related papers (2021-11-24T08:59:56Z)
- SMPLicit: Topology-aware Generative Model for Clothed People [65.84665248796615]
We introduce SMPLicit, a novel generative model to jointly represent body pose, shape and clothing geometry.
In the experimental section, we demonstrate SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction in images of dressed people.
arXiv Detail & Related papers (2021-03-11T18:57:03Z)
- DeepCloth: Neural Garment Representation for Shape and Style Editing [37.595804908189855]
We introduce a novel method, termed DeepCloth, to establish a unified garment representation framework.
Our key idea is to represent garment geometry by a "UV-position map with mask".
We learn a continuous feature space mapped from the above UV space, enabling garment shape editing and transition.
arXiv Detail & Related papers (2020-11-30T08:42:38Z)
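To make the "UV-position map with mask" mentioned in the DeepCloth entry above concrete, here is a toy sketch under obvious simplifying assumptions: each vertex is scattered only to its nearest texel (a real pipeline would rasterize every triangle over the UV domain), and the function name, resolution, and random inputs are illustrative, not DeepCloth's actual interface.

```python
import numpy as np

def uv_position_map(uv_coords, positions, resolution=256):
    """Scatter per-vertex 3D positions into a UV-space image.

    uv_coords: (V, 2) vertex UV coordinates in [0, 1]
    positions: (V, 3) vertex positions in 3D space
    Returns a (res, res, 3) position map and a (res, res) boolean mask
    marking which texels actually carry garment geometry.
    """
    pos_map = np.zeros((resolution, resolution, 3), dtype=np.float32)
    mask = np.zeros((resolution, resolution), dtype=bool)
    # Nearest-texel scatter; the mask records covered texels.
    texels = np.clip((uv_coords * (resolution - 1)).round().astype(int), 0, resolution - 1)
    pos_map[texels[:, 1], texels[:, 0]] = positions
    mask[texels[:, 1], texels[:, 0]] = True
    return pos_map, mask

# Toy garment: random UVs and positions; in practice both would come from a
# mesh with a fixed UV parameterization shared across garments.
uvs = np.random.rand(5000, 2)
verts = np.random.rand(5000, 3)
position_map, valid_mask = uv_position_map(uvs, verts)
```

Storing geometry as an image in this way is what allows ordinary 2D convolutional networks to edit garment shape and style in UV space.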