Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing
- URL: http://arxiv.org/abs/2309.16189v1
- Date: Thu, 28 Sep 2023 06:18:38 GMT
- Title: Cloth2Body: Generating 3D Human Body Mesh from 2D Clothing
- Authors: Lu Dai, Liqian Ma, Shenhan Qian, Hao Liu, Ziwei Liu, Hui Xiong
- Abstract summary: Cloth2Body needs to address new and emerging challenges raised by the partial observation of the input and the high diversity of the output.
We propose an end-to-end framework that can accurately estimate 3D body mesh parameterized by pose and shape from a 2D clothing image.
As shown by experimental results, the proposed framework achieves state-of-the-art performance and can effectively recover natural and diverse 3D body meshes from 2D images.
- Score: 54.29207348918216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we define and study a new Cloth2Body problem which has a goal
of generating 3D human body meshes from a 2D clothing image. Unlike the
existing human mesh recovery problem, Cloth2Body needs to address new and
emerging challenges raised by the partial observation of the input and the high
diversity of the output. Indeed, there are three specific challenges. First, how to locate and pose a human body inside the clothing. Second, how to effectively estimate body shape under various clothing types. Finally, how to
generate diverse and plausible results from a 2D clothing image. To this end,
we propose an end-to-end framework that can accurately estimate 3D body mesh
parameterized by pose and shape from a 2D clothing image. Along this line, we
first utilize Kinematics-aware Pose Estimation to estimate body pose
parameters. A 3D skeleton is employed as a proxy, followed by an inverse kinematics module that boosts estimation accuracy. We additionally design an adaptive depth trick that better aligns the re-projected 3D mesh with the 2D clothing image by disentangling the effects of object size and camera extrinsics. Next,
we propose Physics-informed Shape Estimation to estimate body shape parameters.
3D shape parameters are predicted from partial body measurements estimated from the RGB image, which not only improves pixel-wise human-cloth alignment but also enables flexible user editing. Finally, we design an Evolution-based Pose Generation method, a skeleton-transplanting scheme inspired by genetic algorithms, to generate diverse and reasonable poses during inference. As shown by
experimental results on both synthetic and real-world data, the proposed
framework achieves state-of-the-art performance and can effectively recover
natural and diverse 3D body meshes from 2D images that align well with
clothing.
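The abstract describes the Evolution-based pose generation step only at a high level. As a rough illustration of what a genetic-algorithm-style "skeleton transplanting" crossover could look like, here is a minimal sketch; the kinematic tree, the axis-angle pose representation, and all function names are illustrative assumptions, not the authors' implementation.

```python
import random
import numpy as np

# Hypothetical kinematic tree: child joint -> parent joint (SMPL-like naming).
PARENT = {
    "spine": "pelvis", "l_hip": "pelvis", "r_hip": "pelvis",
    "l_knee": "l_hip", "r_knee": "r_hip",
    "l_shoulder": "spine", "r_shoulder": "spine",
    "l_elbow": "l_shoulder", "r_elbow": "r_shoulder",
}

def subtree(root):
    """All joints in the kinematic subtree rooted at `root`, inclusive."""
    joints = {root}
    changed = True
    while changed:
        changed = False
        for child, parent in PARENT.items():
            if parent in joints and child not in joints:
                joints.add(child)
                changed = True
    return joints

def crossover(pose_a, pose_b):
    """Transplant a randomly chosen subtree of joint rotations from B onto A.

    A pose is a dict mapping joint name -> axis-angle rotation (3-vector).
    """
    root = random.choice(list(PARENT))        # never transplant the whole body
    child_pose = dict(pose_a)
    for joint in subtree(root):
        child_pose[joint] = pose_b[joint]
    return child_pose

# Toy usage: breed one offspring from two random parent poses.
joints = ["pelvis"] + list(PARENT)
population = [{j: np.random.uniform(-0.3, 0.3, 3) for j in joints}
              for _ in range(8)]
offspring = crossover(*random.sample(population, 2))
```

In the paper, such offspring would presumably also be scored for plausibility (e.g., by re-projection alignment with the clothing mask) before being kept; that selection step is omitted here.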
Related papers
- Co-Evolution of Pose and Mesh for 3D Human Body Estimation from Video [23.93644678238666]
We propose a Pose and Mesh Co-Evolution network (PMCE) to recover 3D human motion from a video.
The proposed PMCE outperforms previous state-of-the-art methods in terms of both per-frame accuracy and temporal consistency.
arXiv Detail & Related papers (2023-08-20T16:03:21Z)
- ISP: Multi-Layered Garment Draping with Implicit Sewing Patterns [57.176642106425895]
We introduce a garment representation model that addresses limitations of current approaches.
It is faster and yields higher quality reconstructions than purely implicit surface representations.
It supports rapid editing of garment shapes and texture by modifying individual 2D panels.
arXiv Detail & Related papers (2023-05-23T14:23:48Z)
- Accurate 3D Body Shape Regression using Metric and Semantic Attributes [55.58629009876271]
This is the first demonstration that 3D body shape regression from images can be trained from easy-to-obtain anthropometric measurements and linguistic shape attributes.
arXiv Detail & Related papers (2022-06-14T17:54:49Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as attaining ground-truth data is impossible, i.e., images of people wearing different 3D clothing template models at the exact same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching it with prescribed body-to-cloth contact points and fitting the clothing onto the unlabeled silhouette.
arXiv Detail & Related papers (2021-01-29T20:50:34Z)
- HybrIK: A Hybrid Analytical-Neural Inverse Kinematics Solution for 3D Human Pose and Shape Estimation [39.67289969828706]
We propose a novel hybrid inverse kinematics solution (HybrIK) to bridge the gap between body mesh estimation and 3D keypoint estimation.
HybrIK directly transforms accurate 3D joints to relative body-part rotations for 3D body mesh reconstruction.
We show that HybrIK preserves both the accuracy of 3D pose and the realistic body structure of the parametric human model.
arXiv Detail & Related papers (2020-11-30T10:32:30Z)
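HybrIK's central step, turning predicted 3D joint positions into body-part rotations, can be illustrated with a simplified swing-only alignment. The sketch below computes, per bone, the rotation carrying the template bone direction onto the predicted one; the actual method uses a twist-and-swing decomposition in which the twist angle (which bone direction alone cannot determine) is predicted by a network, and composes rotations relative to each parent along the kinematic tree. The function names and dict-based skeleton are illustrative assumptions.

```python
import numpy as np

def swing_rotation(u, v):
    """Rotation matrix taking direction u onto direction v (Rodrigues formula)."""
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    axis = np.cross(u, v)
    s, c = np.linalg.norm(axis), float(np.dot(u, v))
    if s < 1e-8:
        if c > 0:                              # bones already aligned
            return np.eye(3)
        # Anti-parallel: rotate 180 degrees about any axis orthogonal to u.
        w = np.cross(u, [1.0, 0.0, 0.0])
        if np.linalg.norm(w) < 1e-8:           # u was parallel to the x-axis
            w = np.cross(u, [0.0, 1.0, 0.0])
        w /= np.linalg.norm(w)
        return 2.0 * np.outer(w, w) - np.eye(3)
    k = axis / s
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)

def bone_rotations(template, predicted, parent):
    """Per-bone swing rotations mapping a template skeleton onto predicted joints.

    `template` and `predicted` map joint name -> 3D position; `parent` maps
    child joint -> parent joint. Returns global (not parent-relative) swings.
    """
    return {child: swing_rotation(template[child] - template[par],
                                  predicted[child] - predicted[par])
            for child, par in parent.items()}
```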
- Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose [70.23652933572647]
We propose a novel graph convolutional neural network (GraphCNN)-based system that estimates the 3D coordinates of human mesh vertices directly from the 2D human pose.
We show that our Pose2Mesh outperforms the previous 3D human pose and mesh estimation methods on various benchmark datasets.
arXiv Detail & Related papers (2020-08-20T16:01:56Z)
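The graph-convolution building block behind a Pose2Mesh-style system can be sketched in a few lines: per-vertex features are aggregated over the skeleton or mesh graph with a symmetrically normalized adjacency, then linearly transformed (the standard GCN layer). The real Pose2Mesh cascades a PoseNet and a coarse-to-fine MeshNet; the tiny graph, shapes, and names below are illustrative only.

```python
import numpy as np

def normalized_adjacency(edges, n):
    """Symmetrically normalized adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A = np.eye(n)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def graph_conv(X, A_hat, W):
    """One GCN layer: average neighbor features, then a linear map + ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

# Toy usage: regress 3D vertex coordinates from 2D per-vertex features.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # a 4-vertex cycle "mesh"
A_hat = normalized_adjacency(edges, n=4)
X = rng.normal(size=(4, 2))                    # per-vertex 2D features
W1, W2 = rng.normal(size=(2, 16)), rng.normal(size=(16, 3))
H = graph_conv(X, A_hat, W1)                   # hidden features
V = A_hat @ H @ W2                             # final 3D coordinates (no ReLU)
```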
This list is automatically generated from the titles and abstracts of the papers on this site.