Estimation of 3D Body Shape and Clothing Measurements from Frontal- and
Side-view Images
- URL: http://arxiv.org/abs/2205.14347v1
- Date: Sat, 28 May 2022 06:10:41 GMT
- Title: Estimation of 3D Body Shape and Clothing Measurements from Frontal- and
Side-view Images
- Authors: Kundan Sai Prabhu Thota, Sungho Suh, Bo Zhou, Paul Lukowicz
- Abstract summary: Estimation of 3D human body shape and clothing measurements is crucial for virtual try-on and size recommendation in the fashion industry.
Existing works proposed various solutions to these problems but failed to achieve industry adoption because of their complexity and restrictions.
We propose a simple yet effective architecture to estimate both shape and measurements from frontal- and side-view images.
- Score: 8.107762252448195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The estimation of 3D human body shape and clothing measurements is crucial
for virtual try-on and size recommendation problems in the fashion industry but
has always been challenging due to several factors, such as the lack
of publicly available realistic datasets, ambiguity across camera
resolutions, and the ill-defined human shape space. Existing works proposed
various solutions to these problems but have not achieved industry
adoption because of their complexity and restrictions. To address these
challenges, in this paper, we propose a simple yet effective architecture to
estimate both body shape and measurements from frontal- and side-view images. We apply
silhouette segmentation to the two views and train an
auto-encoder network to learn low-dimensional features from the segmented
silhouettes. Then, we adopt a kernel-based regularized regression module to
estimate the body shape and measurements. The experimental results show that
the proposed method provides competitive results on the synthetic dataset,
the NOMO-3d-400-scans Dataset, and RGB images of humans captured with different
cameras.
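The two-stage pipeline described in the abstract can be sketched as follows. This is a toy illustration, not the authors' implementation: random vectors stand in for flattened silhouette masks, PCA stands in for the learned auto-encoder (both map silhouettes to a compact latent code), and the "kernel-based regularized regression module" is instantiated as closed-form kernel ridge regression with an RBF kernel. All shapes and hyperparameters are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical shapes): 200 "silhouette" feature vectors
# (e.g. flattened frontal + side binary masks) and 3 body measurements each.
X = rng.random((200, 64))
true_w = rng.random((64, 3))
Y = X @ true_w + 0.01 * rng.standard_normal((200, 3))

# --- Stage 1: low-dimensional features.
# PCA via SVD stands in for the paper's auto-encoder encoder.
def pca_encode(X, k=8):
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return (X - mu) @ Vt[:k].T, mu, Vt[:k]

Z, mu, comps = pca_encode(X)

# --- Stage 2: kernel-based regularized regression (kernel ridge, RBF kernel)
# from latent codes to body measurements, solved in closed form:
# alpha = (K + lambda*I)^{-1} Y
def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-3
K = rbf_kernel(Z, Z)
alpha = np.linalg.solve(K + lam * np.eye(len(Z)), Y)  # dual coefficients

def predict(X_new):
    """Encode new silhouettes, then regress measurements from the latent code."""
    Z_new = (X_new - mu) @ comps.T
    return rbf_kernel(Z_new, Z) @ alpha

print(predict(X[:5]).shape)  # one 3-vector of measurements per input image pair
```

The closed-form dual solution is what "regularized" buys here: the `lam * np.eye(...)` term keeps the kernel system well-conditioned, which matters when silhouettes from nearby body shapes produce nearly identical latent codes.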
Related papers
- A Simple Strategy for Body Estimation from Partial-View Images [8.05538560322898]
Virtual try-on and product personalization have become increasingly important in modern online shopping, highlighting the need for accurate body measurement estimation.
Previous research has made progress in estimating 3D body shapes from RGB images, but the task is inherently ambiguous: the observed scale of a human subject in an image depends on two unknown factors, capture distance and body dimensions.
We propose a modular and simple height normalization solution, which relocates the subject skeleton to the desired position, normalizing the scale and disentangling the relationship between the two variables.
arXiv Detail & Related papers (2024-04-14T16:55:23Z)
- Towards Robust and Expressive Whole-body Human Pose and Shape Estimation [51.457517178632756]
Whole-body pose and shape estimation aims to jointly predict different behaviors of the entire human body from a monocular image.
Existing methods often exhibit degraded performance under the complexity of in-the-wild scenarios.
We propose a novel framework to enhance the robustness of whole-body pose and shape estimation.
arXiv Detail & Related papers (2023-12-14T08:17:42Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work targets using a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show that our method is more robust than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- 3D Human Pose, Shape and Texture from Low-Resolution Images and Videos [107.36352212367179]
We propose RSC-Net, which consists of a Resolution-aware network, a Self-supervision loss, and a Contrastive learning scheme.
The proposed method is able to learn 3D body pose and shape across different resolutions with one single model.
We extend the RSC-Net to handle low-resolution videos and apply it to reconstruct textured 3D pedestrians from low-resolution input.
arXiv Detail & Related papers (2021-03-11T06:52:12Z)
- SelfPose: 3D Egocentric Pose Estimation from a Headset Mounted Camera [97.0162841635425]
We present a solution to egocentric 3D body pose estimation from monocular images captured by downward-looking fish-eye cameras installed on the rim of a head-mounted VR device.
This unusual viewpoint leads to images with unique visual appearance, with severe self-occlusions and perspective distortions.
We propose an encoder-decoder architecture with a novel multi-branch decoder designed to account for the varying uncertainty in 2D predictions.
arXiv Detail & Related papers (2020-11-02T16:18:06Z)
- Synthetic Training for Accurate 3D Human Pose and Shape Estimation in the Wild [27.14060158187953]
This paper addresses the problem of monocular 3D human shape and pose estimation from an RGB image.
We propose STRAPS, a system that uses proxy representations, such as silhouettes and 2D joints, as inputs to a shape and pose regression neural network.
We show that STRAPS outperforms other state-of-the-art methods on SSP-3D in terms of shape prediction accuracy.
arXiv Detail & Related papers (2020-09-21T16:39:04Z)
- 3D Human Shape and Pose from a Single Low-Resolution Image with Self-Supervised Learning [105.49950571267715]
Existing deep learning methods for 3D human shape and pose estimation rely on relatively high-resolution input images.
We propose RSC-Net, which consists of a Resolution-aware network, a Self-supervision loss, and a Contrastive learning scheme.
We show that both these new training losses provide robustness when learning 3D shape and pose in a weakly-supervised manner.
arXiv Detail & Related papers (2020-07-27T16:19:52Z)
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.