3D Human Body Reshaping with Anthropometric Modeling
- URL: http://arxiv.org/abs/2104.01762v1
- Date: Mon, 5 Apr 2021 04:09:39 GMT
- Title: 3D Human Body Reshaping with Anthropometric Modeling
- Authors: Yanhong Zeng, Jianlong Fu, Hongyang Chao
- Abstract summary: Reshaping accurate and realistic 3D human bodies from anthropometric parameters poses a fundamental challenge for person identification, online shopping and virtual reality.
Existing approaches for creating such 3D shapes often require complex measurement with range cameras or high-end scanners.
This paper proposes a novel feature-selection-based local mapping technique, which enables automatic anthropometric parameter modeling for each body facet.
- Score: 59.51820187982793
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Reshaping accurate and realistic 3D human bodies from anthropometric
parameters (e.g., height, chest size) poses a fundamental challenge for person
identification, online shopping and virtual reality. Existing approaches for
creating such 3D shapes often depend on complex measurement with range cameras
or high-end scanners, which are either expensive or produce low-quality results.
Such equipment also limits existing approaches in real applications, because it
is not easily accessible to common users. In this paper, we design a 3D human
body reshaping system based on a novel feature-selection-based local mapping
technique, which enables automatic anthropometric parameter modeling for each
body facet. The proposed approach can take a limited set of anthropometric
parameters (i.e., 3-5 measurements) as input, which avoids complex measurement
and thus offers a more user-friendly experience in real scenarios.
Specifically, the proposed reshaping model consists of three steps. First, we
calculate full-body anthropometric parameters from the limited user inputs with
an imputation technique, so that the essential anthropometric parameters for 3D
body reshaping are obtained. Second, we select the most relevant anthropometric
parameters for each facet by adopting relevance masks, which are learned
offline by the proposed local mapping technique. Third, we generate the 3D body
meshes through mapping matrices, which are learned by linear regression from
the selected parameters to a mesh-based body representation. We conduct
experiments with anthropometric evaluation and a user study of 68 volunteers.
The experiments show that the proposed system achieves lower mean
reconstruction error than state-of-the-art approaches.
Related papers
- Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation [32.30055363306321]
We propose a paradigm for seamlessly unifying different human pose and shape-related tasks and datasets.
Our formulation is centered on the ability - both at training and test time - to query any arbitrary point of the human volume.
We can naturally exploit differently annotated data sources including mesh, 2D/3D skeleton and dense pose, without having to convert between them.
arXiv Detail & Related papers (2024-07-10T10:44:18Z) - Neural Capture of Animatable 3D Human from Monocular Video [38.974181971541846]
We present a novel paradigm of building an animatable 3D human representation from a monocular video input, such that it can be rendered in any unseen poses and views.
Our method is based on a dynamic Neural Radiance Field (NeRF) rigged by a mesh-based parametric 3D human model serving as a geometry proxy.
arXiv Detail & Related papers (2022-08-18T09:20:48Z) - Adversarial Parametric Pose Prior [106.12437086990853]
We learn a prior that restricts the SMPL parameters to values that produce realistic poses via adversarial training.
We show that our learned prior covers the diversity of the real-data distribution, facilitates optimization for 3D reconstruction from 2D keypoints, and yields better pose estimates when used for regression from images.
arXiv Detail & Related papers (2021-12-08T10:05:32Z) - LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human
Bodies [78.17425779503047]
We propose a novel neural implicit representation for the human body.
It is fully differentiable and optimizable with disentangled shape and pose latent spaces.
Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
arXiv Detail & Related papers (2021-11-30T04:10:57Z) - A Neural Anthropometer Learning from Body Dimensions Computed on Human
3D Meshes [0.0]
We present a method to calculate right and left arm length, shoulder width, and inseam (crotch height) from 3D meshes, with a focus on potential medical, virtual try-on and distance tailoring applications.
In addition, we use four body dimensions calculated with recently published methods to assemble a set of eight body dimensions, which serve as the supervision signal for our Neural Anthropometer: a convolutional neural network capable of estimating these dimensions.
arXiv Detail & Related papers (2021-10-06T12:56:05Z) - 3D Multi-bodies: Fitting Sets of Plausible 3D Human Models to Ambiguous
Image Data [77.57798334776353]
We consider the problem of obtaining dense 3D reconstructions of humans from single and partially occluded views.
We suggest that ambiguities can be modelled more effectively by parametrizing the possible body shapes and poses.
We show that our method outperforms alternative approaches in ambiguous pose recovery on standard benchmarks for 3D humans.
arXiv Detail & Related papers (2020-11-02T13:55:31Z) - Combining Implicit Function Learning and Parametric Models for 3D Human
Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such features are essential in building flexible models for both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z) - PaMIR: Parametric Model-Conditioned Implicit Representation for
Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.