A Neural Anthropometer Learning from Body Dimensions Computed on Human
3D Meshes
- URL: http://arxiv.org/abs/2110.04064v1
- Date: Wed, 6 Oct 2021 12:56:05 GMT
- Title: A Neural Anthropometer Learning from Body Dimensions Computed on Human
3D Meshes
- Authors: Yansel González Tejeda and Helmut A. Mayer
- Abstract summary: We present a method to calculate right and left arm length, shoulder width, and inseam (crotch height) from 3D meshes, with a focus on potential medical, virtual try-on and distance tailoring applications.
On the other hand, we use four additional body dimensions calculated using recently published methods to assemble a set of eight body dimensions which we use as a supervision signal to our Neural Anthropometer: a convolutional neural network capable of estimating these dimensions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human shape estimation has become increasingly important both theoretically
and practically, for instance, in 3D mesh estimation, distance garment
production and computational forensics, to mention just a few examples. As a
further specialization, \emph{Human Body Dimensions Estimation} (HBDE) focuses
on estimating human body measurements like shoulder width or chest
circumference from images or 3D meshes usually using supervised learning
approaches. The main obstacle in this context is the data scarcity problem, as
collecting this ground truth requires expensive and difficult procedures. This
obstacle can be overcome by obtaining realistic human measurements from 3D
human meshes. However, a) there are no well established methods to calculate
HBDs from 3D meshes and b) there are no benchmarks to fairly compare results on
the HBDE task. Our contribution is twofold. On the one hand, we present a
method to calculate right and left arm length, shoulder width, and inseam
(crotch height) from 3D meshes, with a focus on potential medical, virtual try-on
and distance tailoring applications. On the other hand, we use four additional
body dimensions calculated using recently published methods to assemble a set
of eight body dimensions which we use as a supervision signal to our Neural
Anthropometer: a convolutional neural network capable of estimating these
dimensions. To assess the estimation, we train the Neural Anthropometer with
synthetic images of 3D meshes, from which we calculated the HBDs and observed
that the network's overall mean estimate error is $20.89$ mm (relative error of
2.84\%). The results we present are fully reproducible and establish a fair
baseline for research on the HBDE task, thereby providing the community with a
valuable method.
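The core idea of computing body dimensions directly on a mesh can be sketched as below. This is an illustrative sketch only, not the authors' implementation: the landmark indices and the toy mesh are hypothetical, and real indices depend on the mesh topology (e.g. SMPL) used in the paper.

```python
import numpy as np

# Hypothetical landmark vertex indices for illustration.
LEFT_SHOULDER, RIGHT_SHOULDER = 0, 1

def euclidean_dimension(vertices, i, j):
    """Straight-line dimension between two landmark vertices
    (same unit as the mesh, e.g. millimetres)."""
    return float(np.linalg.norm(vertices[i] - vertices[j]))

def polyline_dimension(vertices, path):
    """Dimension measured along a chain of vertices,
    e.g. an arm traced shoulder -> elbow -> wrist."""
    pts = vertices[list(path)]
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())

# Toy mesh: shoulders 400 mm apart, an "arm" bent at the elbow.
verts = np.zeros((5, 3))
verts[RIGHT_SHOULDER] = [400.0, 0.0, 0.0]
verts[2] = [400.0, -300.0, 0.0]    # elbow
verts[3] = [400.0, -300.0, 250.0]  # wrist
print(euclidean_dimension(verts, LEFT_SHOULDER, RIGHT_SHOULDER))  # 400.0
print(polyline_dimension(verts, [RIGHT_SHOULDER, 2, 3]))          # 550.0
```

Dimensions computed this way on many meshes can then serve as the supervision signal for a network such as the Neural Anthropometer described above.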
Related papers
- Neural Localizer Fields for Continuous 3D Human Pose and Shape Estimation [32.30055363306321]
We propose a paradigm for seamlessly unifying different human pose and shape-related tasks and datasets.
Our formulation is centered on the ability - both at training and test time - to query any arbitrary point of the human volume.
We can naturally exploit differently annotated data sources including mesh, 2D/3D skeleton and dense pose, without having to convert between them.
arXiv Detail & Related papers (2024-07-10T10:44:18Z)
- Binarized 3D Whole-body Human Mesh Recovery [104.13364878565737]
We propose a Binarized Dual Residual Network (BiDRN) to estimate the 3D human body, face, and hands parameters efficiently.
BiDRN achieves comparable performance with full-precision method Hand4Whole while using just 22.1% parameters and 14.8% operations.
arXiv Detail & Related papers (2023-11-24T07:51:50Z)
- Zolly: Zoom Focal Length Correctly for Perspective-Distorted Human Mesh Reconstruction [66.10717041384625]
Zolly is the first 3DHMR method focusing on perspective-distorted images.
We propose a new camera model and a novel 2D representation, termed distortion image, which describes the 2D dense distortion scale of the human body.
We extend two real-world datasets tailored for this task, all containing perspective-distorted human images.
arXiv Detail & Related papers (2023-03-24T04:22:41Z)
- Effect of Gender, Pose and Camera Distance on Human Body Dimensions Estimation [0.0]
Human Body Dimensions Estimation (HBDE) is a task an intelligent agent can perform to determine human body information from images (2D) or from point clouds or meshes (3D).
We train and evaluate the CNN in four scenarios: (1) training with subjects of a specific gender, (2) in a specific pose, (3) sparse camera distance and (4) dense camera distance.
Our experiments not only demonstrate that the network can perform the task successfully, but also reveal a number of relevant facts that contribute to a better understanding of the HBDE task.
arXiv Detail & Related papers (2022-05-24T12:26:25Z)
- PONet: Robust 3D Human Pose Estimation via Learning Orientations Only [116.1502793612437]
We propose a novel Pose Orientation Net (PONet) that is able to robustly estimate 3D pose by learning orientations only.
PONet estimates the 3D orientation of these limbs by taking advantage of the local image evidence to recover the 3D pose.
We evaluate our method on multiple datasets, including Human3.6M, MPII, MPI-INF-3DHP, and 3DPW.
arXiv Detail & Related papers (2021-12-21T12:48:48Z)
- Weakly-supervised Cross-view 3D Human Pose Estimation [16.045255544594625]
We propose a simple yet effective pipeline for weakly-supervised cross-view 3D human pose estimation.
Our method can achieve state-of-the-art performance in a weakly-supervised manner.
We evaluate our method on the standard benchmark dataset, Human3.6M.
arXiv Detail & Related papers (2021-05-23T08:16:25Z)
- 3D Human Body Reshaping with Anthropometric Modeling [59.51820187982793]
Reshaping accurate and realistic 3D human bodies from anthropometric parameters poses a fundamental challenge for person identification, online shopping and virtual reality.
Existing approaches for creating such 3D shapes often suffer from complex measurement by range cameras or high-end scanners.
This paper proposes a novel feature-selection-based local mapping technique, which enables automatic anthropometric parameter modeling for each body facet.
arXiv Detail & Related papers (2021-04-05T04:09:39Z)
- Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present deep neural network methodology to reconstruct the 3d pose and shape of people, given an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z)
- MeTRAbs: Metric-Scale Truncation-Robust Heatmaps for Absolute 3D Human Pose Estimation [16.463390330757132]
We propose metric-scale truncation-robust (MeTRo) volumetric heatmaps, whose dimensions are all defined in metric 3D space, instead of being aligned with image space.
This reinterpretation of heatmap dimensions allows us to directly estimate complete, metric-scale poses without test-time knowledge of distance or relying on anthropometrics, such as bone lengths.
We find that supervision via absolute pose loss is crucial for accurate non-root-relative localization.
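The metric-scale heatmap idea above can be sketched in a few lines: when the heatmap's axes span a fixed metric extent rather than image pixels, a soft-argmax directly yields coordinates in millimetres. The extent value below (2200 mm) is an assumed placeholder; this is a toy illustration of the concept, not the authors' implementation.

```python
import numpy as np

def metric_soft_argmax(heatmap, extent_mm=2200.0):
    """Soft-argmax over a volumetric heatmap whose axes span a fixed
    metric extent, yielding a position in millimetres relative to the
    volume centre."""
    p = heatmap / heatmap.sum()
    axes = np.meshgrid(*[np.arange(n) for n in heatmap.shape], indexing="ij")
    voxel = np.array([(p * a).sum() for a in axes])  # expected voxel index
    sizes = np.array(heatmap.shape, dtype=float)
    # Map voxel range [0, N-1] to metric range [-extent/2, +extent/2].
    return (voxel / (sizes - 1) - 0.5) * extent_mm

# A one-hot peak at the volume centre maps to the metric origin.
h = np.zeros((11, 11, 11))
h[5, 5, 5] = 1.0
print(metric_soft_argmax(h))  # [0. 0. 0.]
```

Because the output is already in metric units, no test-time knowledge of camera distance or bone lengths is needed to recover absolute scale, which is the point the summary above makes.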
arXiv Detail & Related papers (2020-07-12T11:52:09Z)
- HEMlets PoSh: Learning Part-Centric Heatmap Triplets for 3D Human Pose and Shape Estimation [60.35776484235304]
This work attempts to address the uncertainty of lifting the detected 2D joints to 3D space by introducing an intermediate state, Part-Centric Heatmap Triplets (HEMlets).
The HEMlets utilize three joint-heatmaps to represent the relative depth information of the end-joints for each skeletal body part.
A Convolutional Network (ConvNet) is first trained to predict HEMlets from the input image, followed by a volumetric joint-heatmap regression.
arXiv Detail & Related papers (2020-03-10T04:03:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.