Learning Anthropometry from Rendered Humans
- URL: http://arxiv.org/abs/2101.02515v1
- Date: Thu, 7 Jan 2021 12:26:39 GMT
- Title: Learning Anthropometry from Rendered Humans
- Authors: Song Yan and Joni-Kristian Kämäräinen
- Abstract summary: We introduce a new 3D scan dataset of 2,675 female and 1,474 male scans.
We also introduce a small dataset of 200 RGB images and tape measured ground truth.
With the help of the two new datasets we propose a part-based shape model and a deep neural network for estimating anthropometric measurements from 2D images.
- Score: 6.939794498223168
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate estimation of anthropometric body measurements from RGB images has
many potential applications in industrial design, online clothing, medical
diagnosis and ergonomics. Research on this topic is limited by the fact that
the only existing datasets are generated ones, produced by fitting a 3D body
mesh to the 3D body scans of the commercial CAESAR dataset; for 2D, only
silhouettes are generated. To circumvent the data bottleneck, we introduce a new 3D scan
dataset of 2,675 female and 1,474 male scans. We also introduce a small dataset
of 200 RGB images and tape measured ground truth. With the help of the two new
datasets we propose a part-based shape model and a deep neural network for
estimating anthropometric measurements from 2D images. All data will be made
publicly available.
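The part-based idea can be illustrated with a toy sketch (this is not the authors' implementation; the part grouping, dimensions, and the linear measurement model below are all illustrative assumptions): each body part gets its own low-dimensional PCA shape space, and anthropometric measurements are regressed from the concatenated per-part shape coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scans": 100 subjects, 3 body parts, 30 vertex coordinates per part.
n_subjects, n_parts, part_dim, n_components = 100, 3, 30, 4
scans = rng.normal(size=(n_subjects, n_parts, part_dim))

def fit_part_pca(X, k):
    """Per-part PCA via SVD: mean shape plus top-k principal directions."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

part_models = [fit_part_pca(scans[:, p], n_components) for p in range(n_parts)]

def encode(scan):
    """Represent a subject as the concatenation of per-part PCA coefficients."""
    coeffs = [(scan[p] - mean) @ basis.T
              for p, (mean, basis) in enumerate(part_models)]
    return np.concatenate(coeffs)

codes = np.array([encode(s) for s in scans])        # shape (100, 12)

# Toy "tape measurements" (e.g. chest, waist, hip) as a linear function
# of the shape code -- a stand-in for real ground-truth annotations.
W_true = rng.normal(size=(codes.shape[1], 3))
measurements = codes @ W_true

# Least-squares regressor from shape code to measurements.
W_hat, *_ = np.linalg.lstsq(codes, measurements, rcond=None)
pred = codes @ W_hat
max_err = float(np.abs(pred - measurements).max())  # near zero here by design
```

In the paper the regressor is a deep network operating on 2D images rather than a linear map on PCA coefficients; the sketch only shows why a part-based decomposition yields a compact shape code that measurements can be predicted from.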
Related papers
- FAMOUS: High-Fidelity Monocular 3D Human Digitization Using View Synthesis [51.193297565630886]
The challenge of accurately inferring texture remains, particularly in obscured areas such as the back of a person in frontal-view images.
This limitation in texture prediction largely stems from the scarcity of large-scale and diverse 3D datasets.
We propose leveraging extensive 2D fashion datasets to enhance both texture and shape prediction in 3D human digitization.
arXiv Detail & Related papers (2024-10-13T01:25:05Z)
- SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes [62.82552328188602]
We present SCULPT, a novel 3D generative model for clothed and textured 3D meshes of humans.
We devise a deep neural network that learns to represent the geometry and appearance distribution of clothed human bodies.
arXiv Detail & Related papers (2023-08-21T11:23:25Z)
- Accurate 3D Body Shape Regression using Metric and Semantic Attributes [55.58629009876271]
We show that 3D body shape regression from images can be trained from easy-to-obtain anthropometric measurements and linguistic shape attributes.
To our knowledge, this is the first such demonstration.
arXiv Detail & Related papers (2022-06-14T17:54:49Z)
- SHARP: Shape-Aware Reconstruction of People in Loose Clothing [6.469298908778292]
SHARP (SHape Aware Reconstruction of People in loose clothing) is a novel end-to-end trainable network.
It recovers the 3D geometry and appearance of humans in loose clothing from a monocular image.
We show superior qualitative and quantitative performance compared to existing state-of-the-art methods.
arXiv Detail & Related papers (2022-05-24T10:26:42Z)
- MedMNIST v2: A Large-Scale Lightweight Benchmark for 2D and 3D Biomedical Image Classification [59.10015984688104]
MedMNIST v2 is a large-scale MNIST-like dataset collection of standardized biomedical images.
The resulting dataset consists of 708,069 2D images and 10,214 3D images in total.
arXiv Detail & Related papers (2021-10-27T22:02:04Z)
- A Neural Anthropometer Learning from Body Dimensions Computed on Human 3D Meshes [0.0]
We present a method to calculate right and left arm length, shoulder width, and inseam (crotch height) from 3D meshes with focus on potential medical, virtual try-on and distance tailoring applications.
In addition, we use four body dimensions calculated using recently published methods to assemble a set of eight body dimensions, which we use as a supervision signal for our Neural Anthropometer: a convolutional neural network capable of estimating these dimensions.
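Computing such body dimensions from a mesh reduces to simple geometry once landmark vertices are known. A minimal sketch (the landmark coordinates and indices below are hypothetical, not the ones used by the paper): widths are Euclidean distances between landmark pairs, limb lengths are polyline lengths along landmark chains, and inseam is the height of the crotch landmark above the ground plane.

```python
import numpy as np

# Toy mesh landmarks in meters; indices and positions are hypothetical.
vertices = np.array([
    [-0.20, 1.45, 0.0],   # 0: left shoulder joint
    [ 0.20, 1.45, 0.0],   # 1: right shoulder joint
    [-0.25, 1.20, 0.0],   # 2: left elbow
    [-0.27, 0.95, 0.0],   # 3: left wrist
    [ 0.00, 0.80, 0.0],   # 4: crotch
])

def polyline_length(points):
    """Sum of segment lengths along a chain of landmarks."""
    return float(np.linalg.norm(np.diff(points, axis=0), axis=1).sum())

# Shoulder width: straight-line distance between the shoulder joints.
shoulder_width = float(np.linalg.norm(vertices[0] - vertices[1]))

# Left arm length: chain shoulder -> elbow -> wrist.
left_arm_length = polyline_length(vertices[[0, 2, 3]])

# Inseam (crotch height): y-coordinate above the ground plane at y = 0.
inseam = float(vertices[4][1])
```

Real meshes would measure along the surface (geodesics or plane-mesh intersection contours for circumferences) rather than between isolated landmarks; the sketch only shows the supervision-signal idea.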
arXiv Detail & Related papers (2021-10-06T12:56:05Z)
- SHARP: Shape-Aware Reconstruction of People In Loose Clothing [6.796748304066826]
3D human body reconstruction from monocular images is an interesting and ill-posed problem in computer vision.
We propose SHARP, a novel end-to-end trainable network that accurately recovers the detailed geometry and appearance of 3D people in loose clothing from a monocular image.
We evaluate SHARP on publicly available Cloth3D and THuman datasets and report superior performance to state-of-the-art approaches.
arXiv Detail & Related papers (2021-06-09T02:54:53Z)
- Neural 3D Clothes Retargeting from a Single Image [91.5030622330039]
We present a method of clothes retargeting: generating the potential poses and deformations of a given 3D clothing template model to fit onto a person in a single RGB image.
The problem is fundamentally ill-posed, as attaining the ground-truth data is impossible, i.e., images of people wearing the different 3D clothing template models at the exact same pose.
We propose a semi-supervised learning framework that validates the physical plausibility of the 3D deformation by matching it with the prescribed body-to-cloth contact points and fitting the clothing onto the unlabeled silhouette.
arXiv Detail & Related papers (2021-01-29T20:50:34Z)
- Cascaded deep monocular 3D human pose estimation with evolutionary training data [76.3478675752847]
Deep representation learning has achieved remarkable accuracy for monocular 3D human pose estimation.
This paper proposes a novel data augmentation method that is scalable to massive amounts of training data.
Our method synthesizes unseen 3D human skeletons based on a hierarchical human representation and heuristics inspired by prior knowledge.
arXiv Detail & Related papers (2020-06-14T03:09:52Z)
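The hierarchical-synthesis idea from the entry above can be sketched as recombination: represent each training skeleton as per-part groups of local bone vectors, then synthesize unseen skeletons by sampling each part from a random donor. This is an illustrative assumption about the mechanism, not the paper's exact operators (which also include evolutionary mutation and plausibility checks).

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy training "skeletons": local bone vectors grouped by body part.
# The part grouping and bone counts are illustrative.
parts = ["torso", "left_arm", "right_arm", "left_leg", "right_leg"]
skeletons = [
    {p: rng.normal(size=(3, 3)) for p in parts}   # 3 bones per part, 3D offsets
    for _ in range(2)
]

def recombine(pool, rng):
    """Synthesize an unseen skeleton: each part comes from a random donor."""
    return {p: pool[rng.integers(len(pool))][p].copy() for p in parts}

new_skeleton = recombine(skeletons, rng)

# Each part of the synthesized skeleton matches one of the donors exactly,
# but the whole skeleton is (with high probability) a new combination.
assert all(
    any(np.allclose(new_skeleton[p], s[p]) for s in skeletons) for p in parts
)
```

A real pipeline would additionally reject anatomically implausible combinations (e.g. via bone-length and joint-angle limits) before adding the synthesized skeletons to the training set.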
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.