Learning Body-Aware 3D Shape Generative Models
- URL: http://arxiv.org/abs/2112.07022v1
- Date: Mon, 13 Dec 2021 21:19:55 GMT
- Title: Learning Body-Aware 3D Shape Generative Models
- Authors: Bryce Blinn, Alexander Ding, Daniel Ritchie, R. Kenny Jones, Srinath
Sridhar, Manolis Savva
- Abstract summary: Existing data-driven generative models of 3D shapes produce plausible objects but do not reason about their relationship to the human body.
In this paper, we learn body-aware generative models of 3D shapes.
- Score: 72.82563334734014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The shape of many objects in the built environment is dictated by their
relationships to the human body: how will a person interact with this object?
Existing data-driven generative models of 3D shapes produce plausible objects
but do not reason about the relationship of those objects to the human body. In
this paper, we learn body-aware generative models of 3D shapes. Specifically,
we train generative models of chairs, a ubiquitous shape category, which can
be conditioned on a given body shape or sitting pose. The
body-shape-conditioned models produce chairs which will be comfortable for a
person with the given body shape; the pose-conditioned models produce chairs
which accommodate the given sitting pose. To train these models, we define a
"sitting pose matching" metric and a novel "sitting comfort" metric.
Calculating these metrics requires an expensive optimization to sit the body
into the chair, which is too slow to be used as a loss function for training a
generative model. Thus, we train neural networks to efficiently approximate
these metrics. We use our approach to train three body-aware generative shape
models: a structured part-based generator, a point cloud generator, and an
implicit surface generator. In all cases, our approach produces models which
adapt their output chair shapes to input human body specifications.
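A minimal sketch of the surrogate-metric idea described above, assuming PyTorch: comfort scores are precomputed offline by the slow optimization-based metric for (chair, body) pairs, a small network is regressed onto those scores, and the frozen network is then reused as a differentiable comfort term in a generator's loss. All names here (ComfortSurrogate, train_surrogate, generator_loss, the fixed-size chair/body codes) are illustrative placeholders, not the authors' code.

```python
# Sketch only: a fast neural surrogate for an expensive "sitting comfort" metric,
# reused as a differentiable loss term. Hypothetical names and shapes throughout.
import torch
import torch.nn as nn

class ComfortSurrogate(nn.Module):
    """Predicts a comfort score for a (chair, body) pair from fixed-size codes."""
    def __init__(self, chair_dim=256, body_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(chair_dim + body_dim, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, chair_code, body_code):
        return self.net(torch.cat([chair_code, body_code], dim=-1)).squeeze(-1)

def train_surrogate(surrogate, dataset, epochs=10, lr=1e-4):
    """Regress the surrogate onto scores precomputed by the slow, optimization-based metric."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for chair_code, body_code, true_score in dataset:
            pred = surrogate(chair_code, body_code)
            loss = nn.functional.mse_loss(pred, true_score)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate

def generator_loss(chair_code, body_code, surrogate, recon_loss, w_comfort=1.0):
    """Combine an ordinary reconstruction loss with the frozen surrogate as a comfort term."""
    for p in surrogate.parameters():
        p.requires_grad_(False)  # keep the surrogate fixed while the generator trains
    comfort = surrogate(chair_code, body_code)
    return recon_loss - w_comfort * comfort.mean()  # higher predicted comfort lowers the loss
```

Freezing the surrogate while training the generator keeps the comfort signal stable instead of letting the generator and metric network co-adapt.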
Related papers
- Accurate 3D Body Shape Regression using Metric and Semantic Attributes [55.58629009876271] (2022-06-14)
  This is the first demonstration that 3D body shape regression from images can be trained from easy-to-obtain anthropometric measurements and linguistic shape attributes.
- COAP: Compositional Articulated Occupancy of People [28.234772596912162] (2022-04-13)
  We present a novel neural implicit representation for articulated human bodies, employing a part-aware encoder-decoder architecture to learn neural articulated occupancy. Our method largely outperforms existing solutions in terms of both efficiency and accuracy.
- LatentHuman: Shape-and-Pose Disentangled Latent Representation for Human Bodies [78.17425779503047] (2021-11-30)
  We propose a novel neural implicit representation for the human body that is fully differentiable and optimizable, with disentangled shape and pose latent spaces. Our model can be trained and fine-tuned directly on non-watertight raw data with well-designed losses.
- GRAB: A Dataset of Whole-Body Human Grasping of Objects [53.00728704389501] (2020-08-25)
  Training computers to understand human grasping requires a rich dataset containing complex 3D object shapes, detailed contact information, hand pose and shape, and the 3D body motion over time. We collect a new dataset, called GRAB, of whole-body grasps, containing full 3D shape and pose sequences of 10 subjects interacting with 51 everyday objects of varying shape and size. This dataset goes well beyond existing ones for modeling and understanding how humans grasp and manipulate objects, how their full body is involved, and how interaction varies with the task.
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611] (2020-07-22)
  Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces, and such representations are essential for building flexible models in both computer graphics and computer vision. We present a methodology that combines detail-rich implicit functions and parametric representations.
- Unsupervised Shape and Pose Disentanglement for 3D Meshes [49.431680543840706] (2020-07-22)
  We present a simple yet effective approach to learning disentangled shape and pose representations in an unsupervised setting, using a combination of self-consistency and cross-consistency constraints to learn pose and shape spaces from registered meshes. We demonstrate the usefulness of the learned representations through tasks including pose transfer and shape retrieval.