Interactive Sketching of Mannequin Poses
- URL: http://arxiv.org/abs/2212.07098v1
- Date: Wed, 14 Dec 2022 08:45:51 GMT
- Title: Interactive Sketching of Mannequin Poses
- Authors: Gizem Unlu, Mohamed Sayed, Gabriel Brostow
- Abstract summary: 3D body poses are necessary for various downstream applications.
We propose a machine-learning model for inferring the 3D pose of a CG mannequin from sketches of humans drawn in a cylinder-person style.
Our unique approach to vector graphics training data underpins our integrated ML-and-kinematics system.
- Score: 3.222802562733787
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: It can be easy and even fun to sketch humans in different poses. In contrast,
creating those same poses on a 3D graphics "mannequin" is comparatively
tedious. Yet 3D body poses are necessary for various downstream applications.
We seek to preserve the convenience of 2D sketching while giving users of
different skill levels the flexibility to accurately and more quickly
pose/refine a 3D mannequin.
At the core of the interactive system, we propose a machine-learning model
for inferring the 3D pose of a CG mannequin from sketches of humans drawn in a
cylinder-person style. Training such a model is challenging because of artist
variability, a lack of sketch training data with corresponding ground truth 3D
poses, and the high dimensionality of human pose-space. Our unique approach to
synthesizing vector graphics training data underpins our integrated
ML-and-kinematics system. We validate the system by tightly coupling it with a
user interface, and by performing a user study, in addition to quantitative
comparisons.
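The abstract describes the approach only at a high level: synthesize training pairs of sketch-like 2D data and 3D poses, then learn to regress a mannequin pose from a new sketch. The toy Python sketch below illustrates that general idea; the feature construction, joint count, and MLP regressor are purely illustrative assumptions, not the authors' architecture or data pipeline.

```python
# Illustrative sketch only: maps fake 2D "cylinder-person" sketch features
# to 3D joint angles with a plain MLP regressor. All names, shapes, and the
# feature construction are assumptions for demonstration, not the paper's
# actual model or synthetic vector-graphics pipeline.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

N_JOINTS = 15          # hypothetical mannequin joint count
N_SAMPLES = 2000       # number of synthetic training sketches

def synth_pose():
    """Sample a random 3D pose as per-joint Euler angles (radians)."""
    return rng.uniform(-np.pi / 2, np.pi / 2, size=(N_JOINTS, 3))

def sketch_features(pose):
    """Stand-in for 'vector sketch' features: 2D projections of joint directions.

    A real system would synthesize cylinder-person strokes and model artist
    variability; here we just take x/y components of unit vectors derived
    from the joint angles and add noise.
    """
    dirs = np.stack([np.cos(pose[:, 0]) * np.cos(pose[:, 1]),
                     np.sin(pose[:, 0]) * np.cos(pose[:, 1])], axis=1)
    return (dirs + rng.normal(0, 0.05, dirs.shape)).ravel()

poses = np.stack([synth_pose() for _ in range(N_SAMPLES)])
X = np.stack([sketch_features(p) for p in poses])   # 2D sketch features
y = poses.reshape(N_SAMPLES, -1)                     # flattened 3D poses

model = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=300)
model.fit(X, y)

# Inference: features of a new sketch -> predicted 3D mannequin pose.
pred = model.predict(sketch_features(synth_pose())[None, :])
print(pred.reshape(N_JOINTS, 3)[:3])   # first three joints' Euler angles
```

In the actual system, the raw prediction would not be used directly: per the abstract, the learned model is integrated with a kinematics component and tightly coupled to a user interface for interactive refinement.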
Related papers
- SketchBodyNet: A Sketch-Driven Multi-faceted Decoder Network for 3D Human Reconstruction [18.443079472919635]
We propose a sketch-driven multi-faceted decoder network termed SketchBodyNet to address this task.
Our network achieves superior performance in reconstructing 3D human meshes from freehand sketches.
arXiv Detail & Related papers (2023-10-10T12:38:34Z)
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling [69.28254439393298]
SketchMetaFace is a sketching system targeting amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM), which fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
arXiv Detail & Related papers (2023-07-03T07:41:07Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- Embodied Hands: Modeling and Capturing Hands and Bodies Together [61.32931890166915]
Humans move their hands and bodies together to communicate and solve tasks.
Most methods treat the 3D modeling and tracking of bodies and hands separately.
We formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences.
arXiv Detail & Related papers (2022-01-07T18:59:32Z)
- Human Performance Capture from Monocular Video in the Wild [50.34917313325813]
We propose a method capable of capturing the dynamic 3D human shape from a monocular video featuring challenging body poses.
Our method outperforms state-of-the-art methods on 3DPW, an in-the-wild human video dataset.
arXiv Detail & Related papers (2021-11-29T16:32:41Z)
- SCANimate: Weakly Supervised Learning of Skinned Clothed Avatar Networks [54.94737477860082]
We present an end-to-end trainable framework that takes raw 3D scans of a clothed human and turns them into an animatable avatar.
SCANimate does not rely on a customized mesh template or surface mesh registration.
Our method can be applied to pose-aware appearance modeling to generate a fully textured avatar.
arXiv Detail & Related papers (2021-04-07T17:59:58Z)
- Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image Synthesis [72.34794624243281]
We propose a self-supervised learning framework to disentangle variations from unlabeled video frames.
Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, allows us to operate on videos with diverse camera movements.
arXiv Detail & Related papers (2020-04-09T07:55:01Z)