ARCH: Animatable Reconstruction of Clothed Humans
- URL: http://arxiv.org/abs/2004.04572v2
- Date: Fri, 10 Apr 2020 19:14:39 GMT
- Title: ARCH: Animatable Reconstruction of Clothed Humans
- Authors: Zeng Huang, Yuanlu Xu, Christoph Lassner, Hao Li, Tony Tung
- Abstract summary: ARCH (Animatable Reconstruction of Clothed Humans) is an end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image.
ARCH is a learned pose-aware model that produces detailed 3D rigged full-body human avatars from a single unconstrained RGB image.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose ARCH (Animatable Reconstruction of Clothed Humans),
a novel end-to-end framework for accurate reconstruction of animation-ready 3D
clothed humans from a monocular image. Existing approaches to digitize 3D
humans struggle to handle pose variations and recover details. Also, they do
not produce models that are animation ready. In contrast, ARCH is a learned
pose-aware model that produces detailed 3D rigged full-body human avatars from
a single unconstrained RGB image. A Semantic Space and a Semantic Deformation
Field are created using a parametric 3D body estimator. They allow the
transformation of 2D/3D clothed humans into a canonical space, reducing
ambiguities in geometry caused by pose variations and occlusions in training
data. Detailed surface geometry and appearance are learned using an implicit
function representation with spatial local features. Furthermore, we propose
additional per-pixel supervision on the 3D reconstruction using opacity-aware
differentiable rendering. Our experiments indicate that ARCH increases the
fidelity of the reconstructed humans. We obtain more than 50% lower
reconstruction errors for standard metrics compared to state-of-the-art methods
on public datasets. We also show numerous qualitative examples of animated,
high-quality reconstructed avatars unseen in the literature so far.
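The Semantic Deformation Field is only named at the level of the abstract. As a rough illustration of the idea, the sketch below warps posed-space surface points into a canonical space by inverting a linear-blend-skinning transform built from a parametric body estimate. This is a minimal sketch assuming SMPL-like joint transforms and per-point skinning weights; `warp_to_canonical` and its signature are illustrative, not ARCH's actual code.

```python
import numpy as np

def warp_to_canonical(points, joint_transforms, skin_weights):
    """Warp posed-space points into a canonical (rest) pose.

    points:           (N, 3) points on the posed body surface.
    joint_transforms: (J, 4, 4) rigid transforms taking canonical joints
                      to their posed positions (from a body estimator).
    skin_weights:     (N, J) per-point skinning weights; rows sum to 1.
    """
    # Blend the per-joint transforms for each point (linear blend skinning),
    # then invert the blended transform to map posed -> canonical.
    blended = np.einsum('nj,jab->nab', skin_weights, joint_transforms)
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    canonical = np.einsum('nab,nb->na', np.linalg.inv(blended), homo)
    return canonical[:, :3]

# Toy check: a single joint at identity leaves points unchanged.
pts = np.array([[0.1, 0.2, 0.3]])
print(warp_to_canonical(pts, np.eye(4)[None], np.ones((1, 1))))
```

Unposing the data this way lets the implicit surface be learned in a pose-normalized frame, which is how the abstract's pose-induced geometry ambiguities are reduced.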
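The abstract's "implicit function representation with spatial local features" is in the spirit of pixel-aligned implicit functions: each 3D query point is projected into the image, a local feature is sampled there, and an MLP maps the feature plus a spatial cue to an occupancy value. A minimal sketch under those assumptions (the projection, nearest-pixel sampling, and `mlp` head are stand-ins, not the paper's architecture):

```python
import numpy as np

def query_occupancy(points, project, feature_map, mlp):
    """Evaluate an implicit occupancy field with pixel-aligned local features.

    points:      (N, 3) query points.
    project:     callable mapping (N, 3) points to (N, 2) image coords in [0, 1].
    feature_map: (H, W, C) feature map from a 2D image encoder.
    mlp:         callable mapping (N, C + 1) inputs to (N,) occupancy in [0, 1].
    """
    uv = project(points)                              # 3D point -> image plane
    h, w, _ = feature_map.shape
    ix = np.clip((uv[:, 0] * (w - 1)).astype(int), 0, w - 1)
    iy = np.clip((uv[:, 1] * (h - 1)).astype(int), 0, h - 1)
    local = feature_map[iy, ix]                       # nearest-pixel local feature
    depth = points[:, 2:3]                            # spatial cue alongside the feature
    return mlp(np.concatenate([local, depth], axis=1))

# Toy usage with stand-in projection and occupancy head.
feats = np.random.rand(8, 8, 16)
project = lambda p: np.clip(p[:, :2] * 0.5 + 0.5, 0.0, 1.0)
mlp = lambda x: 1.0 / (1.0 + np.exp(-x.sum(axis=1)))
print(query_occupancy(np.random.randn(4, 3), project, feats, mlp))
```

Conditioning on local rather than global image features is what lets this family of methods preserve fine surface detail such as clothing wrinkles.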
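Similarly, "opacity-aware differentiable rendering" is only named in the abstract; one common realization is front-to-back alpha compositing of per-sample opacities along each camera ray, which is differentiable in the predicted occupancies and therefore usable for per-pixel supervision. A minimal sketch of the compositing step only (treating each sample's occupancy directly as its alpha is an assumption):

```python
import numpy as np

def composite_ray(alphas):
    """Front-to-back alpha compositing of per-sample opacities on one ray."""
    # Transmittance: probability the ray reaches each sample unoccluded.
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = transmittance * alphas       # per-sample contribution
    return weights.sum(), weights          # pixel opacity, sample weights

# A mostly-opaque sample early along the ray dominates the pixel's opacity.
opacity, w = composite_ray(np.array([0.1, 0.9, 0.5]))
print(opacity)  # ~0.955
```

The resulting pixel opacity can then be compared against a ground-truth silhouette, giving the per-pixel supervision signal the abstract describes.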
Related papers
- SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion
SiTH is a novel pipeline that integrates an image-conditioned diffusion model into a 3D mesh reconstruction workflow.
We employ a powerful generative diffusion model to hallucinate unseen back-view appearance based on the input images.
We then leverage skinned body meshes as guidance to recover full-body textured meshes from the input and back-view images.
arXiv Detail & Related papers (2023-11-27T14:22:07Z)
- AvatarGen: A 3D Generative Model for Animatable Human Avatars
AvatarGen enables unsupervised generation of 3D-aware clothed humans with various appearances and controllable geometries.
Our method can generate animatable 3D human avatars with high-quality appearance and geometry modeling.
It supports many applications, e.g., single-view reconstruction, re-animation, and text-guided synthesis/editing.
arXiv Detail & Related papers (2022-11-26T15:15:45Z)
- Realistic, Animatable Human Reconstructions for Virtual Fit-On
We present an end-to-end virtual try-on pipeline that can fit different clothes on a personalized 3D human model.
Our main idea is to construct an animatable 3D human model and try on different clothes in a 3D virtual environment.
arXiv Detail & Related papers (2022-10-16T13:36:24Z)
- AvatarGen: A 3D Generative Model for Animatable Human Avatars
AvatarGen is the first method that enables not only non-rigid human generation with diverse appearance but also full control over poses and viewpoints.
To model non-rigid dynamics, it introduces a deformation network to learn pose-dependent deformations in the canonical space.
Our method can generate animatable human avatars with high-quality appearance and geometry modeling, significantly outperforming previous 3D GANs.
arXiv Detail & Related papers (2022-08-01T01:27:02Z)
- SHARP: Shape-Aware Reconstruction of People in Loose Clothing
SHARP (SHape Aware Reconstruction of People in loose clothing) is a novel end-to-end trainable network.
It recovers the 3D geometry and appearance of humans in loose clothing from a monocular image.
We show superior qualitative and quantitative performance over existing state-of-the-art methods.
arXiv Detail & Related papers (2022-05-24T10:26:42Z)
- MVP-Human Dataset for 3D Human Avatar Reconstruction from Unconstrained Frames
We present 3D Avatar Reconstruction in the wild (ARwild), which first reconstructs the implicit skinning fields in a multi-level manner.
We contribute a large-scale dataset, MVP-Human, which contains 400 subjects, each of which has 15 scans in different poses.
Overall, benefiting from the specific network architecture and the diverse data, the trained model enables 3D avatar reconstruction from unconstrained frames.
arXiv Detail & Related papers (2022-04-24T03:57:59Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- The Power of Points for Modeling Humans in Clothing
Currently, creating 3D human avatars with realistic clothing that moves naturally requires an artist.
We show that a 3D representation can capture varied topology at high resolution and can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
- ARCH++: Animation-Ready Clothed Human Reconstruction Revisited
We present ARCH++, an image-based method to reconstruct 3D avatars with arbitrary clothing styles.
Our reconstructed avatars are animation-ready and highly realistic, in both the visible regions from input views and the unseen regions.
arXiv Detail & Related papers (2021-08-17T19:27:12Z)