Efficient Meshy Neural Fields for Animatable Human Avatars
- URL: http://arxiv.org/abs/2303.12965v1
- Date: Thu, 23 Mar 2023 00:15:34 GMT
- Title: Efficient Meshy Neural Fields for Animatable Human Avatars
- Authors: Xiaoke Huang, Yiji Cheng, Yansong Tang, Xiu Li, Jie Zhou, Jiwen Lu
- Abstract summary: Efficiently digitizing high-fidelity animatable human avatars from videos is a challenging and active research topic.
Recent volume rendering-based neural representations open a new way for human digitization with their friendly usability and photo-realistic reconstruction quality.
We present EMA, a method that Efficiently learns Meshy neural fields to reconstruct animatable human Avatars.
- Score: 87.68529918184494
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficiently digitizing high-fidelity animatable human avatars from videos is
a challenging and active research topic. Recent volume rendering-based neural
representations open a new way for human digitization with their friendly
usability and photo-realistic reconstruction quality. However, they suffer
from long optimization times and slow inference speed, and their implicit
nature entangles the geometry, materials, and dynamics of humans, making them
hard to edit afterward. Such drawbacks prevent their direct
applicability to downstream applications, especially the prominent
rasterization-based graphic ones. We present EMA, a method that Efficiently
learns Meshy neural fields to reconstruct animatable human Avatars. It jointly
optimizes an explicit triangular canonical mesh, spatially varying materials,
and motion dynamics via inverse rendering in an end-to-end fashion. Each of
these components is derived from a separate neural field, relaxing the
requirement for a template or rigging. The mesh representation is highly
compatible with efficient rasterization-based renderers, so our method takes
only about an hour of training and can render in real time. Moreover, a few
minutes of optimization suffice for plausible reconstruction results. The
disentangled mesh representation enables direct downstream applications.
Extensive experiments demonstrate highly competitive performance and a
significant speed boost over previous methods. We also showcase applications
including novel
pose synthesis, material editing, and relighting. The project page:
https://xk-huang.github.io/ema/.
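As a concrete reading of the abstract, the following PyTorch sketch mirrors the structure it describes: three separate neural fields for canonical geometry, spatially varying material, and motion, whose outputs would feed a rasterization-based differentiable renderer. It is a minimal illustration under assumed names and sizes, not the authors' implementation; the differentiable mesh extraction and the actual renderer are only indicated in comments.

```python
import torch
import torch.nn as nn

def make_mlp(in_dim, out_dim, width=128, depth=4):
    # Small coordinate MLP; real systems add positional encodings or hash grids.
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class MeshyAvatarSketch(nn.Module):
    """Three separate fields, mirroring the abstract's disentanglement:
    geometry (canonical SDF), material (albedo + roughness), and motion
    (per-point skinning weights). Names and sizes are illustrative."""
    def __init__(self, num_joints=24):
        super().__init__()
        self.geometry = make_mlp(3, 1)          # canonical point -> SDF value
        self.material = make_mlp(3, 4)          # canonical point -> RGB albedo + roughness
        self.motion = make_mlp(3, num_joints)   # canonical point -> skinning logits

    def skin(self, verts, joint_transforms):
        # Linear blend skinning with weights predicted by the motion field.
        w = self.motion(verts).softmax(dim=-1)                               # (V, J)
        verts_h = torch.cat([verts, torch.ones_like(verts[:, :1])], dim=-1)  # (V, 4)
        posed = torch.einsum('vj,jab,vb->va', w, joint_transforms, verts_h)
        return posed[:, :3]

    def forward(self, canonical_verts, joint_transforms):
        # In the paper the canonical mesh is itself extracted from the SDF
        # field (differentiably); here candidate vertices are taken as given.
        sdf = self.geometry(canonical_verts)
        posed_verts = self.skin(canonical_verts, joint_transforms)
        mat = self.material(canonical_verts)
        albedo, roughness = mat[:, :3].sigmoid(), mat[:, 3:].sigmoid()
        # A rasterization-based differentiable renderer (nvdiffrast-style)
        # would consume (posed_verts, faces, albedo, roughness) here, and a
        # photometric loss against video frames would drive all three fields.
        return sdf, posed_verts, albedo, roughness

avatar = MeshyAvatarSketch()
verts = torch.rand(1024, 3) * 2 - 1                      # dummy canonical points
joints = torch.eye(4).expand(24, 4, 4).contiguous()      # rest-pose joint transforms
sdf, posed, albedo, roughness = avatar(verts, joints)
print(sdf.shape, posed.shape, albedo.shape, roughness.shape)
```

Keeping the fields separate is what makes the result editable afterward: materials or motion can be modified without touching the geometry field.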
Related papers
- Surfel-based Gaussian Inverse Rendering for Fast and Relightable Dynamic Human Reconstruction from Monocular Video [41.677560631206184]
This paper introduces the Surfel-based Gaussian Inverse Avatar (SGIA) method, which enables efficient training and rendering for relightable dynamic human reconstruction.
SGIA advances previous Gaussian Avatar methods by comprehensively modeling Physically-Based Rendering (PBR) properties for clothed human avatars.
Our approach integrates pre-integration and image-based lighting for fast light calculations that surpass the performance of existing implicit-based techniques.
arXiv Detail & Related papers (2024-07-21T16:34:03Z)
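The SGIA summary above relies on pre-integration and image-based lighting for fast light calculations. As background, the sketch below illustrates the generic pre-integration idea for the diffuse term: the environment map is integrated against the cosine lobe once per normal direction, so shading becomes a lookup rather than a loop over lights. This is a textbook illustration with our own names and a toy environment map, not SGIA's code; a practical system would bake the result into an irradiance texture.

```python
import torch

def sphere_dirs(h, w):
    # Direction and solid angle for each texel of an equirectangular env map.
    theta = (torch.arange(h) + 0.5) / h * torch.pi           # polar angle
    phi = (torch.arange(w) + 0.5) / w * 2 * torch.pi
    t, p = torch.meshgrid(theta, phi, indexing='ij')
    dirs = torch.stack([t.sin() * p.cos(), t.sin() * p.sin(), t.cos()], -1)
    d_omega = (torch.pi / h) * (2 * torch.pi / w) * t.sin()  # solid angles
    return dirs.reshape(-1, 3), d_omega.reshape(-1)

def preintegrate_diffuse(env, normals):
    """Pre-integration idea behind fast image-based lighting: the cosine-
    weighted integral of the environment is computed once per normal
    direction, so shading becomes a lookup instead of a light loop."""
    h, w, _ = env.shape
    dirs, d_omega = sphere_dirs(h, w)                        # (HW,3), (HW,)
    cos = (normals @ dirs.T).clamp(min=0)                    # (N, HW)
    # Lambertian diffuse: integral of L(w) * max(n.w, 0) dw / pi
    return (cos * d_omega) @ env.reshape(-1, 3) / torch.pi   # (N, 3)

env = torch.rand(32, 64, 3)                # toy equirectangular radiance map
normals = torch.nn.functional.normalize(torch.randn(5, 3), dim=-1)
print(preintegrate_diffuse(env, normals))  # per-normal diffuse shading
```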
- Interactive Rendering of Relightable and Animatable Gaussian Avatars [37.73483372890271]
We propose a simple and efficient method to decouple body materials and lighting from multi-view or monocular avatar videos.
Our method can render higher quality results at a faster speed on both synthetic and real datasets.
arXiv Detail & Related papers (2024-07-15T13:25:07Z)
- ASH: Animatable Gaussian Splats for Efficient and Photoreal Human Rendering [62.81677824868519]
We propose an animatable Gaussian splatting approach for photorealistic rendering of dynamic humans in real-time.
We parameterize the clothed human as animatable 3D Gaussians, which can be efficiently splatted into image space to generate the final rendering.
We benchmark ASH with competing methods on pose-controllable avatars, demonstrating that our method outperforms existing real-time methods by a large margin and shows comparable or even better results than offline methods.
arXiv Detail & Related papers (2023-12-10T17:07:37Z)
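As a rough illustration of the animatable-3D-Gaussians idea in the ASH summary above, the sketch below reposes canonical Gaussians with linear blend skinning: means transform as points, and covariances are conjugated by the blended rotation. This is the common recipe in Gaussian-avatar work in general, written with assumed names; it is not ASH's actual API.

```python
import torch

def animate_gaussians(mu, cov, weights, joint_T):
    """Repose canonical 3D Gaussians with linear blend skinning, the usual
    recipe behind animatable Gaussian-splatting avatars. Means are
    transformed as points; covariances are conjugated by the blended
    rotation/scale part of the transform."""
    A = torch.einsum('gj,jab->gab', weights, joint_T)  # (G,4,4) blended transforms
    R, t = A[:, :3, :3], A[:, :3, 3]
    mu_posed = torch.einsum('gab,gb->ga', R, mu) + t
    cov_posed = R @ cov @ R.transpose(1, 2)            # Sigma' = R Sigma R^T
    return mu_posed, cov_posed                         # ready for splatting

G, J = 2048, 24
mu = torch.randn(G, 3)                                 # canonical Gaussian means
cov = torch.eye(3).expand(G, 3, 3) * 0.01              # isotropic toy Gaussians
weights = torch.rand(G, J).softmax(-1)                 # per-Gaussian LBS weights
joint_T = torch.eye(4).expand(J, 4, 4).contiguous()    # rest-pose transforms
mu_p, cov_p = animate_gaussians(mu, cov, weights, joint_T)
print(mu_p.shape, cov_p.shape)
```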
- Animatable 3D Gaussian: Fast and High-Quality Reconstruction of Multiple Human Avatars [18.55354901614876]
We propose Animatable 3D Gaussian, which learns human avatars from input images and poses.
On both novel view synthesis and novel pose synthesis tasks, our method achieves higher reconstruction quality than InstantAvatar with less training time.
Our method can be easily extended to multi-human scenes and achieve comparable novel view synthesis results on a scene with ten people in only 25 seconds of training.
arXiv Detail & Related papers (2023-11-27T08:17:09Z)
- FLARE: Fast Learning of Animatable and Relightable Mesh Avatars [64.48254296523977]
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems.
We introduce FLARE, a technique that enables the creation of animatable and relightable avatars from a single monocular video.
arXiv Detail & Related papers (2023-10-26T16:13:00Z)
- Real-time volumetric rendering of dynamic humans [83.08068677139822]
We present a method for fast 3D reconstruction and real-time rendering of dynamic humans from monocular videos.
Our method can reconstruct a dynamic human in less than 3h using a single GPU, compared to recent state-of-the-art alternatives that take up to 72h.
A novel local ray marching rendering allows visualizing the neural human on a mobile VR device at 40 frames per second with minimal loss of visual quality.
arXiv Detail & Related papers (2023-03-21T14:41:25Z)
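The entry above mentions a local ray marching renderer. For context, the sketch below shows the generic volume-rendering quadrature that ray marching evaluates; the paper's "local" variant restricts samples to a band around the body, which this toy version omits. The function names and the toy density/color fields are ours.

```python
import torch

def ray_march(density_fn, color_fn, origins, dirs, near=0.5, far=3.0, n_steps=64):
    """Generic volume-rendering quadrature: march each ray, accumulate
    per-step opacity and transmittance, and composite the colors."""
    t = torch.linspace(near, far, n_steps)                            # (S,)
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]   # (R,S,3)
    sigma = density_fn(pts)                                           # (R,S)
    rgb = color_fn(pts)                                               # (R,S,3)
    delta = (far - near) / n_steps
    alpha = 1 - torch.exp(-sigma * delta)                             # step opacity
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1 - alpha[:, :-1]], dim=1), dim=1)
    weights = alpha * trans                                           # (R,S)
    return (weights[..., None] * rgb).sum(dim=1)                      # (R,3)

# Toy fields: a soft unit sphere with position-based color.
density = lambda p: torch.relu(1 - p.norm(dim=-1)) * 10
color = lambda p: p.sigmoid()
origins = torch.tensor([[0.0, 0.0, -2.0]]).expand(4, 3)
dirs = torch.nn.functional.normalize(torch.randn(4, 3) * 0.1
                                     + torch.tensor([0.0, 0.0, 1.0]), dim=-1)
print(ray_march(density, color, origins, dirs))                       # (4, 3)
```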
- Learning Neural Volumetric Representations of Dynamic Humans in Minutes [49.10057060558854]
We propose a novel method for learning neural volumetric videos of dynamic humans from sparse-view videos in minutes with competitive visual quality.
Specifically, we define a novel part-based voxelized human representation to better distribute the representational power of the network to different human parts.
Experiments demonstrate that our model can be learned 100 times faster than prior per-scene optimization methods.
arXiv Detail & Related papers (2023-02-23T18:57:01Z)
- Human Performance Modeling and Rendering via Neural Animated Mesh [40.25449482006199]
In this paper, we present a novel approach for rendering human views from video, bridging the traditional mesh with a new class of neural rendering.
We demonstrate our approach on various platforms, inserting virtual human performances into AR headsets.
arXiv Detail & Related papers (2022-09-18T03:58:00Z)
- KiloNeuS: Implicit Neural Representations with Real-Time Global Illumination [1.5749416770494706]
We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates.
KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes.
arXiv Detail & Related papers (2022-06-22T07:33:26Z)
- HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars [65.82222842213577]
We propose a novel neural rendering pipeline, which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality.
First, we learn to encode articulated human motions on a dense UV manifold of the human body surface.
We then leverage the encoded information on the UV manifold to construct a 3D volumetric representation.
arXiv Detail & Related papers (2021-12-19T17:34:15Z)
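The HVTR summary describes a two-step pipeline: encode pose on the body's UV manifold, then sample that encoding to build a volumetric representation. The toy module below mimics that flow with a pose-to-UV-feature network and a grid_sample lookup; the UV coordinates of the 3D query points are assumed given (in practice they come from projecting queries onto the body surface), and everything here is an illustrative stand-in rather than HVTR's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UVMotionEncoderSketch(nn.Module):
    """Toy version of the two steps in the HVTR summary: (1) encode pose
    into a feature map living on the body's UV manifold, (2) sample that
    map at 3D query points to build volumetric features."""
    def __init__(self, pose_dim=72, feat_dim=16, uv_res=64):
        super().__init__()
        self.to_uv = nn.Sequential(                 # pose -> coarse UV feature map
            nn.Linear(pose_dim, feat_dim * 8 * 8), nn.ReLU())
        self.up = nn.Sequential(                    # refine to uv_res x uv_res
            nn.Upsample(size=(uv_res, uv_res), mode='bilinear'),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1))
        self.feat_dim = feat_dim

    def forward(self, pose, query_uv):
        # pose: (B, pose_dim); query_uv: (B, N, 2) in [-1, 1] (assumed known)
        B = pose.shape[0]
        uv_map = self.to_uv(pose).view(B, self.feat_dim, 8, 8)
        uv_map = self.up(uv_map)                               # (B, C, R, R)
        grid = query_uv.unsqueeze(2)                           # (B, N, 1, 2)
        feats = F.grid_sample(uv_map, grid, align_corners=False)
        return feats.squeeze(-1).transpose(1, 2)               # (B, N, C)

enc = UVMotionEncoderSketch()
pose = torch.randn(1, 72)                  # e.g. SMPL pose parameters (24 joints x 3)
query_uv = torch.rand(1, 512, 2) * 2 - 1   # UVs of 3D query points
print(enc(pose, query_uv).shape)           # torch.Size([1, 512, 16])
```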