Human Performance Modeling and Rendering via Neural Animated Mesh
- URL: http://arxiv.org/abs/2209.08468v1
- Date: Sun, 18 Sep 2022 03:58:00 GMT
- Title: Human Performance Modeling and Rendering via Neural Animated Mesh
- Authors: Fuqiang Zhao, Yuheng Jiang, Kaixin Yao, Jiakai Zhang, Liao Wang,
Haizhao Dai, Yuhui Zhong, Yingliang Zhang, Minye Wu, Lan Xu, Jingyi Yu
- Abstract summary: We bridge the traditional mesh with a new class of neural rendering.
In this paper, we present a novel approach for rendering human views from video.
We demonstrate our approach on various platforms, inserting virtual human performances into AR headsets.
- Score: 40.25449482006199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We have recently seen tremendous progress in neural techniques for
photo-real human modeling and rendering. However, it remains challenging to
integrate them into existing mesh-based pipelines for downstream
applications. In this paper, we present a comprehensive neural approach for
high-quality reconstruction, compression, and rendering of human performances
from dense multi-view videos. Our core intuition is to bridge the traditional
animated mesh workflow with a new class of highly efficient neural techniques.
We first introduce a neural surface reconstructor for high-quality surface
generation in minutes. It marries the implicit volumetric rendering of the
truncated signed distance field (TSDF) with multi-resolution hash encoding. We
further propose a hybrid neural tracker to generate animated meshes, which
combines explicit non-rigid tracking with implicit dynamic deformation in a
self-supervised framework. The former provides the coarse warping back into the
canonical space, while the latter implicit one further predicts the
displacements using the 4D hash encoding as in our reconstructor. Then, we
discuss the rendering schemes using the obtained animated meshes, ranging from
dynamic texturing to lumigraph rendering under various bandwidth settings. To
strike an intricate balance between quality and bandwidth, we propose a
hierarchical solution by first rendering 6 virtual views covering the performer
and then conducting occlusion-aware neural texture blending. We demonstrate the
efficacy of our approach in a variety of mesh-based applications and
photo-realistic free-view experiences on various platforms, e.g., inserting
virtual human performances into real environments through mobile AR or
immersively watching talent shows with VR headsets.
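Both the neural surface reconstructor and the hybrid tracker above rely on multi-resolution hash encoding. The following is a minimal NumPy sketch of that building block in the Instant-NGP style the abstract references; the function names, table sizes, base resolution, and growth factor are illustrative assumptions, not the authors' implementation (which also extends the idea to a 4D encoding for dynamic deformation):

```python
import numpy as np

# Large primes for the spatial hash, as commonly used in hash-grid encodings.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(ijk, table_size):
    """Hash integer grid corners (..., 3) into [0, table_size)."""
    h = np.zeros(ijk.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= ijk[..., d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(table_size)

def hash_encode(xyz, tables, base_res=16, growth=1.5):
    """Encode points xyz (N, 3) in [0, 1)^3 with multi-resolution hash grids.

    tables: one (table_size, feat_dim) learnable feature array per level.
    Returns (N, num_levels * feat_dim) features, trilinearly interpolated
    from the 8 hashed corners of each level's voxel.
    """
    feats = []
    for lvl, table in enumerate(tables):
        res = int(base_res * growth ** lvl)   # finer grid at each level
        x = xyz * res
        i0 = np.floor(x).astype(np.int64)     # lower voxel corner
        w = x - i0                            # trilinear weights in [0, 1)
        acc = np.zeros((xyz.shape[0], table.shape[1]))
        for corner in range(8):               # visit all 8 voxel corners
            offs = np.array([(corner >> d) & 1 for d in range(3)])
            idx = spatial_hash(i0 + offs, table.shape[0])
            wt = np.prod(np.where(offs == 1, w, 1.0 - w), axis=-1)
            acc += wt[:, None] * table[idx]
        feats.append(acc)
    return np.concatenate(feats, axis=-1)
```

In a full system the per-level tables are trainable parameters and a small MLP maps the concatenated features to TSDF values or displacements; the sketch only shows the lookup-and-interpolate step.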
Related papers
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper introduces a new method for dynamic novel-view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z) - HR Human: Modeling Human Avatars with Triangular Mesh and High-Resolution Textures from Videos [52.23323966700072]
We present a framework for acquiring human avatars that are attached with high-resolution physically-based material textures and mesh from monocular video.
Our method introduces a novel information fusion strategy to combine the information from the monocular video and synthesize virtual multi-view images.
Experiments show that our approach outperforms previous representations in fidelity, and the explicit triangular-mesh output supports deployment in common graphics pipelines.
arXiv Detail & Related papers (2024-05-18T11:49:09Z) - FaceFolds: Meshed Radiance Manifolds for Efficient Volumetric Rendering of Dynamic Faces [21.946327323788275]
3D rendering of dynamic faces is a challenging problem.
We present a novel representation that enables high-quality rendering of an actor's dynamic facial performances.
arXiv Detail & Related papers (2024-04-22T00:44:13Z) - High-Quality Mesh Blendshape Generation from Face Videos via Neural Inverse Rendering [15.009484906668737]
We introduce a novel technique that reconstructs mesh-based blendshape rigs from single or sparse multi-view videos.
Experiments demonstrate that, with the flexible input of single or sparse multi-view videos, we reconstruct personalized high-fidelity blendshapes.
arXiv Detail & Related papers (2024-01-16T14:41:31Z) - DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z) - Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur [68.24599239479326]
We develop a hybrid neural rendering model that makes image-based representation and neural 3D representation join forces to render high-quality, view-consistent images.
Our model surpasses state-of-the-art point-based methods for novel view synthesis.
arXiv Detail & Related papers (2023-04-25T08:36:33Z) - Efficient Meshy Neural Fields for Animatable Human Avatars [87.68529918184494]
Efficiently digitizing high-fidelity animatable human avatars from videos is a challenging and active research topic.
Recent rendering-based neural representations open a new way for human digitization with their friendly usability and photo-realistic reconstruction quality.
We present EMA, a method that Efficiently learns Meshy neural fields to reconstruct animatable human Avatars.
arXiv Detail & Related papers (2023-03-23T00:15:34Z) - Learning Compositional Radiance Fields of Dynamic Human Heads [13.272666180264485]
We propose a novel compositional 3D representation that combines the best of previous methods to produce both higher-resolution and faster results.
Differentiable volume rendering is employed to compute photo-realistic novel views of the human head and upper body.
Our approach achieves state-of-the-art results for synthesizing novel views of dynamic human heads and the upper body.
arXiv Detail & Related papers (2020-12-17T22:19:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.