NeuralHumanFVV: Real-Time Neural Volumetric Human Performance Rendering
using RGB Cameras
- URL: http://arxiv.org/abs/2103.07700v1
- Date: Sat, 13 Mar 2021 12:03:38 GMT
- Authors: Xin Suo and Yuheng Jiang and Pei Lin and Yingliang Zhang and Kaiwen
Guo and Minye Wu and Lan Xu
- Score: 17.18904717379273
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 4D reconstruction and rendering of human activities is critical for immersive
VR/AR experience. Recent advances still fail to recover fine geometry and
texture results with the level of detail present in the input images from
sparse multi-view RGB cameras. In this paper, we propose NeuralHumanFVV, a
real-time neural human performance capture and rendering system to generate
both high-quality geometry and photo-realistic texture of human activities in
arbitrary novel views. We propose a neural geometry generation scheme with a
hierarchical sampling strategy for real-time implicit geometry inference, as
well as a novel neural blending scheme to generate high resolution (e.g., 1k)
and photo-realistic texture results in the novel views. Furthermore, we adopt
neural normal blending to enhance geometry details and formulate our neural
geometry and texture rendering into a multi-task learning framework. Extensive
experiments demonstrate the effectiveness of our approach to achieve
high-quality geometry and photo-realistic free view-point reconstruction for
challenging human performances.
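The hierarchical sampling strategy mentioned in the abstract — coarse samples along each camera ray to bracket the surface, then dense samples only inside that bracket — can be sketched as follows. This is an illustrative sketch only: it uses a toy analytic occupancy field (a unit sphere) in place of the paper's learned implicit network, and all function names and parameters here are assumptions, not the authors' implementation.

```python
import numpy as np

def occupancy(points):
    # Toy implicit field: inside/outside test for a unit sphere at the
    # origin, standing in for the learned occupancy network.
    return (np.linalg.norm(points, axis=-1) < 1.0).astype(float)

def hierarchical_ray_sample(origin, direction, near=0.0, far=4.0,
                            n_coarse=32, n_fine=64):
    """Coarse-to-fine depth estimation along one ray: find the first
    inside/outside transition at coarse resolution, then refine it."""
    t_coarse = np.linspace(near, far, n_coarse)
    pts = origin + t_coarse[:, None] * direction
    occ = occupancy(pts)
    crossings = np.nonzero(np.diff(occ) != 0)[0]
    if crossings.size == 0:
        return None  # ray misses the surface entirely
    i = crossings[0]
    # Dense fine samples only inside the bracketing coarse interval,
    # instead of densely sampling the whole ray.
    t_fine = np.linspace(t_coarse[i], t_coarse[i + 1], n_fine)
    pts_fine = origin + t_fine[:, None] * direction
    occ_fine = occupancy(pts_fine)
    j = np.nonzero(np.diff(occ_fine) != 0)[0][0]
    return 0.5 * (t_fine[j] + t_fine[j + 1])  # estimated surface depth

# A ray from (0, 0, -3) along +z should hit the unit sphere near depth 2.
depth = hierarchical_ray_sample(np.array([0.0, 0.0, -3.0]),
                                np.array([0.0, 0.0, 1.0]))
```

Refining only inside the bracketing interval evaluates the implicit field far fewer times than uniform dense sampling at the same effective resolution, which is presumably what makes per-frame implicit geometry inference feasible in real time.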
Related papers
- AniSDF: Fused-Granularity Neural Surfaces with Anisotropic Encoding for High-Fidelity 3D Reconstruction [55.69271635843385]
We present AniSDF, a novel approach that learns fused-granularity neural surfaces with physics-based encoding for high-fidelity 3D reconstruction.
Our method substantially improves the quality of SDF-based methods in both geometry reconstruction and novel-view synthesis.
arXiv Detail & Related papers (2024-10-02T03:10:38Z)
- ANIM: Accurate Neural Implicit Model for Human Reconstruction from a single RGB-D image [40.03212588672639]
ANIM is a novel method that reconstructs arbitrary 3D human shapes from single-view RGB-D images with an unprecedented level of accuracy.
Our model learns geometric details from both pixel-aligned and voxel-aligned features to leverage depth information.
Experiments demonstrate that ANIM outperforms state-of-the-art works that use RGB, surface normals, point cloud or RGB-D data as input.
arXiv Detail & Related papers (2024-03-15T14:45:38Z)
- GM-NeRF: Learning Generalizable Model-based Neural Radiance Fields from Multi-view Images [79.39247661907397]
We introduce an effective framework, Generalizable Model-based Neural Radiance Fields, to synthesize free-viewpoint images.
Specifically, we propose a geometry-guided attention mechanism to register the appearance code from multi-view 2D images to a geometry proxy.
arXiv Detail & Related papers (2023-03-24T03:32:02Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- Generalizable Neural Performer: Learning Robust Radiance Fields for Human Novel View Synthesis [52.720314035084215]
This work uses a general deep learning framework to synthesize free-viewpoint images of arbitrary human performers.
We present a simple yet powerful framework, named Generalizable Neural Performer (GNR), that learns a generalizable and robust neural body representation.
Experiments on GeneBody-1.0 and ZJU-Mocap show better robustness of our methods than recent state-of-the-art generalizable methods.
arXiv Detail & Related papers (2022-04-25T17:14:22Z)
- HDhuman: High-quality Human Novel-view Rendering from Sparse Views [15.810495442598963]
We propose HDhuman, which uses a human reconstruction network with a pixel-aligned spatial transformer and a rendering network with geometry-guided pixel-wise feature integration.
Our approach outperforms all the prior generic or specific methods on both synthetic data and real-world data.
arXiv Detail & Related papers (2022-01-20T13:04:59Z)
- NeRS: Neural Reflectance Surfaces for Sparse-view 3D Reconstruction in the Wild [80.09093712055682]
We introduce a surface analog of implicit models called Neural Reflectance Surfaces (NeRS).
NeRS learns a neural shape representation of a closed surface that is diffeomorphic to a sphere, guaranteeing water-tight reconstructions.
We demonstrate that surface-based neural reconstructions enable learning from such data, outperforming volumetric neural rendering-based reconstructions.
arXiv Detail & Related papers (2021-10-14T17:59:58Z)
- Neural Free-Viewpoint Performance Rendering under Complex Human-object Interactions [35.41116017268475]
4D reconstruction of human-object interaction is critical for immersive VR/AR experience and human activity understanding.
Recent advances still fail to recover fine geometry and texture results from sparse RGB inputs, especially under challenging human-object interaction scenarios.
We propose a neural human performance capture and rendering system to generate both high-quality geometry and photo-realistic texture of both humans and objects.
arXiv Detail & Related papers (2021-08-01T04:53:54Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- Multiview Neural Surface Reconstruction by Disentangling Geometry and Appearance [46.488713939892136]
We introduce a neural network that simultaneously learns the unknown geometry, camera parameters, and a neural architecture that approximates the light reflected from the surface towards the camera.
We trained our network on real-world 2D images of objects with different material properties, lighting conditions, and noisy camera initializations from the DTU MVS dataset.
arXiv Detail & Related papers (2020-03-22T10:20:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.