Few-shot Neural Human Performance Rendering from Sparse RGBD Videos
- URL: http://arxiv.org/abs/2107.06505v1
- Date: Wed, 14 Jul 2021 06:28:16 GMT
- Title: Few-shot Neural Human Performance Rendering from Sparse RGBD Videos
- Authors: Anqi Pang, Xin Chen, Haimin Luo, Minye Wu, Jingyi Yu, Lan Xu
- Abstract summary: Recent neural rendering approaches for human activities achieve remarkable view synthesis results, but still rely on dense input views or dense training.
We propose a few-shot neural rendering approach (FNHR) from only sparse RGBD inputs to generate photo-realistic free-viewpoint results.
- Score: 40.20382131461408
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent neural rendering approaches for human activities achieve remarkable
view synthesis results, but still rely on dense input views or dense training
with all the capture frames, leading to deployment difficulty and inefficient
training overhead. Moreover, existing approaches become ill-posed if the input is
both spatially and temporally sparse. To fill this gap, in this paper we
propose a few-shot neural human rendering approach (FNHR) from only sparse RGBD
inputs, which exploits the temporal and spatial redundancy to generate
photo-realistic free-view output of human activities. Our FNHR is trained only
on the key-frames which expand the motion manifold in the input sequences. We
introduce a two-branch neural blending scheme that combines a neural point
renderer with the classical graphics texturing pipeline, integrating reliable
observations over the sparse key-frames. Furthermore, we adopt a patch-based
adversarial training process to exploit the local redundancy and avoid
over-fitting to the key-frames, yielding fine-detailed rendering results. Extensive
experiments demonstrate the effectiveness of our approach in generating
high-quality free-viewpoint results for challenging human performances under
the sparse setting.
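To make the two-branch blending idea concrete, here is a minimal PyTorch-style sketch. It is an illustration only, not the authors' FNHR implementation: the `BlendNet` module, its layer sizes, and its inputs are assumptions. It predicts a per-pixel weight map to fuse a neural point-render branch with a classically textured render of the same novel view.

```python
import torch
import torch.nn as nn

class BlendNet(nn.Module):
    """Hypothetical two-branch blending network (illustration only,
    not the authors' FNHR implementation)."""

    def __init__(self, in_ch=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # weight in [0, 1]
        )

    def forward(self, neural_rgb, textured_rgb):
        # Stack both branches along channels and predict a blend weight map.
        w = self.net(torch.cat([neural_rgb, textured_rgb], dim=1))
        # Per-pixel convex combination of the two renders.
        return w * neural_rgb + (1.0 - w) * textured_rgb

# Dummy usage: two (B, 3, H, W) renders of the same novel view.
blend = BlendNet()
neural_rgb = torch.rand(1, 3, 256, 256)    # neural point-render branch
textured_rgb = torch.rand(1, 3, 256, 256)  # classical texturing branch
fused = blend(neural_rgb, textured_rgb)    # fused free-viewpoint image
```

In the paper's setting, the patch-based adversarial loss would then be applied to random crops of the fused image against the key-frame observations; that discriminator is omitted here for brevity.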
Related papers
- Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis [53.702118455883095]
We propose a novel method for synthesizing novel views from sparse views with Gaussian Splatting.
Our key idea lies in exploiting the self-supervision inherent in the binocular stereo consistency between each pair of binocular images.
Our method significantly outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2024-10-24T15:10:27Z)
- EG-HumanNeRF: Efficient Generalizable Human NeRF Utilizing Human Prior for Sparse View [2.11923215233494]
Generalizable neural radiance field (NeRF) enables neural-based digital human rendering without per-scene retraining.
We propose a generalizable human NeRF framework that achieves high-quality and real-time rendering with sparse input views.
arXiv Detail & Related papers (2024-10-16T05:08:00Z)
- OccGaussian: 3D Gaussian Splatting for Occluded Human Rendering [55.50438181721271]
Previous methods that use NeRF for surface rendering to recover occluded areas require more than one day to train and several seconds to render occluded areas.
We propose OccGaussian based on 3D Gaussian Splatting, which can be trained within 6 minutes and produces high-quality human renderings up to 160 FPS with occluded input.
arXiv Detail & Related papers (2024-04-12T13:00:06Z)
- DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z)
- Neural Adaptive SCEne Tracing [24.781844909539686]
We present NAScenT, the first neural rendering method based on directly training a hybrid explicit-implicit neural representation.
NAScenT is capable of reconstructing challenging scenes, including large, sparsely populated volumes such as UAV-captured outdoor environments.
arXiv Detail & Related papers (2022-02-28T10:27:23Z)
- Geometry-Guided Progressive NeRF for Generalizable and Efficient Neural Human Rendering [139.159534903657]
We develop a generalizable and efficient Neural Radiance Field (NeRF) pipeline for high-fidelity free-viewpoint human body synthesis.
To better tackle self-occlusion, we devise a geometry-guided multi-view feature integration approach.
For achieving higher rendering efficiency, we introduce a geometry-guided progressive rendering pipeline.
arXiv Detail & Related papers (2021-12-08T14:42:10Z)
- HumanNeRF: Generalizable Neural Human Radiance Field from Sparse Inputs [35.77939325296057]
Recent neural human representations can produce high-quality multi-view rendering but require dense multi-view inputs and costly training.
We present HumanNeRF - a generalizable neural representation - for high-fidelity free-view synthesis of dynamic humans.
arXiv Detail & Related papers (2021-12-06T05:22:09Z)
- Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering [34.80975358673563]
We propose a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture.
Experiments on the ZJU-MoCap and AIST datasets show that our method significantly outperforms recent generalizable NeRF methods on unseen identities and poses.
arXiv Detail & Related papers (2021-09-15T17:32:46Z)
- NeRF in detail: Learning to sample for view synthesis [104.75126790300735]
Neural radiance fields (NeRF) methods have demonstrated impressive novel view synthesis.
In this work we address a clear limitation of the vanilla coarse-to-fine approach: it is based on a heuristic and is not trained end-to-end for the task at hand.
We introduce a differentiable module that learns to propose samples and their importance for the fine network, and we consider and compare multiple alternatives for its neural architecture (a toy sketch of such a proposal module appears after this list).
arXiv Detail & Related papers (2021-06-09T17:59:10Z)
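As a companion to the "NeRF in detail" summary above, here is a toy sketch of a sample-proposal module. The `SampleProposer` name, its layer sizes, and the inverse-CDF placement are illustrative assumptions, not the paper's architecture: it maps the coarse pass's densities along a ray to a distribution over depth bins and draws fine sample depths from it.

```python
import torch
import torch.nn as nn

class SampleProposer(nn.Module):
    """Toy sample-proposal module (assumption, not the paper's design).

    Maps the coarse pass's densities along a ray to a probability over
    depth bins, then places fine samples by inverse-CDF lookup.
    """

    def __init__(self, n_coarse=64, n_fine=128):
        super().__init__()
        self.n_fine = n_fine
        self.mlp = nn.Sequential(
            nn.Linear(n_coarse, 128), nn.ReLU(),
            nn.Linear(128, n_coarse),  # logits over the coarse depth bins
        )

    def forward(self, coarse_density, t_vals):
        # coarse_density, t_vals: (B, n_coarse) densities and sample depths.
        probs = torch.softmax(self.mlp(coarse_density), dim=-1)
        cdf = torch.cumsum(probs, dim=-1)
        u = torch.rand(coarse_density.shape[0], self.n_fine)
        # Hard inverse-CDF indexing; an end-to-end trainable version
        # would need a soft relaxation of this lookup.
        idx = torch.searchsorted(cdf, u).clamp(max=t_vals.shape[-1] - 1)
        return torch.gather(t_vals, -1, idx)  # (B, n_fine) fine depths

# Dummy usage on a batch of 4 rays with 64 coarse samples each.
proposer = SampleProposer()
density = torch.rand(4, 64)                                   # coarse densities
depths = torch.linspace(2.0, 6.0, 64).expand(4, 64).contiguous()  # coarse depths
fine_depths = proposer(density, depths)
```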
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.