SelfNeRF: Fast Training NeRF for Human from Monocular Self-rotating Video
- URL: http://arxiv.org/abs/2210.01651v1
- Date: Tue, 4 Oct 2022 14:54:40 GMT
- Title: SelfNeRF: Fast Training NeRF for Human from Monocular Self-rotating Video
- Authors: Bo Peng, Jun Hu, Jingtao Zhou, Juyong Zhang
- Abstract summary: SelfNeRF is an efficient neural radiance field based novel view synthesis method for human performance.
It can train from scratch and achieve high-fidelity results in about twenty minutes.
- Score: 29.50059002228373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose SelfNeRF, an efficient neural radiance field based
novel view synthesis method for human performance. Given monocular
self-rotating videos of human performers, SelfNeRF can train from scratch and
achieve high-fidelity results in about twenty minutes. Some recent works have
utilized neural radiance fields for dynamic human reconstruction. However,
most of these methods need multi-view inputs and require hours of training,
making them difficult to use in practice. To address this challenging
problem, we introduce a surface-relative representation based on
multi-resolution hash encoding that can greatly improve the training speed and
aggregate inter-frame information. Extensive experimental results on several
different datasets demonstrate the effectiveness and efficiency of SelfNeRF on
challenging monocular videos.
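The multi-resolution hash encoding that the abstract builds on can be sketched as follows. This is a hypothetical, minimal NumPy illustration of the general technique (Instant-NGP style: hash integer voxel corners at several resolutions, trilinearly interpolate per-level feature tables, concatenate), not SelfNeRF's surface-relative variant; all sizes and constants are illustrative.

```python
import numpy as np

N_LEVELS = 4          # number of resolution levels
TABLE_SIZE = 2 ** 14  # entries per hash table
N_FEATURES = 2        # feature channels per entry
BASE_RES = 16         # coarsest grid resolution
GROWTH = 2.0          # per-level resolution growth factor

rng = np.random.default_rng(0)
# one learnable hash table per level (small random init)
tables = rng.normal(scale=1e-4, size=(N_LEVELS, TABLE_SIZE, N_FEATURES))

# large primes commonly used for spatial hashing
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_coords(ijk):
    """Hash integer grid coordinates of shape (..., 3) into table indices."""
    h = np.zeros(ijk.shape[:-1], dtype=np.uint64)
    for d in range(3):
        h ^= ijk[..., d].astype(np.uint64) * PRIMES[d]
    return h % TABLE_SIZE

def encode(x):
    """Encode points x in [0,1)^3, shape (N, 3) -> (N, N_LEVELS * N_FEATURES)."""
    feats = []
    for level in range(N_LEVELS):
        res = int(BASE_RES * GROWTH ** level)
        pos = x * res
        lo = np.floor(pos).astype(np.int64)
        w = pos - lo                       # trilinear interpolation weights
        acc = np.zeros((x.shape[0], N_FEATURES))
        # visit the 8 corners of the enclosing voxel
        for corner in range(8):
            offset = np.array([(corner >> d) & 1 for d in range(3)])
            idx = hash_coords(lo + offset)
            # product of per-axis weights for this corner
            wc = np.prod(np.where(offset == 1, w, 1.0 - w), axis=-1)
            acc += wc[:, None] * tables[level, idx]
        feats.append(acc)
    return np.concatenate(feats, axis=-1)

pts = rng.random((5, 3))
print(encode(pts).shape)  # (5, 8): N_LEVELS * N_FEATURES features per point
```

The speed-up comes from replacing a deep MLP's early layers with these table look-ups: gradients touch only the few entries around each sample, so optimization converges in minutes rather than hours.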
Related papers
- Efficient Neural Implicit Representation for 3D Human Reconstruction [38.241511336562844]
Conventional methods for reconstructing 3D human motion often require expensive hardware and incur high processing costs.
This study presents HumanAvatar, an innovative approach that efficiently reconstructs precise human avatars from monocular video sources.
arXiv Detail & Related papers (2024-10-23T10:16:01Z)
- GHuNeRF: Generalizable Human NeRF from a Monocular Video [63.741714198481354]
GHuNeRF learns a generalizable human NeRF model from a monocular video.
We validate our approach on the widely-used ZJU-MoCap dataset.
arXiv Detail & Related papers (2023-08-31T09:19:06Z)
- OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields [63.04781030984006]
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
We propose OD-NeRF, which efficiently trains and renders dynamic NeRFs on the fly and is thus capable of streaming the dynamic scene.
Our algorithm achieves an interactive speed of 6 FPS for on-the-fly training and rendering on synthetic dynamic scenes, and a significant speed-up over the state of the art on real-world dynamic scenes.
arXiv Detail & Related papers (2023-05-24T07:36:47Z)
- Real-time volumetric rendering of dynamic humans [83.08068677139822]
We present a method for fast 3D reconstruction and real-time rendering of dynamic humans from monocular videos.
Our method can reconstruct a dynamic human in less than 3h using a single GPU, compared to recent state-of-the-art alternatives that take up to 72h.
A novel local ray marching rendering allows visualizing the neural human on a mobile VR device at 40 frames per second with minimal loss of visual quality.
arXiv Detail & Related papers (2023-03-21T14:41:25Z)
- Learning Neural Volumetric Representations of Dynamic Humans in Minutes [49.10057060558854]
We propose a novel method for learning neural volumetric videos of dynamic humans from sparse view videos in minutes with competitive visual quality.
Specifically, we define a novel part-based voxelized human representation to better distribute the representational power of the network to different human parts.
Experiments demonstrate that our model can be trained 100 times faster than prior per-scene optimization methods.
arXiv Detail & Related papers (2023-02-23T18:57:01Z)
- EfficientNeRF: Efficient Neural Radiance Fields [63.76830521051605]
We present EfficientNeRF, an efficient NeRF-based method to represent 3D scenes and synthesize novel-view images.
Our method reduces training time by over 88% and reaches a rendering speed of over 200 FPS while still achieving competitive accuracy.
arXiv Detail & Related papers (2022-06-02T05:36:44Z)
- R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis [76.07010495581535]
Rendering a single pixel requires querying the Neural Radiance Field network hundreds of times.
NeLF offers a more straightforward representation than NeRF for novel view synthesis.
We show the key to successfully learning a deep NeLF network is to have sufficient data.
arXiv Detail & Related papers (2022-03-31T17:57:05Z)
- HumanNeRF: Generalizable Neural Human Radiance Field from Sparse Inputs [35.77939325296057]
Recent neural human representations can produce high-quality multi-view rendering but require dense multi-view inputs and costly training.
We present HumanNeRF - a generalizable neural representation - for high-fidelity free-view synthesis of dynamic humans.
arXiv Detail & Related papers (2021-12-06T05:22:09Z)
- Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering [34.80975358673563]
We propose a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture.
Experiments on the ZJU-MoCap and AIST datasets show that our method significantly outperforms recent generalizable NeRF methods on unseen identities and poses.
arXiv Detail & Related papers (2021-09-15T17:32:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.