Efficient Neural Radiance Fields with Learned Depth-Guided Sampling
- URL: http://arxiv.org/abs/2112.01517v2
- Date: Mon, 6 Dec 2021 09:36:12 GMT
- Title: Efficient Neural Radiance Fields with Learned Depth-Guided Sampling
- Authors: Haotong Lin, Sida Peng, Zhen Xu, Hujun Bao, Xiaowei Zhou
- Abstract summary: We present a hybrid scene representation which combines the best of implicit radiance fields and explicit depth maps for efficient rendering.
Experiments show that the proposed approach exhibits state-of-the-art performance on the DTU, Real Forward-facing and NeRF Synthetic datasets.
We also demonstrate the capability of our method to synthesize free-viewpoint videos of dynamic human performers in real-time.
- Score: 43.79307270743013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper aims to reduce the rendering time of generalizable radiance
fields. Some recent works equip neural radiance fields with image encoders and
are able to generalize across scenes, which avoids the per-scene optimization.
However, their rendering process is generally very slow. A major factor is that
they sample many points in empty space when inferring radiance fields. In
this paper, we present a hybrid scene representation which combines the best of
implicit radiance fields and explicit depth maps for efficient rendering.
Specifically, we first build the cascade cost volume to efficiently predict the
coarse geometry of the scene. The coarse geometry allows us to sample only a few
points near the scene surface, which significantly improves the rendering speed.
This process is fully differentiable, enabling us to jointly learn the depth
prediction and radiance field networks from only RGB images. Experiments show
that the proposed approach exhibits state-of-the-art performance on the DTU,
Real Forward-facing and NeRF Synthetic datasets, while being at least 50 times
faster than previous generalizable radiance field methods. We also demonstrate
the capability of our method to synthesize free-viewpoint videos of dynamic
human performers in real-time. The code will be available at
https://zju3dv.github.io/enerf/.
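The following is a minimal sketch of the depth-guided sampling idea described in the abstract, assuming a per-ray depth estimate is already available from the cascade cost volume. The function name depth_guided_samples, the interval half-width delta, the sample count, and all shapes are illustrative placeholders, not the paper's actual implementation.

```python
import numpy as np

def depth_guided_samples(rays_o, rays_d, depth, delta=0.05, n_samples=8):
    """Illustrative sketch (not the ENeRF code): place a few samples around
    the predicted surface instead of spreading many samples over the whole
    [near, far] range of each ray.

    rays_o, rays_d : (R, 3) ray origins and unit directions
    depth          : (R,)   per-ray depth predicted by the coarse geometry
    delta          : half-width of the sampling interval around the surface
    n_samples      : points per ray (far fewer than the 64-192 used by
                     uniform sampling in standard NeRF pipelines)
    """
    # Depth interval [depth - delta, depth + delta] for every ray.
    t_near = depth[:, None] - delta
    t_far = depth[:, None] + delta
    # Evenly spaced sample distances inside the interval, shape (R, n_samples).
    steps = np.linspace(0.0, 1.0, n_samples)[None, :]
    t_vals = t_near + (t_far - t_near) * steps
    # 3D sample positions, shape (R, n_samples, 3).
    pts = rays_o[:, None, :] + t_vals[..., None] * rays_d[:, None, :]
    return pts, t_vals

# Toy usage: 4 rays looking down +z with a predicted depth of ~2 units.
rays_o = np.zeros((4, 3))
rays_d = np.tile(np.array([0.0, 0.0, 1.0]), (4, 1))
depth = np.full(4, 2.0)
pts, t_vals = depth_guided_samples(rays_o, rays_d, depth)
print(pts.shape)  # (4, 8, 3)
```

Querying the radiance field only at these few near-surface points is what yields the reported speedup over uniform sampling along the full ray.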
Related papers
- RayGauss: Volumetric Gaussian-Based Ray Casting for Photorealistic Novel View Synthesis [3.4341938551046227]
Differentiable rendering methods have made significant progress in novel view synthesis.
We provide a consistent formulation of the emitted radiance c and density sigma for differentiable ray casting of irregularly distributed Gaussians.
We achieve superior quality rendering compared to the state-of-the-art while maintaining reasonable training times and achieving inference speeds of 25 FPS on the Blender dataset.
arXiv Detail & Related papers (2024-08-06T10:59:58Z)
- Simple-RF: Regularizing Sparse Input Radiance Fields with Simpler Solutions [5.699788926464751]
Neural Radiance Fields (NeRF) show impressive performance in photo-realistic free-view rendering of scenes.
Recent improvements on NeRF, such as TensoRF and ZipNeRF, employ explicit models for faster optimization and rendering.
We show that supervising the depth estimated by a radiance field helps train it effectively with fewer views.
arXiv Detail & Related papers (2024-04-29T18:00:25Z)
- MERF: Memory-Efficient Radiance Fields for Real-time View Synthesis in
Unbounded Scenes [61.01853377661283]
We present a Memory-Efficient Radiance Field representation that achieves real-time rendering of large-scale scenes in a browser.
We introduce a novel contraction function that maps scene coordinates into a bounded volume while still allowing for efficient ray-box intersection.
arXiv Detail & Related papers (2023-02-23T18:59:07Z)
- PDRF: Progressively Deblurring Radiance Field for Fast and Robust Scene
Reconstruction from Blurry Images [75.87721926918874]
We present Progressively Deblurring Radiance Field (PDRF), a novel approach to efficiently reconstructing high-quality radiance fields from blurry images.
We show that PDRF is 15x faster than previous state-of-the-art scene reconstruction methods.
arXiv Detail & Related papers (2022-08-17T03:42:29Z)
- Cascaded and Generalizable Neural Radiance Fields for Fast View
Synthesis [35.035125537722514]
We present CG-NeRF, a cascaded and generalizable neural radiance field method for view synthesis.
We first train CG-NeRF on multiple 3D scenes of the DTU dataset.
We show that CG-NeRF outperforms state-of-the-art generalizable neural rendering methods on various synthetic and real datasets.
arXiv Detail & Related papers (2022-08-09T12:23:48Z)
- RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from
Sparse Inputs [79.00855490550367]
We show that NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, but its quality degrades with sparse inputs.
We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints.
Our model outperforms not only other methods that optimize over a single scene, but also conditional models that are extensively pre-trained on large multi-view datasets.
arXiv Detail & Related papers (2021-12-01T18:59:46Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from
Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses.
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
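The NeRF entry above notes that volume rendering is naturally differentiable. Below is a minimal NumPy sketch of the standard alpha-compositing quadrature used by NeRF-style methods; the inputs are placeholder arrays, and the snippet is a generic illustration rather than code from any of the papers listed.

```python
import numpy as np

def composite(sigma, rgb, t_vals):
    """Illustrative sketch of the standard volume-rendering quadrature
    used by NeRF-style methods (not code from the listed papers).

    sigma  : (R, S)    densities at the samples along each ray
    rgb    : (R, S, 3) colors at the samples
    t_vals : (R, S)    sample distances along each ray
    Returns the rendered color per ray, shape (R, 3).
    """
    # Distances between adjacent samples; the last interval is treated as open.
    deltas = np.diff(t_vals, axis=-1)
    deltas = np.concatenate([deltas, np.full_like(deltas[:, :1], 1e10)], axis=-1)
    # Opacity of each interval: alpha_i = 1 - exp(-sigma_i * delta_i).
    alpha = 1.0 - np.exp(-sigma * deltas)
    # Transmittance: probability that the ray reaches sample i unoccluded.
    trans = np.cumprod(1.0 - alpha + 1e-10, axis=-1)
    trans = np.concatenate([np.ones_like(trans[:, :1]), trans[:, :-1]], axis=-1)
    weights = alpha * trans  # (R, S)
    return (weights[..., None] * rgb).sum(axis=-2)

# Toy usage with random samples for 4 rays and 8 samples each.
rng = np.random.default_rng(0)
color = composite(rng.uniform(0, 5, (4, 8)),
                  rng.uniform(0, 1, (4, 8, 3)),
                  np.sort(rng.uniform(1, 3, (4, 8)), axis=-1))
print(color.shape)  # (4, 3)
```

Because every step above is composed of differentiable operations, gradients of a photometric loss on the output color can flow back to the per-sample densities and colors, which is what allows methods like ENeRF to learn depth prediction and the radiance field jointly from RGB images alone.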