Strata-NeRF : Neural Radiance Fields for Stratified Scenes
- URL: http://arxiv.org/abs/2308.10337v1
- Date: Sun, 20 Aug 2023 18:45:43 GMT
- Title: Strata-NeRF : Neural Radiance Fields for Stratified Scenes
- Authors: Ankit Dhiman, Srinath R, Harsh Rangwani, Rishubh Parihar, Lokesh R
Boregowda, Srinath Sridhar and R Venkatesh Babu
- Abstract summary: In the real world, we may capture a scene at multiple levels, resulting in a layered capture.
We propose Strata-NeRF, a single neural radiance field that implicitly captures a scene with multiple levels.
We find that Strata-NeRF effectively captures stratified scenes, minimizes artifacts, and synthesizes high-fidelity views.
- Score: 29.58305675148781
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Field (NeRF) approaches learn the underlying 3D
representation of a scene and generate photo-realistic novel views with high
fidelity. However, most proposed settings concentrate on modelling a single
object or a single level of a scene. In the real world, though, we may capture
a scene at multiple levels, resulting in a layered capture. For example,
tourists usually capture a monument's exterior structure before capturing the
inner structure. Modelling such scenes in 3D with seamless switching between
levels can drastically improve immersive experiences. However, most existing
techniques struggle in modelling such scenes. We propose Strata-NeRF, a single
neural radiance field that implicitly captures a scene with multiple levels.
Strata-NeRF achieves this by conditioning the NeRFs on Vector Quantized (VQ)
latent representations which allow sudden changes in scene structure. We
evaluate the effectiveness of our approach on a multi-layered synthetic dataset
comprising diverse scenes and then further validate its generalization on the
real-world RealEstate10K dataset. We find that Strata-NeRF effectively captures
stratified scenes, minimizes artifacts, and synthesizes high-fidelity views
compared to existing approaches.
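The abstract says the NeRF is conditioned on Vector Quantized (VQ) latent representations, which allow sudden changes in scene structure. As a rough illustration of the quantization step only, here is a minimal nearest-codebook lookup; the function name, shapes, and toy codebook are assumptions for illustration, not code from the paper:

```python
import numpy as np

def vector_quantize(z, codebook):
    """Snap each continuous latent in z to its nearest codebook entry.

    z: (N, D) continuous latents; codebook: (K, D) learned code vectors.
    Returns the quantized latents and their discrete code indices.
    """
    # Squared distance between every latent and every code vector, via broadcasting
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
    idx = d.argmin(axis=1)             # discrete index per latent
    return codebook[idx], idx          # quantized latents, indices

# Toy example: two latents snapped to a three-entry codebook
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
z = np.array([[0.1, -0.1], [1.9, 2.2]])
zq, idx = vector_quantize(z, codebook)
```

Because the output is one of a finite set of code vectors, a small change in the input latent can produce a discrete jump in the conditioning signal, which is the property the abstract associates with modelling abrupt level changes.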
Related papers
- SCARF: Scalable Continual Learning Framework for Memory-efficient Multiple Neural Radiance Fields [9.606992888590757]
We build on Neural Radiance Fields (NeRF), which use a multi-layer perceptron to model the density and radiance field of a scene as an implicit function.
We propose an uncertain surface knowledge distillation strategy to transfer the radiance field knowledge of previous scenes to the new model.
Experiments show that the proposed approach achieves state-of-the-art rendering quality of continual learning NeRF on NeRF-Synthetic, LLFF, and TanksAndTemples datasets.
arXiv Detail & Related papers (2024-09-06T03:36:12Z)
- DistillNeRF: Perceiving 3D Scenes from Single-Glance Images by Distilling Neural Fields and Foundation Model Features [65.8738034806085]
DistillNeRF is a self-supervised learning framework for understanding 3D environments in autonomous driving scenes.
Our method is a generalizable feedforward model that predicts a rich neural scene representation from sparse, single-frame multi-view camera inputs.
arXiv Detail & Related papers (2024-06-17T21:15:13Z)
- NeRF On-the-go: Exploiting Uncertainty for Distractor-free NeRFs in the Wild [55.154625718222995]
We introduce NeRF On-the-go, a simple yet effective approach that enables the robust synthesis of novel views in complex, in-the-wild scenes.
Our method demonstrates a significant improvement over state-of-the-art techniques.
arXiv Detail & Related papers (2024-05-29T02:53:40Z)
- SceneRF: Self-Supervised Monocular 3D Scene Reconstruction with Radiance Fields [19.740018132105757]
SceneRF is a self-supervised monocular scene reconstruction method using only posed image sequences for training.
At inference, a single input image suffices to hallucinate novel depth views, which are fused to obtain a 3D scene reconstruction.
arXiv Detail & Related papers (2022-12-05T18:59:57Z)
- ViewNeRF: Unsupervised Viewpoint Estimation Using Category-Level Neural Radiance Fields [35.89557494372891]
We introduce ViewNeRF, a Neural Radiance Field-based viewpoint estimation method.
Our method uses an analysis-by-synthesis approach, combining a conditional NeRF with a viewpoint predictor and a scene encoder.
Our model shows competitive results on synthetic and real datasets.
arXiv Detail & Related papers (2022-12-01T11:16:11Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly cost-free, introducing no obvious training or testing overhead.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
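The summary above describes partitioning training pixels into NeRF submodules that train in parallel. A minimal stand-in for that routing step is a nearest-centroid assignment; the centroid layout and function name here are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def assign_to_submodules(points, centroids):
    """Assign each 3D sample point to the nearest submodule centroid.

    A toy sketch of Mega-NeRF-style spatial partitioning: in the paper,
    pixels are routed to submodules based on the region their rays traverse.
    points: (N, 3) positions; centroids: (M, 3) submodule centers.
    """
    # Euclidean distance from every point to every centroid via broadcasting
    d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=-1)
    return d.argmin(axis=1)  # submodule index per point

# A 2x2 grid of submodule centroids over a flat scene (an assumption)
centroids = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                      [0.0, 10.0, 0.0], [10.0, 10.0, 0.0]])
points = np.array([[1.0, 1.0, 0.0], [9.0, 9.0, 0.0]])
labels = assign_to_submodules(points, centroids)
```

Each submodule then only sees the pixels assigned to it, which is what makes the per-submodule training embarrassingly parallel.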
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale Scene Rendering [145.95688637309746]
We introduce BungeeNeRF, a progressive neural radiance field that achieves level-of-detail rendering across drastically varied scales.
We demonstrate the superiority of BungeeNeRF in modeling diverse multi-scale scenes with drastically varying views on multiple data sources.
arXiv Detail & Related papers (2021-12-10T13:16:21Z)
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360 captures of objects within large-scale, 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
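The NeRF++ summary mentions fitting MLPs that represent view-invariant opacity and view-dependent color volumes. The standard way such volumes become a pixel is NeRF's volume-rendering quadrature; the sketch below is the generic formulation, not NeRF++-specific code:

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Numerically integrate one ray with NeRF's quadrature rule.

    sigmas: (S,) densities; colors: (S, 3) RGB; deltas: (S,) segment lengths.
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i, where
    T_i = exp(-sum_{j<i} sigma_j * delta_j) is accumulated transmittance.
    """
    alpha = 1.0 - np.exp(-sigmas * deltas)                           # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))    # T_i
    weights = trans * alpha
    return (weights[:, None] * colors).sum(axis=0)

# A single nearly opaque red sample: the ray should return almost pure red
sigmas = np.array([50.0])
colors = np.array([[1.0, 0.0, 0.0]])
deltas = np.array([0.1])
rgb = composite(sigmas, colors, deltas)
```

The parametrization issue NeRF++ addresses concerns how sample positions are mapped for unbounded 360° scenes; the compositing step itself is unchanged.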
This list is automatically generated from the titles and abstracts of the papers in this site.