BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale
Scene Rendering
- URL: http://arxiv.org/abs/2112.05504v4
- Date: Tue, 9 May 2023 05:48:39 GMT
- Authors: Yuanbo Xiangli, Linning Xu, Xingang Pan, Nanxuan Zhao, Anyi Rao,
Christian Theobalt, Bo Dai, Dahua Lin
- Abstract summary: We introduce BungeeNeRF, a progressive neural radiance field that achieves level-of-detail rendering across drastically varied scales.
We demonstrate the superiority of BungeeNeRF in modeling diverse multi-scale scenes with drastically varying views on multiple data sources.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRF) have achieved outstanding performance
in modeling 3D objects and controlled scenes, usually at a single scale. In
this work, we focus on multi-scale cases where large changes in imagery are
observed at drastically different scales. This scenario commonly arises in
real-world 3D environments such as city scenes, with views ranging from the
satellite level, which captures the overview of a city, to the ground level,
which shows the complex details of individual buildings; it can also be found
in landscapes and delicate Minecraft 3D models. The wide span of viewing
positions within these scenes yields multi-scale renderings with very
different levels of detail, which poses great challenges to a neural radiance
field and biases it towards compromised results. To address these issues, we
introduce BungeeNeRF, a progressive neural radiance field that achieves
level-of-detail rendering across drastically varied scales. Training starts by
fitting distant views with a shallow base block; as training progresses, new
blocks are appended to accommodate the emerging details in increasingly close
views. This strategy progressively activates high-frequency channels in NeRF's
positional encoding inputs and successively unfolds more complex details as
training proceeds.
We demonstrate the superiority of BungeeNeRF in modeling diverse multi-scale
scenes with drastically varying views on multiple data sources (city models,
synthetic, and drone-captured data) and its support for high-quality rendering
at different levels of detail.
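To make the progressive strategy concrete, below is a minimal NumPy sketch of
masking high-frequency positional encoding channels so that early training
stages see only low-frequency bands, with more bands unlocked as closer views
are added. The function names, the stage-to-band schedule, and the array
shapes are our own assumptions for illustration, not the paper's reference
implementation.

```python
import numpy as np

def positional_encoding(x, num_freqs):
    """Standard NeRF encoding: [sin(2^k * pi * x), cos(2^k * pi * x)] per axis."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi                     # (F,)
    angles = x[:, None, :] * freqs[None, :, None]                     # (N, F, 3)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)  # (N, F, 6)

def progressive_encoding(x, num_freqs, num_active):
    """Zero out frequency bands >= num_active; later training stages raise
    num_active, unlocking higher-frequency detail for closer views."""
    enc = positional_encoding(x, num_freqs)                           # (N, F, 6)
    mask = (np.arange(num_freqs) < num_active).astype(enc.dtype)
    return (enc * mask[None, :, None]).reshape(x.shape[0], -1)

# Example: at the first stage only 4 of 10 bands are active; subsequent
# stages (which also append residual blocks) activate more bands.
pts = np.random.rand(8, 3)
feat_stage0 = progressive_encoding(pts, num_freqs=10, num_active=4)
```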
Related papers
- Global-guided Focal Neural Radiance Field for Large-scale Scene Rendering (arXiv, 2024-03-19)
We present a global-guided focal neural radiance field (GF-NeRF) that achieves high-fidelity rendering of large-scale scenes.
Our method achieves high-fidelity, natural rendering results on various types of large-scale datasets.
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion (arXiv, 2024-01-19)
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach generates texture colors at the point level for a given geometry using a 3D diffusion model first, which is then transformed into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model demonstrates proficiency in generating photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
- MuRF: Multi-Baseline Radiance Fields (arXiv, 2023-12-07)
We present Multi-Baseline Radiance Fields (MuRF), a feed-forward approach to sparse view synthesis.
MuRF achieves state-of-the-art performance across multiple different baseline settings.
We also show promising zero-shot generalization abilities on the Mip-NeRF 360 dataset.
- Strata-NeRF: Neural Radiance Fields for Stratified Scenes (arXiv, 2023-08-20)
In the real world, we may capture a scene at multiple levels, resulting in a layered capture.
We propose Strata-NeRF, a single neural radiance field that implicitly captures a scene with multiple levels.
We find that Strata-NeRF effectively captures stratified scenes, minimizes artifacts, and synthesizes high-fidelity views.
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training (arXiv, 2022-11-17)
We conduct the first pilot study on training NeRF with high-resolution data.
We propose corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly free, introducing no obvious training or testing costs.
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations (arXiv, 2022-09-02)
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
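To make the decoupling concrete, here is a minimal sketch assuming
random-weight NumPy MLP stubs: one network predicts density from position
alone (the part CLONeR supervises with LiDAR), and a second predicts color
from position and view direction (supervised with camera images). Layer
widths, activations, and names are illustrative assumptions, not CLONeR's
actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(widths):
    # Random-weight MLP stub: list of (W, b) pairs, ReLU between layers.
    return [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
            for m, n in zip(widths[:-1], widths[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

# Decoupled heads: occupancy from 3D position only (LiDAR-supervised),
# color from position + view direction (camera-supervised).
occupancy_mlp = mlp([3, 64, 64, 1])
color_mlp = mlp([6, 64, 64, 3])

x = rng.normal(size=(1024, 3))   # sample points along rays
d = rng.normal(size=(1024, 3))   # viewing directions
sigma = np.maximum(forward(occupancy_mlp, x), 0.0)                    # density >= 0
rgb = 1.0 / (1.0 + np.exp(-forward(color_mlp, np.concatenate([x, d], 1))))
```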
- NeRF++: Analyzing and Improving Neural Radiance Fields (arXiv, 2020-10-15)
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360° captures of objects within large-scale 3D scenes.
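For context, NeRF++ handles this parametrization issue by modeling the
unbounded background in inverted-sphere coordinates, so that points
arbitrarily far away map to a bounded domain. A minimal sketch of that
mapping follows; the function name is our own.

```python
import numpy as np

def inverted_sphere_param(x):
    # Map a point outside the unit sphere to (x/r, y/r, z/r, 1/r): the
    # direction is normalized, and 1/r -> 0 as the point goes to infinity,
    # so the infinite background lies in a bounded 4D domain.
    r = np.linalg.norm(x, axis=-1, keepdims=True)
    return np.concatenate([x / r, 1.0 / r], axis=-1)

far_points = np.array([[10.0, 0.0, 0.0], [0.0, 100.0, 0.0]])
print(inverted_sphere_param(far_points))
```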
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.