Neural Sparse Voxel Fields
- URL: http://arxiv.org/abs/2007.11571v2
- Date: Wed, 6 Jan 2021 21:04:24 GMT
- Title: Neural Sparse Voxel Fields
- Authors: Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, Christian Theobalt
- Abstract summary: We introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering.
NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell.
Our method is typically over 10 times faster than the state of the art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher-quality results.
- Score: 151.20366604586403
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photo-realistic free-viewpoint rendering of real-world scenes using classical
computer graphics techniques is challenging, because it requires the difficult
step of capturing detailed appearance and geometry models. Recent studies have
demonstrated promising results by learning scene representations that
implicitly encode both geometry and appearance without 3D supervision. However,
existing approaches in practice often show blurry renderings caused by the
limited network capacity or the difficulty in finding accurate intersections of
camera rays with the scene geometry. Synthesizing high-resolution imagery from
these representations often requires time-consuming optical ray marching. In
this work, we introduce Neural Sparse Voxel Fields (NSVF), a new neural scene
representation for fast and high-quality free-viewpoint rendering. NSVF defines
a set of voxel-bounded implicit fields organized in a sparse voxel octree to
model local properties in each cell. We progressively learn the underlying
voxel structures with a differentiable ray-marching operation from only a set
of posed RGB images. With the sparse voxel octree structure, rendering novel
views can be accelerated by skipping the voxels containing no relevant scene
content. Our method is typically over 10 times faster than the state of the art
(namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving
higher-quality results. Furthermore, by utilizing an explicit sparse voxel
representation, our method can easily be applied to scene editing and scene
composition. We also demonstrate several challenging tasks, including
multi-scene learning, free-viewpoint rendering of a moving human, and
large-scale scene rendering. Code and data are available at our website:
https://github.com/facebookresearch/NSVF.
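The speed-up described above comes from confining ray marching to occupied voxels. Below is a minimal, self-contained sketch of that idea in PyTorch; it is an illustration under simplifying assumptions, not the released NSVF code, and names such as `ray_voxel_intersect`, `field_mlp`, and the uniform per-voxel sampling are hypothetical.

```python
# Illustrative sketch of sparse-voxel ray marching (not the released NSVF code).
# Assumptions: occupied voxels are axis-aligned cubes with a shared half-size, and
# field_mlp(points) -> (sigma, rgb) with shapes (N,) and (N, 3).
import torch

def ray_voxel_intersect(origin, direction, centers, half_size):
    """Slab test of one ray against all occupied voxels; returns hit mask and t-range."""
    eps = 1e-9
    safe_dir = torch.where(direction.abs() < eps,
                           torch.full_like(direction, eps), direction)
    inv_d = 1.0 / safe_dir
    t0 = (centers - half_size - origin) * inv_d          # (V, 3)
    t1 = (centers + half_size - origin) * inv_d
    t_near = torch.minimum(t0, t1).max(dim=-1).values    # entry distance per voxel
    t_far = torch.maximum(t0, t1).min(dim=-1).values     # exit distance per voxel
    hit = t_far > t_near.clamp(min=0.0)
    return hit, t_near.clamp(min=0.0), t_far

def render_ray(origin, direction, centers, half_size, field_mlp, n_samples=32):
    """March only inside intersected voxels and alpha-composite the samples."""
    hit, t_near, t_far = ray_voxel_intersect(origin, direction, centers, half_size)
    if not hit.any():                                     # empty space: skip the ray
        return torch.zeros(3)
    tn, tf = t_near[hit], t_far[hit]
    steps = torch.linspace(0.0, 1.0, n_samples)
    # Uniform samples inside each hit voxel, merged and sorted along the ray.
    t = (tn[:, None] + (tf - tn)[:, None] * steps[None, :]).reshape(-1).sort().values
    pts = origin + t[:, None] * direction                 # (N, 3) sample positions
    sigma, rgb = field_mlp(pts)
    delta = torch.diff(t, append=t[-1:] + 1e-3)           # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)               # standard volume rendering
    trans = torch.cumprod(torch.cat([alpha.new_ones(1), 1.0 - alpha[:-1]]), dim=0)
    return ((trans * alpha)[:, None] * rgb).sum(dim=0)    # composited RGB
```

In the method itself, the occupied voxels come from a sparse octree that is progressively pruned and refined during training, and rendering additionally uses early termination once transmittance saturates.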
Related papers
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686] (arXiv 2024-01-19)
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model; these are then transformed into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model generates photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
- DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741] (arXiv 2022-11-20)
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
- Neural Rays for Occlusion-aware Image-based Rendering [108.34004858785896] (arXiv 2021-07-28)
We present a new neural representation, called Neural Ray (NeuRay), for the novel view synthesis (NVS) task with multi-view images as input.
NeuRay can quickly generate high-quality novel view rendering images of unseen scenes with little finetuning.
- Baking Neural Radiance Fields for Real-Time View Synthesis [41.07052395570522] (arXiv 2021-03-26)
We present a method to train a NeRF, then precompute and store (i.e. "bake") it as a novel representation called a Sparse Neural Radiance Grid (SNeRG).
The resulting scene representation retains NeRF's ability to render fine geometric details and view-dependent appearance, is compact, and can be rendered in real time; a simplified sketch of this baking step appears after this list.
- Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video [76.19076002661157] (arXiv 2020-12-22)
Non-Rigid Neural Radiance Fields (NR-NeRF) is a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.
We show that even a single consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views.
- pixelNeRF: Neural Radiance Fields from One or Few Images [20.607712035278315] (arXiv 2020-12-03)
pixelNeRF is a learning framework that predicts a continuous neural scene representation conditioned on one or few input images.
We conduct experiments on ShapeNet benchmarks for single image novel view synthesis tasks with held-out objects.
In all cases, pixelNeRF outperforms current state-of-the-art baselines for novel view synthesis and single image 3D reconstruction.
- D-NeRF: Neural Radiance Fields for Dynamic Scenes [72.75686949608624] (arXiv 2020-11-27)
We introduce D-NeRF, a method that extends neural radiance fields to a dynamic domain.
D-NeRF reconstructs images of objects under rigid and non-rigid motions from a camera moving around the scene.
We demonstrate the effectiveness of our approach on scenes with objects under rigid, articulated and non-rigid motions.
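For contrast with NSVF's on-the-fly evaluation inside sparse voxels, the SNeRG entry above precomputes ("bakes") a trained radiance field into a grid that can be sparsified and rendered in real time. The sketch below is a heavily simplified illustration of that baking step; it ignores SNeRG's view-dependent feature vectors and deferred shading, and `trained_nerf` with its (sigma, rgb) outputs is an assumed interface.

```python
# Heavily simplified sketch of baking a trained radiance field into a grid
# (illustrative only; the actual SNeRG pipeline also stores view-dependent features).
import torch

def bake_to_grid(trained_nerf, resolution=128, bound=1.0, sigma_threshold=5.0):
    """Evaluate the trained field at voxel centers and mark occupied cells.

    trained_nerf(points) is assumed to return (sigma, rgb) with shapes (N,), (N, 3).
    Returns dense sigma/rgb grids plus an occupancy mask used for sparsification.
    """
    axis = torch.linspace(-bound, bound, resolution)
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
    pts = grid.reshape(-1, 3)
    sigmas, rgbs = [], []
    with torch.no_grad():
        for chunk in pts.split(65536):        # chunked evaluation to bound memory
            s, c = trained_nerf(chunk)
            sigmas.append(s)
            rgbs.append(c)
    sigma = torch.cat(sigmas).reshape(resolution, resolution, resolution)
    rgb = torch.cat(rgbs).reshape(resolution, resolution, resolution, 3)
    occupancy = sigma > sigma_threshold       # near-empty cells are dropped later
    return sigma, rgb, occupancy
```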
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.