StructNeRF: Neural Radiance Fields for Indoor Scenes with Structural Hints
- URL: http://arxiv.org/abs/2209.05277v1
- Date: Mon, 12 Sep 2022 14:33:27 GMT
- Title: StructNeRF: Neural Radiance Fields for Indoor Scenes with Structural Hints
- Authors: Zheng Chen, Chen Wang, Yuan-Chen Guo, Song-Hai Zhang
- Abstract summary: StructNeRF is a solution to novel view synthesis for indoor scenes with sparse inputs.
Our method improves both the geometry and the view synthesis performance of NeRF without any additional training on external data.
- Score: 23.15914545835831
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRF) achieve photo-realistic view synthesis with
densely captured input images. However, the geometry of NeRF is extremely
under-constrained given sparse views, resulting in significant degradation of
novel view synthesis quality. Inspired by self-supervised depth estimation
methods, we propose StructNeRF, a solution to novel view synthesis for indoor
scenes with sparse inputs. StructNeRF leverages the structural hints naturally
embedded in multi-view inputs to handle the unconstrained geometry issue in
NeRF. Specifically, it handles textured and non-textured regions separately: a
patch-based multi-view consistent photometric loss constrains the geometry of
textured regions, while non-textured regions are explicitly restricted to lie
on 3D-consistent planes. Through these dense
self-supervised depth constraints, our method improves both the geometry and
the view synthesis performance of NeRF without any additional training on
external data. Extensive experiments on several real-world datasets demonstrate
that StructNeRF surpasses state-of-the-art methods for indoor scenes with
sparse inputs both quantitatively and qualitatively.
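The patch-based multi-view photometric loss described above follows the pattern of self-supervised depth estimation: patches are warped between views via the rendered depth and compared photometrically. Below is a minimal sketch of such a loss, assuming pinhole intrinsics shared across views and bilinear sampling; the function name, camera conventions, and tensor shapes are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def patch_photometric_loss(ref_rgb, src_img, pix, depth, K, T_ref2src):
    """Warp reference-patch pixels into a source view using NeRF-rendered
    depth and penalize the photometric difference (L1).

    ref_rgb:   (N, 3) colors observed at the patch pixels in the reference view
    src_img:   (3, H, W) source view image
    pix:       (N, 2) patch pixel coordinates (x, y) in the reference view
    depth:     (N,) depth rendered by NeRF at those pixels
    K:         (3, 3) camera intrinsics (assumed shared across views)
    T_ref2src: (4, 4) relative pose from the reference to the source camera
    """
    N = pix.shape[0]
    ones = torch.ones(N, 1)
    # Back-project patch pixels to 3D points in the reference camera frame.
    pix_h = torch.cat([pix, ones], dim=1)                  # (N, 3) homogeneous
    pts_ref = (torch.linalg.inv(K) @ pix_h.T).T * depth[:, None]
    # Transform into the source camera frame and project.
    pts_ref_h = torch.cat([pts_ref, ones], dim=1)          # (N, 4)
    pts_src = (T_ref2src @ pts_ref_h.T).T[:, :3]
    proj = (K @ pts_src.T).T
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    # Bilinearly sample source colors at the projected locations
    # (out-of-frame samples are zero-padded by grid_sample's default).
    H, W = src_img.shape[-2:]
    grid = uv.clone()
    grid[:, 0] = 2.0 * uv[:, 0] / (W - 1) - 1.0            # normalize to [-1, 1]
    grid[:, 1] = 2.0 * uv[:, 1] / (H - 1) - 1.0
    sampled = F.grid_sample(src_img[None], grid[None, :, None, :],
                            align_corners=True)            # (1, 3, N, 1)
    src_rgb = sampled[0, :, :, 0].T                        # (N, 3)
    return (ref_rgb - src_rgb).abs().mean()
```

In the same spirit, the abstract's constraint for non-textured regions would penalize back-projected points for deviating from a fitted 3D plane, rather than relying on photometric agreement.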
Related papers
- SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance [106.0057551634008]
FreeNeRF attempts to overcome the limitations of few-shot rendering by integrating implicit geometry regularization.
This study introduces a novel feature-matching-based sparse geometry regularization module.
The module excels at pinpointing high-frequency keypoints, thereby safeguarding the integrity of fine details.
arXiv Detail & Related papers (2024-04-01T08:37:57Z)
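As a loose illustration of the feature-matching geometry regularization described in the SGCNeRF entry above, the sketch below penalizes NeRF-rendered depth when matched keypoints fail to reproject consistently between views; the matcher, the robust loss, and the function names are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def keypoint_reprojection_loss(kp_a, kp_b, depth_a, K, T_a2b):
    """Hypothetical sparse geometric consistency term: NeRF-rendered depth
    at matched keypoints in view A should reproject onto the matched
    keypoints in view B.

    kp_a, kp_b: (N, 2) matched keypoint pixel coordinates (e.g. from SIFT or
                a learned matcher; the matcher choice is an assumption)
    depth_a:    (N,) depth rendered by NeRF at kp_a
    K:          (3, 3) shared camera intrinsics
    T_a2b:      (4, 4) relative pose from camera A to camera B
    """
    N = kp_a.shape[0]
    ones = torch.ones(N, 1)
    kp_h = torch.cat([kp_a, ones], dim=1)                  # homogeneous pixels
    pts_a = (torch.linalg.inv(K) @ kp_h.T).T * depth_a[:, None]
    pts_a_h = torch.cat([pts_a, ones], dim=1)
    pts_b = (T_a2b @ pts_a_h.T).T[:, :3]
    proj = (K @ pts_b.T).T
    uv = proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)
    # Huber-style robust penalty on the reprojection error (a design choice).
    return F.huber_loss(uv, kp_b)
```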
- DaRF: Boosting Radiance Fields from Sparse Inputs with Monocular Depth Adaptation [31.655818586634258]
We propose a novel framework, dubbed DäRF, that achieves robust NeRF reconstruction with a handful of real-world images.
Our framework imposes the monocular depth estimation (MDE) network's powerful geometry prior on the NeRF representation at both seen and unseen viewpoints.
In addition, we overcome the ambiguity problems of monocular depths through patch-wise scale-shift fitting and geometry distillation.
arXiv Detail & Related papers (2023-05-30T16:46:41Z)
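The patch-wise scale-shift fitting mentioned in the DaRF entry above can be illustrated with a MiDaS-style closed-form least-squares alignment; treating it as a per-patch solve of this exact form is an assumption, not DaRF's published procedure.

```python
import torch

def fit_scale_shift(mde_depth, nerf_depth):
    """Align monocular depth to NeRF depth with one scale s and shift t,
    minimizing ||s * mde_depth + t - nerf_depth||^2 over a patch.

    mde_depth:  (N,) monocular depth predictions inside one patch
    nerf_depth: (N,) NeRF-rendered depths for the same pixels
    """
    A = torch.stack([mde_depth, torch.ones_like(mde_depth)], dim=1)  # (N, 2)
    sol = torch.linalg.lstsq(A, nerf_depth[:, None]).solution        # (2, 1)
    s, t = sol[0, 0], sol[1, 0]
    return s, t

# The aligned depth s * mde_depth + t can then supervise NeRF depth, e.g.
# loss = ((s * mde_depth + t) - nerf_depth).abs().mean()
```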
- NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds [60.1382112938132]
We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room.
NeRF renders impressive images for novel views close to the input views, but suffers for novel views that differ significantly from the training views.
arXiv Detail & Related papers (2023-04-13T06:40:08Z)
- Improving Neural Radiance Fields with Depth-aware Optimization for Novel View Synthesis [12.3338393483795]
We propose SfMNeRF, a method to better synthesize novel views as well as reconstruct the 3D-scene geometry.
SfMNeRF employs the epipolar, photometric consistency, depth smoothness, and position-of-matches constraints to explicitly reconstruct the 3D-scene structure.
Experiments on two public datasets demonstrate that SfMNeRF surpasses state-of-the-art approaches.
arXiv Detail & Related papers (2023-04-11T13:37:17Z)
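Among the constraints the SfMNeRF entry above lists, depth smoothness is the most generic; below is an edge-aware smoothness sketch of the kind common in self-supervised depth estimation. Using this particular exponential image-gradient weighting is an assumption, not SfMNeRF's exact formulation.

```python
import torch

def edge_aware_depth_smoothness(depth, image):
    """Penalize depth gradients, gated by image gradients so depth is
    allowed to change across strong color edges.

    depth: (H, W) rendered depth map
    image: (3, H, W) corresponding RGB image
    """
    # First-order depth gradients.
    d_dx = (depth[:, 1:] - depth[:, :-1]).abs()
    d_dy = (depth[1:, :] - depth[:-1, :]).abs()
    # Image gradients, averaged over channels, weight the penalty down
    # wherever the image itself has an edge.
    i_dx = (image[:, :, 1:] - image[:, :, :-1]).abs().mean(dim=0)
    i_dy = (image[:, 1:, :] - image[:, :-1, :]).abs().mean(dim=0)
    return (d_dx * torch.exp(-i_dx)).mean() + (d_dy * torch.exp(-i_dy)).mean()
```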
- SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates [16.344734292989504]
SCADE is a novel technique that improves NeRF reconstruction quality on sparse, unconstrained input views.
We propose a new method that learns to predict, for each view, a continuous, multimodal distribution of depth estimates.
Experiments show that our approach enables higher fidelity novel view synthesis from sparse views.
arXiv Detail & Related papers (2023-03-23T18:00:07Z)
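The SCADE entry above describes a continuous, multimodal distribution of depth estimates per view. As a loose illustration only, the sketch below supervises NeRF depth against the closest of several depth hypotheses per pixel; this min-over-modes form is an assumption and not SCADE's actual objective.

```python
import torch

def multimodal_depth_loss(nerf_depth, depth_hypotheses):
    """Penalize NeRF depth only against the nearest depth mode, so an
    ambiguous (multimodal) monocular prediction does not drag the depth
    toward an average of incompatible hypotheses.

    nerf_depth:       (N,) depth rendered by NeRF at sampled pixels
    depth_hypotheses: (N, K) K depth modes predicted for the same pixels
    """
    err = (depth_hypotheses - nerf_depth[:, None]).abs()  # (N, K)
    return err.min(dim=1).values.mean()                   # nearest mode only
```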
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF.
We jointly refine the appearance with geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- GARF: Geometry-Aware Generalized Neural Radiance Field [47.76524984421343]
We propose Geometry-Aware Generalized Neural Radiance Field (GARF) with a geometry-aware dynamic sampling (GADS) strategy.
Our framework infers unseen scenes at both the pixel scale and the geometry scale with only a few input images.
Experiments on indoor and outdoor datasets show that GARF reduces samples by more than 25%, while improving rendering quality and 3D geometry estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
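In the spirit of the geometry-aware dynamic sampling (GADS) named in the GARF entry above, the sketch below concentrates ray samples in a narrow band around an estimated surface depth instead of spacing them uniformly; the band width, sample count, and uniform-in-band distribution are assumptions, not GARF's exact strategy.

```python
import torch

def geometry_aware_samples(ray_o, ray_d, surface_depth,
                           num_samples=32, band=0.1):
    """Place samples around a coarse per-ray surface depth estimate.

    ray_o:         (N, 3) ray origins
    ray_d:         (N, 3) unit ray directions
    surface_depth: (N,) coarse per-ray surface depth estimates
    Returns (N, num_samples, 3) sample positions along each ray.
    """
    # Depths uniformly covering [surface - band, surface + band] per ray.
    offsets = torch.linspace(-band, band, num_samples)            # (S,)
    t = (surface_depth[:, None] + offsets[None, :]).clamp(min=0)  # (N, S)
    return ray_o[:, None, :] + t[..., None] * ray_d[:, None, :]
```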
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
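The CLONeR entry above says a differentiable occupancy grid map (OGM) guides where samples are placed along each ray. A minimal sketch of that idea, where the grid layout and the occupancy threshold are assumptions: keep only candidate samples whose voxel the OGM marks as possibly occupied, so empty space is skipped.

```python
import torch

def occupied_ray_samples(t_vals, voxel_indices, ogm):
    """Filter candidate samples along one ray by occupancy.

    t_vals:        (S,) candidate depths along one ray
    voxel_indices: (S, 3) integer voxel index of each candidate sample
    ogm:           (X, Y, Z) occupancy probabilities in [0, 1]
    """
    occ = ogm[voxel_indices[:, 0], voxel_indices[:, 1], voxel_indices[:, 2]]
    keep = occ > 0.5                     # occupancy threshold (assumption)
    return t_vals[keep]
```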
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistent, large-margin improvements over existing neural view synthesis methods on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
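The ray entropy minimization named in the InfoNeRF entry above translates almost directly into code: treat each ray's volume-rendering weights as a discrete distribution over samples and minimize its Shannon entropy, encouraging mass to concentrate at a single surface. The normalization details below are assumptions.

```python
import torch

def ray_entropy_loss(weights, eps=1e-8):
    """Shannon entropy of the per-ray sample-weight distribution.

    weights: (N_rays, N_samples) non-negative volume rendering weights
    """
    p = weights / (weights.sum(dim=-1, keepdim=True) + eps)  # normalize per ray
    entropy = -(p * (p + eps).log()).sum(dim=-1)             # (N_rays,)
    return entropy.mean()
```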
- NeRF++: Analyzing and Improving Neural Radiance Fields [117.73411181186088]
Neural Radiance Fields (NeRF) achieve impressive view synthesis results for a variety of capture settings.
NeRF fits multi-layer perceptrons representing view-invariant opacity and view-dependent color volumes to a set of training images.
We address a parametrization issue involved in applying NeRF to 360° captures of objects within large-scale, unbounded 3D scenes.
arXiv Detail & Related papers (2020-10-15T03:24:14Z)
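NeRF++'s published fix for the unbounded-scene parametrization issue is an inverted-sphere mapping: points outside the unit sphere are represented by a unit direction plus an inverse distance, which stays bounded as distance goes to infinity. A sketch following that idea, with the exact tensor conventions as assumptions:

```python
import torch

def inverted_sphere_param(x):
    """Map points in the outer volume (norm r > 1) to (x/r, y/r, z/r, 1/r).

    x: (N, 3) points with norm r > 1
    Returns (N, 4): unit direction plus inverse distance.
    """
    r = x.norm(dim=-1, keepdim=True).clamp(min=1.0)
    return torch.cat([x / r, 1.0 / r], dim=-1)
```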