StructNeRF: Neural Radiance Fields for Indoor Scenes with Structural Hints
- URL: http://arxiv.org/abs/2209.05277v1
- Date: Mon, 12 Sep 2022 14:33:27 GMT
- Title: StructNeRF: Neural Radiance Fields for Indoor Scenes with Structural Hints
- Authors: Zheng Chen, Chen Wang, Yuan-Chen Guo, Song-Hai Zhang
- Abstract summary: StructNeRF is a solution to novel view synthesis for indoor scenes with sparse inputs.
Our method improves both the geometry and the view synthesis performance of NeRF without any additional training on external data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRF) achieve photo-realistic view synthesis with
densely captured input images. However, the geometry of NeRF is extremely
under-constrained given sparse views, resulting in significant degradation of
novel view synthesis quality. Inspired by self-supervised depth estimation
methods, we propose StructNeRF, a solution to novel view synthesis for indoor
scenes with sparse inputs. StructNeRF leverages the structural hints naturally
embedded in multi-view inputs to handle the unconstrained geometry issue in
NeRF. Specifically, it handles textured and non-textured regions separately: a
patch-based multi-view consistent photometric loss is proposed to constrain the
geometry of textured regions, while non-textured regions are explicitly
restricted to lie on 3D-consistent planes. Through the dense
self-supervised depth constraints, our method improves both the geometry and
the view synthesis performance of NeRF without any additional training on
external data. Extensive experiments on several real-world datasets demonstrate
that StructNeRF surpasses state-of-the-art methods for indoor scenes with
sparse inputs both quantitatively and qualitatively.
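The patch-based multi-view photometric consistency constraint described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; all names are hypothetical, and it assumes a shared pinhole intrinsic matrix, a known reference-to-source camera transform, and depths predicted for a small reference patch:

```python
import numpy as np

def unproject(px, depth, K):
    """Back-project pixels (N,2) with depths (N,) to camera-space 3D points."""
    homo = np.hstack([px, np.ones((px.shape[0], 1))])   # homogeneous pixels
    return (np.linalg.inv(K) @ homo.T).T * depth[:, None]

def project(pts, K):
    """Project camera-space 3D points (N,3) back to pixel coordinates (N,2)."""
    proj = (K @ pts.T).T
    return proj[:, :2] / proj[:, 2:3]

def patch_photometric_loss(patch_ref, sample_src, px_ref, depth_ref, K, T_ref2src):
    """Warp a reference patch into the source view via predicted depth and
    penalize the color difference (L1). `sample_src` is a stand-in for
    bilinear sampling of the source image at the warped pixel locations."""
    pts_ref = unproject(px_ref, depth_ref, K)            # 3D points, ref camera
    pts_h = np.hstack([pts_ref, np.ones((pts_ref.shape[0], 1))])
    pts_src = (T_ref2src @ pts_h.T).T[:, :3]             # 3D points, src camera
    px_src = project(pts_src, K)                         # warped pixel locations
    return np.abs(patch_ref - sample_src(px_src)).mean()
```

If the predicted depth is correct, the warped patch lands on the same scene surface and the loss is small; wrong depth produces a misaligned warp and a large loss, which is what makes the constraint self-supervised.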
Related papers
- NeRF-Texture: Synthesizing Neural Radiance Field Textures [77.24205024987414]
We propose a novel texture synthesis method with Neural Radiance Fields (NeRF) to capture and synthesize textures from given multi-view images.
In the proposed NeRF texture representation, a scene with fine geometric details is disentangled into the meso-structure textures and the underlying base shape.
We can synthesize NeRF-based textures through patch matching of latent features.
arXiv Detail & Related papers (2024-12-13T09:41:48Z)
- SGCNeRF: Few-Shot Neural Rendering via Sparse Geometric Consistency Guidance [106.0057551634008]
FreeNeRF attempts to overcome the limitations of few-shot rendering by integrating implicit geometry regularization.
This study introduces a novel feature-matching-based sparse geometry regularization module.
The module excels at pinpointing high-frequency keypoints, thereby safeguarding the integrity of fine details.
arXiv Detail & Related papers (2024-04-01T08:37:57Z)
- NeRFVS: Neural Radiance Fields for Free View Synthesis via Geometry Scaffolds [60.1382112938132]
We present NeRFVS, a novel neural radiance fields (NeRF) based method to enable free navigation in a room.
NeRF achieves impressive performance when rendering novel views similar to the input views, but suffers for novel views that differ significantly from the training views.
arXiv Detail & Related papers (2023-04-13T06:40:08Z)
- Improving Neural Radiance Fields with Depth-aware Optimization for Novel View Synthesis [12.3338393483795]
We propose SfMNeRF, a method to better synthesize novel views as well as reconstruct the 3D-scene geometry.
SfMNeRF employs the epipolar, photometric consistency, depth smoothness, and position-of-matches constraints to explicitly reconstruct the 3D-scene structure.
Experiments on two public datasets demonstrate that SfMNeRF surpasses state-of-the-art approaches.
arXiv Detail & Related papers (2023-04-11T13:37:17Z)
- SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates [16.344734292989504]
SCADE is a novel technique that improves NeRF reconstruction quality on sparse, unconstrained input views.
We propose a new method that learns to predict, for each view, a continuous, multimodal distribution of depth estimates.
Experiments show that our approach enables higher fidelity novel view synthesis from sparse views.
arXiv Detail & Related papers (2023-03-23T18:00:07Z)
- DARF: Depth-Aware Generalizable Neural Radiance Field [51.29437249009986]
We propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy.
Our framework infers the unseen scenes on both pixel level and geometry level with only a few input images.
Compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50%, while improving rendering quality and depth estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
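The ray entropy minimization summarized above can be sketched in a few lines. This is a minimal illustration under assumed conventions, not InfoNeRF's actual code: `weights` stands for the per-sample volume-rendering weights along one ray, and the regularizer is the Shannon entropy of their normalized distribution:

```python
import numpy as np

def ray_entropy(weights, eps=1e-10):
    """Shannon entropy of the normalized per-ray density distribution.
    Minimizing this term pushes the rendering weights to concentrate on a
    few samples along each ray, discouraging the diffuse 'floating' density
    that tends to appear when training views are sparse."""
    p = weights / (weights.sum(axis=-1, keepdims=True) + eps)  # normalize to a distribution
    return -(p * np.log(p + eps)).sum(axis=-1)                 # Shannon entropy
```

A ray whose weight mass sits at a single surface sample has near-zero entropy, while a ray with uniformly spread weights has maximal entropy, so adding this term to the rendering loss penalizes inconsistent, smeared-out geometry.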
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.