Improving Geometric Consistency for 360-Degree Neural Radiance Fields in Indoor Scenarios
- URL: http://arxiv.org/abs/2503.13710v1
- Date: Mon, 17 Mar 2025 20:30:48 GMT
- Title: Improving Geometric Consistency for 360-Degree Neural Radiance Fields in Indoor Scenarios
- Authors: Iryna Repinetska, Anna Hilsmann, Peter Eisert
- Abstract summary: Photo-realistic rendering and novel view synthesis play a crucial role in human-computer interaction tasks. NeRFs often struggle in large, low-textured areas, producing cloudy artifacts known as "floaters". We introduce a novel depth loss function to enhance rendering quality in challenging, low-feature regions.
- Score: 3.5229503563299915
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Photo-realistic rendering and novel view synthesis play a crucial role in human-computer interaction tasks, from gaming to path planning. Neural Radiance Fields (NeRFs) model scenes as continuous volumetric functions and achieve remarkable rendering quality. However, NeRFs often struggle in large, low-textured areas, producing cloudy artifacts known as "floaters" that reduce scene realism, especially in indoor environments with featureless architectural surfaces like walls, ceilings, and floors. To overcome this limitation, prior work has integrated geometric constraints into the NeRF pipeline, typically leveraging depth information derived from Structure from Motion or Multi-View Stereo. Yet, conventional RGB-feature correspondence methods struggle to estimate depth accurately in textureless regions, leading to unreliable constraints. This challenge is compounded in 360-degree "inside-out" views, where sparse visual overlap between adjacent images hinders depth estimation. To address these issues, we propose an efficient and robust method for computing dense depth priors, specifically tailored for large low-textured architectural surfaces in indoor environments. We introduce a novel depth loss function to enhance rendering quality in these challenging, low-feature regions, while complementary depth-patch regularization further refines depth consistency across other areas. Experiments with Instant-NGP on two synthetic 360-degree indoor scenes demonstrate improved visual fidelity with our method compared to standard photometric loss and Mean Squared Error depth supervision.
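The abstract names the main ingredients (dense depth priors on low-textured surfaces, a depth loss over those regions, and depth-patch regularization elsewhere) without giving their exact form. The PyTorch sketch below shows one plausible shape for such supervision; the L1 depth term, the variance-based patch regularizer, and the tensor layout are illustrative assumptions, not the authors' formulation.

```python
import torch

def depth_supervision_terms(pred_depth, prior_depth, prior_mask, patch_size=8):
    """Hypothetical sketch of the two depth terms described in the abstract.

    pred_depth:  (H, W) depth rendered by the NeRF (e.g. Instant-NGP)
    prior_depth: (H, W) dense depth prior for low-textured surfaces
    prior_mask:  (H, W) bool, True where the prior is considered reliable
    """
    # Depth loss on reliable low-textured regions; an L1 penalty is an
    # assumption, since the abstract does not spell out the loss.
    depth_term = (pred_depth - prior_depth).abs()[prior_mask].mean()

    # Depth-patch regularization elsewhere: penalize depth variance inside
    # small patches to encourage locally consistent geometry (again an
    # illustrative choice, not the authors' exact regularizer).
    patches = (pred_depth
               .unfold(0, patch_size, patch_size)
               .unfold(1, patch_size, patch_size))
    patch_term = patches.var(dim=(-1, -2)).mean()
    return depth_term, patch_term
```

In training, these two terms would be weighted and added to the usual photometric loss.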
Related papers
- Guardians of the Hair: Rescuing Soft Boundaries in Depth, Stereo, and Novel Views [20.270591069701677]
This paper introduces HairGuard, a framework designed to recover fine-grained soft boundary details in 3D vision tasks. Experiments demonstrate that HairGuard achieves state-of-the-art performance across monocular depth estimation, stereo image/video conversion, and novel view synthesis.
arXiv Detail & Related papers (2026-01-06T19:02:34Z)
- Learning Fine-Grained Geometry for Sparse-View Splatting via Cascade Depth Loss [15.425094458647933]
We introduce Hierarchical Depth-Guided Splatting (HDGS), a depth supervision framework that progressively refines geometry from coarse to fine levels. By enforcing multi-scale depth consistency, our method substantially improves structural fidelity in sparse-view scenarios.
arXiv Detail & Related papers (2025-05-28T12:16:42Z)
- Deep Neural Networks for Accurate Depth Estimation with Latent Space Features [0.0]
This study introduces a novel depth estimation framework that leverages latent space features within a deep convolutional neural network. The proposed model features a dual encoder-decoder architecture, enabling both color-to-depth and depth-to-depth transformations. The framework is thoroughly tested on the NYU Depth V2 dataset, where it sets a new benchmark.
arXiv Detail & Related papers (2025-02-17T13:11:35Z)
- DN-Splatter: Depth and Normal Priors for Gaussian Splatting and Meshing [19.437747560051566]
We propose an adaptive depth loss based on the gradient of color images, improving depth estimation and novel view synthesis results over various baselines.
Our simple yet effective regularization technique enables direct mesh extraction from the Gaussian representation, yielding more physically accurate reconstructions of indoor scenes.
arXiv Detail & Related papers (2024-03-26T16:00:31Z)
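DN-Splatter's summary above mentions an adaptive depth loss driven by the gradient of the color images. A common way to realize that idea is to down-weight depth supervision near image edges, where sensor depth tends to be noisy; the exp(-|∇I|) weighting below is a minimal sketch of that pattern and an assumption, not necessarily the paper's exact choice.

```python
import torch
import torch.nn.functional as F

def gradient_weighted_depth_loss(pred_depth, sensor_depth, rgb):
    """Edge-aware depth loss sketch; all inputs are (B, C, H, W) tensors."""
    gray = rgb.mean(dim=1, keepdim=True)
    # Horizontal/vertical image gradients, padded back to full resolution.
    gx = F.pad(gray[..., :, 1:] - gray[..., :, :-1], (0, 1, 0, 0))
    gy = F.pad(gray[..., 1:, :] - gray[..., :-1, :], (0, 0, 0, 1))
    # Strong supervision on smooth regions, weaker near color edges.
    weight = torch.exp(-(gx.abs() + gy.abs()))
    return (weight * (pred_depth - sensor_depth).abs()).mean()
```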
- PSDF: Prior-Driven Neural Implicit Surface Learning for Multi-view Reconstruction [31.768161784030923]
PSDF is a framework that combines external geometric priors from a pretrained MVS network with internal geometric priors inherent in the NISR model.
Experiments on the Tanks and Temples dataset show that PSDF achieves state-of-the-art performance on complex uncontrolled scenes.
arXiv Detail & Related papers (2024-01-23T13:30:43Z)
- VoxNeRF: Bridging Voxel Representation and Neural Radiance Fields for Enhanced Indoor View Synthesis [73.50359502037232]
VoxNeRF is a novel approach to enhance the quality and efficiency of neural indoor reconstruction and novel view synthesis. We propose an efficient voxel-guided sampling technique that selectively allocates computational resources to the most relevant segments of rays. Our approach is validated with extensive experiments on ScanNet and ScanNet++.
arXiv Detail & Related papers (2023-11-09T11:32:49Z)
- ConsistentNeRF: Enhancing Neural Radiance Fields with 3D Consistency for Sparse View Synthesis [99.06490355990354]
We propose ConsistentNeRF, a method that leverages depth information to regularize both multi-view and single-view 3D consistency among pixels.
Our approach can considerably enhance model performance in sparse view conditions, achieving improvements of up to 94% in PSNR and 31% in LPIPS, along with gains in SSIM.
arXiv Detail & Related papers (2023-05-18T15:18:01Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- Delicate Textured Mesh Recovery from NeRF via Adaptive Surface Refinement [78.48648360358193]
We present a novel framework that generates textured surface meshes from images.
Our approach begins by efficiently initializing the geometry and view-dependent appearance with a NeRF. We then jointly refine the appearance with the geometry and bake it into texture images for real-time rendering.
arXiv Detail & Related papers (2023-03-03T17:14:44Z)
- DARF: Depth-Aware Generalizable Neural Radiance Field [51.29437249009986]
We propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy. Our framework infers unseen scenes at both the pixel level and the geometry level with only a few input images. Compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50% while improving rendering quality and depth estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
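DARF's Depth-Aware Dynamic Sampling concentrates ray samples using depth, which is how it can halve the sample count. As a rough illustration of depth-guided sampling in general (not DARF's actual DADS algorithm), one can draw samples from a narrow band around a per-ray depth estimate instead of spreading them uniformly over the full near-far range:

```python
import torch

def depth_guided_samples(ray_depth, n_samples=32, sigma=0.1, near=0.1, far=10.0):
    """ray_depth: (R,) coarse per-ray depth estimate; returns (R, n_samples)."""
    # Gaussian band around the estimated depth; sigma, near, and far are
    # illustrative values, not taken from the paper.
    t = ray_depth[:, None] + sigma * torch.randn(
        ray_depth.shape[0], n_samples, device=ray_depth.device)
    t = t.clamp(near, far)
    # Sort so downstream volume rendering sees ordered sample distances.
    return t.sort(dim=-1).values
```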
- StructNeRF: Neural Radiance Fields for Indoor Scenes with Structural Hints [23.15914545835831]
StructNeRF is a solution to novel view synthesis for indoor scenes with sparse inputs.
Our method improves both the geometry and the view synthesis performance of NeRF without any additional training on external data.
arXiv Detail & Related papers (2022-09-12T14:33:27Z)
- SelfDeco: Self-Supervised Monocular Depth Completion in Challenging Indoor Environments [50.761917113239996]
We present a novel algorithm for self-supervised monocular depth completion.
Our approach is based on training a neural network that requires only sparse depth measurements and corresponding monocular video sequences without dense depth labels.
Our self-supervised algorithm is designed for challenging indoor environments with textureless regions, glossy and transparent surfaces, non-Lambertian surfaces, moving people, long and diverse depth ranges, and scenes captured with complex ego-motions.
arXiv Detail & Related papers (2020-11-10T08:55:07Z)
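SelfDeco trains from sparse depth measurements plus monocular video alone. The two standard ingredients of such self-supervised depth completion are sketched below with hypothetical tensor shapes; the view-warping step that produces frame_s_warped (using predicted depth and relative pose) is omitted, and the authors' exact losses may differ.

```python
import torch

def self_supervised_completion_loss(pred_depth, sparse_depth,
                                    frame_t, frame_s_warped):
    """pred_depth, sparse_depth: (B, 1, H, W); frames: (B, 3, H, W).

    sparse_depth is zero where no measurement exists; frame_s_warped is
    the source frame warped into the target view with pred_depth.
    """
    valid = sparse_depth > 0
    # Supervise only the pixels that carry a sparse measurement.
    depth_loss = (pred_depth - sparse_depth).abs()[valid].mean()
    # Photometric consistency: a correct dense depth makes the warped
    # source frame match the target frame.
    photo_loss = (frame_t - frame_s_warped).abs().mean()
    return depth_loss + photo_loss
```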