360Roam: Real-Time Indoor Roaming Using Geometry-Aware 360$^\circ$
Radiance Fields
- URL: http://arxiv.org/abs/2208.02705v2
- Date: Tue, 28 Nov 2023 16:45:07 GMT
- Title: 360Roam: Real-Time Indoor Roaming Using Geometry-Aware 360$^\circ$
Radiance Fields
- Authors: Huajian Huang, Yingshu Chen, Tianjia Zhang and Sai-Kit Yeung
- Abstract summary: Virtual tours among sparse 360$^\circ$ images are widely used but hinder smooth and immersive roaming experiences.
We propose a novel approach using geometry-aware radiance fields with adaptively assigned local radiance fields.
Our system effectively utilizes positional encoding and compact neural networks to enhance rendering quality and speed.
- Score: 18.295768486318313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual tours among sparse 360$^\circ$ images are widely used but hinder
smooth and immersive roaming experiences. The emergence of Neural Radiance
Field (NeRF) has showcased significant progress in synthesizing novel views,
unlocking the potential for immersive scene exploration. Nevertheless, previous
NeRF works primarily focused on object-centric scenarios, resulting in
noticeable performance degradation when applied to outward-facing and
large-scale scenes due to limitations in scene parameterization. To achieve
seamless and real-time indoor roaming, we propose a novel approach using
geometry-aware radiance fields with adaptively assigned local radiance fields.
Initially, we employ multiple 360$^\circ$ images of an indoor scene to
progressively reconstruct explicit geometry in the form of a probabilistic
occupancy map, derived from a global omnidirectional radiance field.
Subsequently, we assign local radiance fields through an adaptive
divide-and-conquer strategy based on the recovered geometry. By incorporating
geometry-aware sampling and decomposition of the global radiance field, our
system effectively utilizes positional encoding and compact neural networks to
enhance rendering quality and speed. Additionally, the extracted floorplan of
the scene aids in providing visual guidance, contributing to a realistic
roaming experience. To demonstrate the effectiveness of our system, we curated
a diverse dataset of 360$^\circ$ images encompassing various real-life scenes,
on which we conducted extensive experiments. Quantitative and qualitative
comparisons against baseline approaches illustrated the superior performance of
our system in large-scale indoor scene roaming.
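The abstract describes a concrete pipeline: distill a probabilistic occupancy map from a global omnidirectional radiance field, then use that geometry both to skip empty space during sampling and to route rays to compact local radiance fields. The sketch below illustrates that sampling-and-routing idea in PyTorch. It is a minimal illustration under assumed names and shapes (`occupancy`, `local_fields`, `assign_field`, the grid resolution, and the occupancy threshold are all hypothetical), not the authors' implementation.

```python
# Minimal sketch of geometry-aware sampling with local radiance fields.
# Hypothetical shapes and names; the paper's actual architecture may differ.
import torch

GRID_RES = 128          # assumed occupancy-grid resolution
OCC_THRESHOLD = 0.5     # assumed probability threshold for "near-surface"
N_LOCAL = 8             # assumed number of local radiance fields

# Probabilistic occupancy map over the scene's bounding box, distilled
# from the global omnidirectional radiance field (placeholder values here).
occupancy = torch.rand(GRID_RES, GRID_RES, GRID_RES)

# One compact MLP per spatial region (placeholder modules).
local_fields = torch.nn.ModuleList(
    torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
    for _ in range(N_LOCAL)
)

def to_grid_idx(pts):
    """Map points in [0, 1)^3 scene coordinates to occupancy-grid indices."""
    idx = (pts.clamp(0.0, 1.0 - 1e-6) * GRID_RES).long()
    return idx[..., 0], idx[..., 1], idx[..., 2]

def assign_field(pts):
    """Toy divide-and-conquer assignment: split the scene into N_LOCAL slabs
    along x. The paper assigns regions adaptively from recovered geometry."""
    return (pts[..., 0].clamp(0.0, 1.0 - 1e-6) * N_LOCAL).long()

def render_samples(origins, dirs, n_samples=64):
    """Sample along rays, skip empty space via the occupancy map, and
    evaluate only the responsible local field for each surviving sample."""
    t = torch.linspace(0.05, 1.0, n_samples)                         # ray depths
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]  # (R, S, 3)
    ix, iy, iz = to_grid_idx(pts)
    keep = occupancy[ix, iy, iz] > OCC_THRESHOLD                     # (R, S) mask
    out = torch.zeros(*pts.shape[:-1], 4)                            # rgb + sigma
    field_id = assign_field(pts)
    for k, mlp in enumerate(local_fields):
        sel = keep & (field_id == k)
        if sel.any():
            out[sel] = mlp(pts[sel])
    return out

rgb_sigma = render_samples(torch.rand(4, 3) * 0.1 + 0.45, torch.randn(4, 3))
```

A real system would build `occupancy` progressively from the trained global field and choose region boundaries adaptively, per the abstract's divide-and-conquer strategy.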
Related papers
- TimeNeRF: Building Generalizable Neural Radiance Fields across Time from Few-Shot Input Views [6.319765967588987]
TimeNeRF is a generalizable neural rendering approach for synthesizing novel views at arbitrary viewpoints. We show that TimeNeRF can render novel views in a few-shot setting without per-scene optimization. It excels in creating realistic novel views that transition smoothly across different times.
arXiv Detail & Related papers (2025-07-18T14:07:02Z)
- Improving Geometric Consistency for 360-Degree Neural Radiance Fields in Indoor Scenarios [3.5229503563299915]
Photo-realistic rendering and novel view synthesis play a crucial role in human-computer interaction tasks.
NeRFs often struggle in large, low-textured areas, producing cloudy artifacts known as "floaters".
We introduce a novel depth loss function to enhance rendering quality in challenging, low-feature regions.
arXiv Detail & Related papers (2025-03-17T20:30:48Z)
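The entry above credits a novel depth loss for low-feature regions without defining it. A common pattern in depth-regularized NeRFs is to penalize the gap between volume-rendered depth and a depth prior on pixels where the prior is trusted; the fragment below shows that generic pattern only, not the paper's actual loss. All tensor names are hypothetical.

```python
# Generic depth-supervision term, as used by several depth-regularized NeRFs.
# This is an illustrative stand-in; the paper's actual loss may differ.
import torch

def depth_loss(rendered_depth, prior_depth, valid_mask):
    """L1 penalty between volume-rendered depth and a depth prior,
    restricted to pixels where the prior is considered reliable."""
    diff = (rendered_depth - prior_depth).abs()
    return (diff * valid_mask).sum() / valid_mask.sum().clamp(min=1)

# Toy usage with random tensors standing in for a rendered batch.
d_render = torch.rand(1024)
d_prior = torch.rand(1024)
mask = (torch.rand(1024) > 0.3).float()
loss = depth_loss(d_render, d_prior, mask)
```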
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Field (NeRF) methods struggle in the presence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
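MS-NeRF's summary hinges on one architectural idea: predict several parallel sub-space outputs and composite them, so reflections can be explained in a sub-space of their own. Below is a minimal sketch of such a multi-head decomposition with a learned softmax blend; the head count, feature width, and per-sample blending are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of a multi-space radiance head: K parallel sub-space colors
# blended per sample. Layer sizes and blending are illustrative assumptions.
import torch
import torch.nn as nn

class MultiSpaceHead(nn.Module):
    def __init__(self, feat_dim=64, k_spaces=4):
        super().__init__()
        self.rgb_heads = nn.ModuleList(nn.Linear(feat_dim, 3) for _ in range(k_spaces))
        self.weight_head = nn.Linear(feat_dim, k_spaces)  # per-sample blend weights

    def forward(self, feats):
        # Each sub-space proposes a color; a learned softmax blends them,
        # letting e.g. a mirror reflection occupy its own sub-space.
        rgbs = torch.stack([torch.sigmoid(h(feats)) for h in self.rgb_heads], dim=-2)
        w = torch.softmax(self.weight_head(feats), dim=-1).unsqueeze(-1)
        return (w * rgbs).sum(dim=-2)  # (..., 3) composited color

head = MultiSpaceHead()
color = head(torch.randn(4096, 64))  # toy per-sample features
```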
- Local Implicit Ray Function for Generalizable Radiance Field Representation [20.67358742158244]
We propose LIRF (Local Implicit Ray Function), a generalizable neural rendering approach for novel view rendering.
Given 3D positions within conical frustums, LIRF takes 3D coordinates and the features of conical frustums as inputs and predicts a local volumetric radiance field.
Since the coordinates are continuous, LIRF renders high-quality novel views at a continuously-valued scale via volume rendering.
arXiv Detail & Related papers (2023-04-25T11:52:33Z)
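The volume rendering step LIRF relies on is the standard NeRF quadrature: per-sample densities are converted to alpha values and colors are composited front to back. The following self-contained function implements that textbook formulation; it is generic, not LIRF-specific.

```python
# Standard NeRF-style volume rendering quadrature:
# w_i = T_i * (1 - exp(-sigma_i * delta_i)), T_i = exp(-sum_{j<i} sigma_j * delta_j)
import torch

def volume_render(sigmas, rgbs, t_vals):
    """Alpha-composite per-sample colors into one color per ray.
    sigmas: (R, S) densities, rgbs: (R, S, 3), t_vals: (S,) sample depths."""
    deltas = t_vals[1:] - t_vals[:-1]
    deltas = torch.cat([deltas, deltas[-1:]])            # pad last interval
    alpha = 1.0 - torch.exp(-sigmas * deltas)            # (R, S)
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)   # accumulated transmittance
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=-1)
    weights = alpha * trans                              # (R, S)
    return (weights.unsqueeze(-1) * rgbs).sum(dim=-2)    # (R, 3)

color = volume_render(torch.rand(2, 64), torch.rand(2, 64, 3), torch.linspace(0.1, 1.0, 64))
```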
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- Neural Fields meet Explicit Geometric Representation for Inverse Rendering of Urban Scenes [62.769186261245416]
We present a novel inverse rendering framework for large urban scenes capable of jointly reconstructing the scene geometry, spatially-varying materials, and HDR lighting from a set of posed RGB images with optional depth.
Specifically, we use a neural field to account for the primary rays, and use an explicit mesh (reconstructed from the underlying neural field) for modeling secondary rays that produce higher-order lighting effects such as cast shadows.
arXiv Detail & Related papers (2023-04-06T17:51:54Z)
- I$^2$-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs [31.968515496970312]
I$^2$-SDF is a new method for intrinsic indoor scene reconstruction and editing using differentiable Monte Carlo raytracing on neural signed distance fields (SDFs).
We introduce a novel bubble loss for fine-grained small objects and an error-guided adaptive sampling scheme to improve reconstruction quality on large-scale indoor scenes.
arXiv Detail & Related papers (2023-03-14T05:29:34Z)
- DARF: Depth-Aware Generalizable Neural Radiance Field [51.29437249009986]
We propose the Depth-Aware Generalizable Neural Radiance Field (DARF) with a Depth-Aware Dynamic Sampling (DADS) strategy.
Our framework infers unseen scenes at both the pixel and geometry levels from only a few input images.
Compared with state-of-the-art generalizable NeRF methods, DARF reduces samples by 50%, while improving rendering quality and depth estimation.
arXiv Detail & Related papers (2022-12-05T14:00:59Z)
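DARF's reported 50% sample reduction comes from depth-aware dynamic sampling, i.e. spending samples near likely surfaces rather than uniformly along each ray. The sketch below shows one simple way to realize that idea (a sparse uniform sweep plus a Gaussian band around a predicted depth); the scheme and every name in it are assumptions for illustration, not DARF's published sampling rule.

```python
# Illustrative depth-guided sampling: place most samples near a predicted
# surface depth instead of uniformly along the ray. Not DARF's exact scheme.
import torch

def depth_guided_samples(pred_depth, near=0.1, far=5.0, n_coarse=8, n_fine=24, band=0.05):
    """pred_depth: (R,) per-ray depth estimate. Returns (R, n_coarse + n_fine)
    sorted sample depths: a sparse uniform sweep plus a dense band at the surface."""
    rays = pred_depth.shape[0]
    coarse = torch.linspace(near, far, n_coarse).expand(rays, -1)
    fine = pred_depth[:, None] + band * torch.randn(rays, n_fine)
    t = torch.cat([coarse, fine.clamp(near, far)], dim=-1)
    return t.sort(dim=-1).values

t_vals = depth_guided_samples(torch.full((4,), 2.0))  # 32 samples/ray vs. e.g. 64 uniform
```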
- Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings, with three extensions for this outdoor setting; each provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
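A reflectance field, as summarized above, maps a 3D point to volume density plus shading attributes such as a normal and reflectance, which a differentiable ray marcher then lights explicitly. The toy query-and-shade step below uses a single MLP and Lambertian shading as stand-ins; the real method's network and reflectance model are richer, so treat this purely as an illustration.

```python
# Sketch of querying a reflectance field and shading one sample under a
# point light: density, normal, and diffuse albedo at a 3D point.
# The MLP and the Lambertian shading are illustrative assumptions.
import torch
import torch.nn as nn

field = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 7))  # sigma, normal, albedo

def shade(points, light_pos):
    """Evaluate the toy reflectance field and apply Lambertian shading."""
    out = field(points)
    sigma = torch.relu(out[..., 0])                       # volume density
    normal = torch.nn.functional.normalize(out[..., 1:4], dim=-1)
    albedo = torch.sigmoid(out[..., 4:7])
    light_dir = torch.nn.functional.normalize(light_pos - points, dim=-1)
    cos = (normal * light_dir).sum(-1, keepdim=True).clamp(min=0.0)
    return sigma, albedo * cos                            # density + shaded color

sigma, rgb = shade(torch.rand(128, 3), torch.tensor([0.0, 2.0, 0.0]))
```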
This list is automatically generated from the titles and abstracts of the papers on this site.