ROI-NeRFs: Hi-Fi Visualization of Objects of Interest within a Scene by NeRFs Composition
- URL: http://arxiv.org/abs/2502.12673v1
- Date: Tue, 18 Feb 2025 09:24:15 GMT
- Title: ROI-NeRFs: Hi-Fi Visualization of Objects of Interest within a Scene by NeRFs Composition
- Authors: Quoc-Anh Bui, Gilles Rougeron, GĂ©raldine Morin, Simone Gasparini
- Abstract summary: This study addresses the challenge of visualizing objects within large-scale scenes at a high level of detail using Neural Radiance Fields (NeRFs).
The proposed ROI-NeRFs framework divides the scene into a Scene NeRF, which represents the overall scene at moderate detail, and multiple ROI NeRFs that focus on user-defined objects of interest.
In the composition phase, a Ray-level Compositional Rendering technique combines information from the Scene NeRF and ROI NeRFs, allowing simultaneous multi-object rendering composition.
- Abstract: Efficient and accurate 3D reconstruction is essential for applications in cultural heritage. This study addresses the challenge of visualizing objects within large-scale scenes at a high level of detail (LOD) using Neural Radiance Fields (NeRFs). The aim is to improve the visual fidelity of chosen objects while maintaining computational efficiency by focusing on details only for relevant content. The proposed ROI-NeRFs framework divides the scene into a Scene NeRF, which represents the overall scene at moderate detail, and multiple ROI NeRFs that focus on user-defined objects of interest. An object-focused camera selection module automatically groups relevant cameras for each NeRF training during the decomposition phase. In the composition phase, a Ray-level Compositional Rendering technique combines information from the Scene NeRF and ROI NeRFs, allowing simultaneous multi-object rendering composition. Quantitative and qualitative experiments on two real-world datasets, including one of a complex eighteenth-century cultural heritage room, demonstrate superior performance compared to baseline methods, improving LOD in object regions and minimizing artifacts without significantly increasing inference time.
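The ray-level composition described in the abstract can be pictured as ordinary volume-rendering quadrature over a depth-sorted merge of samples coming from several fields. The sketch below is a minimal NumPy illustration, not the paper's implementation; the per-sample tuple format and all toy sample values are assumptions:

```python
import numpy as np

def composite_ray(samples):
    """Alpha-composite (depth, rgb, density, interval) samples from any
    number of NeRFs along one ray, front to back.

    Each sample is (t, rgb, sigma, delta): depth t, a length-3 color,
    volume density sigma, and the interval length delta it covers.
    """
    samples = sorted(samples, key=lambda s: s[0])   # merge fields by depth
    color = np.zeros(3)
    transmittance = 1.0                             # light not yet absorbed
    for _, rgb, sigma, delta in samples:
        alpha = 1.0 - np.exp(-sigma * delta)        # opacity of this interval
        color += transmittance * alpha * np.asarray(rgb, dtype=float)
        transmittance *= 1.0 - alpha
    return color

# Coarse Scene-NeRF samples interleaved with denser ROI-NeRF samples
# on the same ray (all values made up for illustration).
scene_samples = [(1.0, [0.2, 0.2, 0.2], 0.5, 0.5), (3.0, [0.1, 0.1, 0.1], 0.5, 0.5)]
roi_samples = [(2.0, [0.9, 0.1, 0.1], 4.0, 0.1), (2.1, [0.9, 0.1, 0.1], 4.0, 0.1)]
pixel = composite_ray(scene_samples + roi_samples)
```

Because the merge is by depth, an ROI object correctly occludes, and is occluded by, the surrounding scene without any per-pixel masks.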
Related papers
- Global-guided Focal Neural Radiance Field for Large-scale Scene Rendering [12.272724419136575]
We present a global-guided focal neural radiance field (GF-NeRF) that achieves high-fidelity rendering of large-scale scenes.
Our method achieves high-fidelity, natural rendering results on various types of large-scale datasets.
arXiv Detail & Related papers (2024-03-19T15:45:54Z) - A Comparative Neural Radiance Field (NeRF) 3D Analysis of Camera Poses
from HoloLens Trajectories and Structure from Motion [0.0]
We present a workflow for high-resolution 3D reconstructions almost directly from HoloLens data using Neural Radiance Fields (NeRFs).
NeRFs are trained using a set of camera poses and associated images as input to estimate density and color values for each position.
Results show that the internal camera poses lead to NeRF convergence with a PSNR of 25 dB with a simple rotation around the x-axis and enable a 3D reconstruction.
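PSNR, the metric quoted above, is just a log-scaled mean squared error between a rendered image and its ground truth. A quick sketch (the toy images are assumptions):

```python
import numpy as np

def psnr(rendered, reference, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, max_val]."""
    mse = np.mean((np.asarray(rendered, float) - np.asarray(reference, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# A toy 4x4 grayscale render that is off by 0.1 everywhere:
gt = np.full((4, 4), 0.5)
rendered = gt + 0.1
print(round(psnr(rendered, gt), 2))  # 20.0 (MSE = 0.01, 10*log10(1/0.01))
```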
arXiv Detail & Related papers (2023-04-20T22:17:28Z) - A Large-Scale Outdoor Multi-modal Dataset and Benchmark for Novel View
Synthesis and Implicit Scene Reconstruction [26.122654478946227]
Neural Radiance Fields (NeRF) has achieved impressive results in single object scene reconstruction and novel view synthesis.
There is no unified outdoor scene dataset for large-scale NeRF evaluation due to expensive data acquisition and calibration costs.
In this paper, we propose a large-scale outdoor multi-modal dataset, OMMO dataset, containing complex land objects and scenes with calibrated images, point clouds and prompt annotations.
arXiv Detail & Related papers (2023-01-17T10:15:32Z) - ViewNeRF: Unsupervised Viewpoint Estimation Using Category-Level Neural
Radiance Fields [35.89557494372891]
We introduce ViewNeRF, a Neural Radiance Field-based viewpoint estimation method.
Our method uses an analysis by synthesis approach, combining a conditional NeRF with a viewpoint predictor and a scene encoder.
Our model shows competitive results on synthetic and real datasets.
arXiv Detail & Related papers (2022-12-01T11:16:11Z) - AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware
Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly cost-free, introducing no obvious training or testing overhead.
arXiv Detail & Related papers (2022-11-17T17:22:28Z) - NeRF-SOS: Any-View Self-supervised Object Segmentation from Complex
Real-World Scenes [80.59831861186227]
This paper explores self-supervised learning for object segmentation using NeRF in complex real-world scenes.
Our framework, NeRF with Self-supervised Object Segmentation (NeRF-SOS), encourages NeRF models to distill compact geometry-aware segmentation clusters.
It consistently surpasses other 2D-based self-supervised baselines and predicts finer semantic masks than existing supervised counterparts.
arXiv Detail & Related papers (2022-09-19T06:03:17Z) - CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural
Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
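The occupancy-grid-aided sampling idea can be sketched as follows: before querying the expensive color MLP, test each candidate depth along the ray against a coarse 3D occupancy mask and keep only depths inside occupied voxels. This is a simplification, with an axis-aligned boolean grid standing in for CLONeR's differentiable, LiDAR-built OGM; the grid layout and toy values are assumptions:

```python
import numpy as np

def occupied_depths(origin, direction, grid, voxel_size, t_max, n_steps=64):
    """Return ray depths whose sample points land in occupied voxels of a
    boolean 3D grid anchored at the world origin."""
    ts = np.linspace(0.0, t_max, n_steps)
    pts = origin[None, :] + ts[:, None] * direction[None, :]   # points on ray
    idx = np.floor(pts / voxel_size).astype(int)               # voxel indices
    inside = np.all((idx >= 0) & (idx < np.array(grid.shape)), axis=1)
    keep = np.zeros(n_steps, dtype=bool)
    keep[inside] = grid[tuple(idx[inside].T)]                  # occupancy test
    return ts[keep]   # spend NeRF queries only where the grid is occupied

grid = np.zeros((8, 8, 8), dtype=bool)
grid[4:6, 4:6, 4:6] = True                     # a single occupied block
origin = np.array([0.0, 4.5, 4.5])
direction = np.array([1.0, 0.0, 0.0])          # unit ray along +x
depths = occupied_depths(origin, direction, grid, voxel_size=1.0, t_max=8.0)
```

Only depths crossing the occupied block (x in [4, 6)) survive, so empty space costs no MLP evaluations.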
arXiv Detail & Related papers (2022-09-02T17:44:50Z) - Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual
Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks, collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
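Mega-NeRF's partitioning step can be caricatured as nearest-centroid assignment: give each submodule a spatial centroid and route every training ray (or pixel) to the closest one. The actual method uses a geometry-aware rule rather than this toy distance test; the centroid positions and sample rays below are assumptions:

```python
import numpy as np

def partition_rays(ray_points, centroids):
    """Assign each ray to the nearest submodule centroid. `ray_points` are
    representative 3D points per ray (e.g. midpoints of the visible segment)."""
    d = np.linalg.norm(ray_points[:, None, :] - centroids[None, :, :], axis=-1)
    return np.argmin(d, axis=1)               # submodule index per ray

# Four submodules tiling a flat scene, and a few sample rays:
centroids = np.array([[0., 0., 0.], [10., 0., 0.], [0., 10., 0.], [10., 10., 0.]])
rays = np.array([[1., 1., 0.], [9., 1., 0.], [2., 9., 0.], [8., 8., 0.]])
labels = partition_rays(rays, centroids)
print(labels)  # [0 1 2 3] -- each ray trains only its own submodule
```

Because each pixel lands in exactly one partition, the submodules can be trained fully in parallel.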
arXiv Detail & Related papers (2021-12-20T17:40:48Z) - BungeeNeRF: Progressive Neural Radiance Field for Extreme Multi-scale
Scene Rendering [145.95688637309746]
We introduce BungeeNeRF, a progressive neural radiance field that achieves level-of-detail rendering across drastically varied scales.
We demonstrate the superiority of BungeeNeRF in modeling diverse multi-scale scenes with drastically varying views on multiple data sources.
arXiv Detail & Related papers (2021-12-10T13:16:21Z) - iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
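iNeRF's analysis-by-synthesis loop can be illustrated with a toy 1D stand-in: render an image from a pose guess, compare it photometrically to the observation, and descend the error. Everything here (the Gaussian-blob "renderer", the finite-difference gradient, the learning rate) is an illustrative assumption, not iNeRF's actual pipeline:

```python
import numpy as np

def render(pose_x):
    """Stand-in differentiable 'renderer': a blurry blob tracking the pose."""
    xs = np.linspace(0.0, 1.0, 32)
    return np.exp(-((xs - pose_x) ** 2) / 0.1)

def loss(p, target):
    return np.sum((render(p) - target) ** 2)    # photometric error

target = render(0.7)        # observation from the unknown true pose
pose = 0.3                  # initial pose guess
lr, eps = 0.003, 1e-4
for _ in range(300):        # analysis by synthesis: descend the image error
    grad = (loss(pose + eps, target) - loss(pose - eps, target)) / (2 * eps)
    pose -= lr * grad       # pose drifts toward the true value 0.7
```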
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.