Redefining Recon: Bridging Gaps with UAVs, 360 degree Cameras, and
Neural Radiance Fields
- URL: http://arxiv.org/abs/2401.06143v1
- Date: Thu, 30 Nov 2023 14:21:29 GMT
- Title: Redefining Recon: Bridging Gaps with UAVs, 360 degree Cameras, and
Neural Radiance Fields
- Authors: Hartmut Surmann, Niklas Digakis, Jan-Nicklas Kremer, Julien Meine, Max
Schulte, Niklas Voigt
- Abstract summary: We introduce an innovative approach that synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs)
A NeRF, a specialized neural network, can deduce a 3D representation of any scene using 2D images and then synthesize it from various angles upon request.
We have tested our approach in a recent post-fire scenario, underlining the efficacy of NeRFs even in challenging outdoor environments.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the realm of digital situational awareness during disaster situations,
accurate digital representations, like 3D models, play an indispensable role.
To ensure the safety of rescue teams, robotic platforms are often deployed to
generate these models. In this paper, we introduce an innovative approach that
synergizes the capabilities of compact Unmanned Aerial Vehicles (UAVs), smaller
than 30 cm, equipped with 360 degree cameras and the advances of Neural
Radiance Fields (NeRFs). A NeRF, a specialized neural network, can deduce a 3D
representation of any scene using 2D images and then synthesize it from various
angles upon request. This method is especially tailored for urban environments
that have experienced significant destruction, where the structural integrity
of buildings is compromised to the point of barring entry, as is commonly
observed after earthquakes and severe fires. We have tested our approach in a
recent post-fire scenario, underlining the efficacy of NeRFs even in
challenging outdoor environments characterized by water, snow, varying light
conditions, and reflective surfaces.
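The NeRF mechanism the abstract describes — an MLP that maps 3D coordinates to color and density, with novel views produced by integrating samples along camera rays — can be sketched in miniature. This is an illustrative, untrained toy (random weights, made-up ray), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_encoding(x, n_freqs=4):
    """Map coordinates to sin/cos features, as in the original NeRF formulation."""
    feats = [x]
    for i in range(n_freqs):
        feats.append(np.sin(2.0**i * x))
        feats.append(np.cos(2.0**i * x))
    return np.concatenate(feats, axis=-1)

# Tiny randomly initialized MLP standing in for a trained radiance field.
D_in = 3 + 3 * 2 * 4              # xyz plus encoded features
W1 = rng.normal(0, 0.1, (D_in, 32))
W2 = rng.normal(0, 0.1, (32, 4))  # outputs: RGB (3) + density (1)

def radiance_field(points):
    h = np.maximum(positional_encoding(points) @ W1, 0.0)  # ReLU hidden layer
    out = h @ W2
    rgb = 1.0 / (1.0 + np.exp(-out[:, :3]))  # sigmoid -> color in [0, 1]
    sigma = np.log1p(np.exp(out[:, 3]))      # softplus -> density >= 0
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Numerical volume rendering: weight each sample's color by its
    transmittance times its alpha, then sum along the ray."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction
    rgb, sigma = radiance_field(points)
    delta = np.diff(t, append=far)           # distances between samples
    alpha = 1.0 - np.exp(-sigma * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)

color = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(color.shape)  # (3,)
```

Training fits the MLP weights so that rendered rays reproduce the captured 2D images (here, the 360 degree UAV footage); once fitted, the same rendering loop synthesizes views from arbitrary angles.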
Related papers
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation [59.302770084115814]
We introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene.
We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images.
arXiv Detail & Related papers (2023-10-04T19:35:56Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Fields (NeRF) methods suffer from the existence of reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Pre-NeRF 360: Enriching Unbounded Appearances for Neural Radiance Fields [8.634008996263649]
We propose a new framework to boost the performance of NeRF-based architectures.
Our solution overcomes several obstacles that plagued earlier versions of NeRF.
We introduce an updated version of the Nutrition5k dataset, known as the N5k360 dataset.
arXiv Detail & Related papers (2023-03-21T23:29:38Z)
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- OmniNeRF: Hybriding Omnidirectional Distance and Radiance fields for Neural Surface Reconstruction [22.994952933576684]
Ground-breaking research in the neural radiance field (NeRF) has dramatically improved the representation quality of 3D objects.
Some later studies improved NeRF by building truncated signed distance fields (TSDFs) but still suffer from the problem of blurred surfaces in 3D reconstruction.
In this work, this surface ambiguity is addressed by proposing a novel way of 3D shape representation, OmniNeRF.
arXiv Detail & Related papers (2022-09-27T14:39:23Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
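The spatial partitioning Mega-NeRF describes can be illustrated with a nearest-centroid assignment. The centroids and camera positions below are made-up data (the paper actually partitions pixels, not just cameras); this is only a hedged sketch of the idea that each cell's data trains one submodule in parallel:

```python
import numpy as np

centroids = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # submodule cell centers
cameras = np.array([[1.0, 1.0], [9.0, 0.5], [0.5, 9.0], [8.0, 1.0]])  # ground positions

# Distance from every camera to every centroid, then argmin per camera.
d = np.linalg.norm(cameras[:, None, :] - centroids[None, :, :], axis=-1)
assignment = d.argmin(axis=1)
print(assignment.tolist())  # [0, 1, 2, 1]
```

Each group of images (here, cameras 1 and 3 for cell 1) would then train its own NeRF submodule independently, which is what makes city-scale captures tractable.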
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
- Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields [43.69542675078766]
We present an extension of mip-NeRF that uses a non-linear scene parameterization, online distillation, and a novel distortion-based regularizer to overcome the challenges presented by unbounded scenes.
Our model, which we dub "mip-NeRF 360," reduces mean-squared error by 54% compared to mip-NeRF, and is able to produce realistic synthesized views and detailed depth maps.
arXiv Detail & Related papers (2021-11-23T18:51:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.