AONeuS: A Neural Rendering Framework for Acoustic-Optical Sensor Fusion
- URL: http://arxiv.org/abs/2402.03309v3
- Date: Fri, 2 Aug 2024 19:02:51 GMT
- Title: AONeuS: A Neural Rendering Framework for Acoustic-Optical Sensor Fusion
- Authors: Mohamad Qadri, Kevin Zhang, Akshay Hinduja, Michael Kaess, Adithya Pediredla, Christopher A. Metzler
- Abstract summary: Underwater perception and 3D surface reconstruction are challenging problems with broad applications in construction, security, marine archaeology, and environmental monitoring.
Our work develops a physics-based multimodal acoustic-optical neural surface reconstruction framework.
By fusing these complementary modalities, our framework can reconstruct accurate high-resolution 3D surfaces from measurements captured over heavily restricted baselines.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Underwater perception and 3D surface reconstruction are challenging problems with broad applications in construction, security, marine archaeology, and environmental monitoring. Treacherous operating conditions, fragile surroundings, and limited navigation control often dictate that submersibles restrict their range of motion and, thus, the baseline over which they can capture measurements. In the context of 3D scene reconstruction, it is well known that smaller baselines make reconstruction more challenging. Our work develops a physics-based multimodal acoustic-optical neural surface reconstruction framework (AONeuS) capable of effectively integrating high-resolution RGB measurements with low-resolution depth-resolved imaging sonar measurements. By fusing these complementary modalities, our framework can reconstruct accurate high-resolution 3D surfaces from measurements captured over heavily restricted baselines. Through extensive simulations and in-lab experiments, we demonstrate that AONeuS dramatically outperforms recent RGB-only and sonar-only inverse-differentiable-rendering-based surface reconstruction methods. A website visualizing the results of our paper is located at: https://aoneus.github.io/
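The abstract does not spell out the objective, but the fusion idea can be illustrated with a short sketch: a single NeuS-style neural SDF supervised jointly by an RGB volume-rendering loss and a sonar range loss. Everything below is a hypothetical simplification; the names SDFNetwork, ray_weights, and fused_loss, the logistic-CDF weight conversion, and the per-ray sonar supervision are assumptions, not AONeuS's actual implementation.

```python
# Hypothetical sketch of acoustic-optical fusion on a shared neural SDF.
# Names and details are assumptions, not AONeuS's actual implementation.
import torch
import torch.nn as nn

class SDFNetwork(nn.Module):
    """Tiny MLP mapping a 3D point to a signed distance."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1))

    def forward(self, x):                 # x: (..., 3)
        return self.net(x).squeeze(-1)    # signed distance, shape (...)

def ray_weights(sdf_net, origins, dirs, t_vals, inv_s=64.0):
    """NeuS-style rendering weights from SDF samples along rays.

    origins/dirs: (rays, 3); t_vals: (rays, samples), sorted depths.
    Returns weights of shape (rays, samples - 1).
    """
    pts = origins[:, None, :] + t_vals[:, :, None] * dirs[:, None, :]
    sdf = sdf_net(pts)                                   # (rays, samples)
    cdf = torch.sigmoid(-sdf * inv_s)                    # logistic CDF of SDF
    alpha = ((cdf[:, :-1] - cdf[:, 1:]) / (cdf[:, :-1] + 1e-6)).clamp(0, 1)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1 - alpha + 1e-6], -1), -1)
    return alpha * trans[:, :-1]                         # alpha * transmittance

def fused_loss(sdf_net, radiance, cam, sonar, lam_sonar=1.0):
    """Joint objective: RGB photometric term + sonar range term.

    radiance: callable mapping (rays, samples, 3) points to RGB (a second MLP).
    cam: dict with 'origins', 'dirs', 't_vals', 'colors'.
    sonar: dict with 'origins', 'dirs', 't_vals', 'ranges'.
    """
    # Optical branch: volume-render color along camera rays.
    w_c = ray_weights(sdf_net, cam["origins"], cam["dirs"], cam["t_vals"])
    pts = cam["origins"][:, None, :] \
        + cam["t_vals"][:, 1:, None] * cam["dirs"][:, None, :]
    rgb = (w_c[..., None] * radiance(pts)).sum(dim=1)
    loss_rgb = (rgb - cam["colors"]).abs().mean()
    # Acoustic branch: imaging sonar resolves range, so supervise the
    # rendered expected range.  (Real sonar rendering also integrates over
    # the elevation arc; that is omitted here for brevity.)
    w_s = ray_weights(sdf_net, sonar["origins"], sonar["dirs"], sonar["t_vals"])
    exp_range = (w_s * sonar["t_vals"][:, 1:]).sum(dim=1)
    loss_sonar = (exp_range - sonar["ranges"]).abs().mean()
    # Both terms backpropagate into the same SDF.
    return loss_rgb + lam_sonar * loss_sonar
```

The point the sketch conveys is that both losses backpropagate into the same SDF, so the sonar's depth resolution constrains geometry along the viewing direction while the RGB images constrain fine lateral detail.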
Related papers
- GSurf: 3D Reconstruction via Signed Distance Fields with Direct Gaussian Supervision
Surface reconstruction from multi-view images is a core challenge in 3D vision.
Recent studies have explored signed distance fields (SDF) within Neural Radiance Fields (NeRF) to achieve high-fidelity surface reconstructions.
We introduce GSurf, a novel end-to-end method for learning a signed distance field directly from Gaussian primitives.
GSurf achieves faster training and rendering speeds while delivering 3D reconstruction quality comparable to neural implicit surface methods, such as VolSDF and NeuS.
arXiv Detail & Related papers (2024-11-24T05:55:19Z)
- Simultaneous Map and Object Reconstruction
We present a method for dynamic surface reconstruction of large-scale urban scenes from LiDAR.
We take inspiration from recent novel view synthesis methods and pose the reconstruction problem as a global optimization.
By careful modeling of continuous-time motion, our reconstructions can compensate for the rolling shutter effects of rotating LiDAR sensors.
arXiv Detail & Related papers (2024-06-19T23:53:31Z)
- NeSLAM: Neural Implicit Mapping and Self-Supervised Feature Tracking With Depth Completion and Denoising
We present NeSLAM, a framework that achieves accurate and dense depth estimation, robust camera tracking, and realistic synthesis of novel views.
Experiments on various indoor datasets demonstrate the effectiveness and accuracy of the system in reconstruction, tracking quality, and novel view synthesis.
arXiv Detail & Related papers (2024-03-29T07:59:37Z)
- UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections
We propose UniSDF, a general-purpose 3D reconstruction method that can reconstruct large, complex scenes with reflections.
Our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces.
arXiv Detail & Related papers (2023-12-20T18:59:42Z)
- Indoor Scene Reconstruction with Fine-Grained Details Using Hybrid Representation and Normal Prior Enhancement
The reconstruction of indoor scenes from multi-view RGB images is challenging due to the coexistence of flat and texture-less regions.
Recent methods leverage neural radiance fields aided by predicted surface normal priors to recover the scene geometry.
This work aims to reconstruct high-fidelity surfaces with fine-grained details by addressing the limitations of these normal-prior-based approaches.
arXiv Detail & Related papers (2023-09-14T12:05:29Z)
- Incremental Dense Reconstruction from Monocular Video with Guided Sparse Feature Volume Fusion
This letter proposes a real-time feature volume-based dense reconstruction method that predicts TSDF values from a novel sparsified deep feature volume.
An uncertainty-aware multi-view stereo network is leveraged to infer initial voxel locations of the physical surface in a sparse feature volume.
Our method is shown to produce more complete reconstructions with finer detail in many cases.
arXiv Detail & Related papers (2023-05-24T09:06:01Z)
- Looking Through the Glass: Neural Surface Reconstruction Against High Specular Reflections
We present a novel surface reconstruction framework, NeuS-HSR, based on implicit neural rendering.
In NeuS-HSR, the object surface is parameterized as an implicit signed distance function.
We show that NeuS-HSR outperforms state-of-the-art approaches for accurate and robust target surface reconstruction against HSR.
arXiv Detail & Related papers (2023-04-18T02:34:58Z)
- MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction
State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views, but their performance drops significantly for larger, more complex scenes and for scenes captured from sparse viewpoints.
This is caused primarily by the inherent ambiguity in the RGB reconstruction loss, which does not provide enough constraints.
Motivated by recent advances in monocular geometry prediction, we explore the utility these cues provide for improving neural implicit surface reconstruction (a minimal sketch of such a cue-based loss appears after this list).
arXiv Detail & Related papers (2022-06-01T17:58:15Z)
- Neural 3D Reconstruction in the Wild
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- HRBF-Fusion: Accurate 3D reconstruction from RGB-D data using on-the-fly implicits
Reconstruction of high-fidelity 3D objects or scenes is a fundamental research problem.
Recent advances in RGB-D fusion have demonstrated the potential of producing 3D models from consumer-level RGB-D cameras.
Existing approaches suffer from the accumulation of errors in camera tracking and distortion in the reconstruction.
arXiv Detail & Related papers (2022-02-03T20:20:32Z)
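As referenced from the MonoSDF entry above, here is a minimal sketch of how a monocularly predicted normal prior can supervise a neural SDF, in the spirit of MonoSDF and the normal-prior indoor-scene work in this list. The function names and the exact L1-plus-angular form are assumptions, and the papers differ in detail; sdf_net can be any differentiable callable mapping 3D points to signed distances, such as the SDFNetwork sketch above.

```python
# Hypothetical sketch of a monocular normal prior on a neural SDF, in the
# spirit of MonoSDF-style cue supervision; the listed papers differ in detail.
import torch
import torch.nn.functional as F

def sdf_normals(sdf_net, pts):
    """Unit normals as the normalized gradient of the SDF at pts (N, 3)."""
    pts = pts.detach().requires_grad_(True)
    sdf = sdf_net(pts)
    (grad,) = torch.autograd.grad(sdf.sum(), pts, create_graph=True)
    return F.normalize(grad, dim=-1)

def normal_prior_loss(sdf_net, surface_pts, pred_normals):
    """Penalize disagreement between SDF normals and predicted normals.

    surface_pts: (N, 3) points near the surface (e.g. from ray samples);
    pred_normals: (N, 3) unit normals from a monocular predictor.
    """
    n = sdf_normals(sdf_net, surface_pts)
    l1 = (n - pred_normals).abs().sum(dim=-1)        # L1 on components
    ang = 1.0 - (n * pred_normals).sum(dim=-1)       # angular (1 - cosine)
    return (l1 + ang).mean()
```

Because the normals come from the SDF gradient via autograd, this term shapes the implicit surface directly in the flat, texture-less regions where the photometric loss alone is ambiguous.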
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.