Hyperspectral 3D Mapping of Underwater Environments
- URL: http://arxiv.org/abs/2110.06571v1
- Date: Wed, 13 Oct 2021 08:37:22 GMT
- Title: Hyperspectral 3D Mapping of Underwater Environments
- Authors: Maxime Ferrera, Aurélien Arnaubec, Klemen Istenic, Nuno Gracias,
Touria Bajjouk (IFREMER)
- Abstract summary: We present an initial method for creating hyperspectral 3D reconstructions of underwater environments.
By fusing the data gathered by a classical RGB camera, an inertial navigation system and a hyperspectral push-broom camera, we show that the proposed method creates highly accurate 3D reconstructions with hyperspectral textures.
- Score: 0.7087237546722617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperspectral imaging has been used increasingly for underwater survey
applications in recent years. Because many hyperspectral cameras work as
push-broom scanners, their use is usually limited to the creation of
photo-mosaics built on a flat-surface approximation, with the camera pose
interpolated from dead-reckoning navigation. Yet, because of navigation drift
and the often-invalid flat-surface assumption, the quality of the resulting
photo-mosaics is frequently too low to support adequate analysis. In this
paper we present an initial method for creating hyperspectral 3D
reconstructions of underwater environments. By fusing the data gathered by a
classical RGB camera, an inertial navigation system and a hyperspectral
push-broom camera, we show that the proposed method creates highly accurate 3D
reconstructions with hyperspectral textures. We combine techniques from
simultaneous localization and mapping, structure-from-motion and 3D
reconstruction, and use them to create 3D models with hyperspectral texture,
overcoming both the flat-surface assumption and the classical limitations of
dead-reckoning navigation.
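The core idea of the abstract, interpolating a pose for each push-broom scanline from timestamped navigation samples and projecting its spectral pixels onto the reconstructed 3D surface, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the linear position interpolation, the single-triangle "mesh", and all function names below are illustrative assumptions.

```python
import numpy as np

def interpolate_position(t, t0, p0, t1, p1):
    """Linearly interpolate a camera position between two timestamped
    navigation samples (a simple stand-in for full INS/SLAM pose
    interpolation, which would also interpolate orientation)."""
    a = (t - t0) / (t1 - t0)
    return (1.0 - a) * np.asarray(p0) + a * np.asarray(p1)

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.
    Returns the hit distance along `direction`, or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:              # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * np.dot(e2, q)
    return t if t > eps else None

def texture_scanline(spectra, origins, directions, triangle):
    """Attach each scanline pixel's spectrum to its 3D hit point."""
    v0, v1, v2 = triangle
    samples = []
    for spec, o, d in zip(spectra, origins, directions):
        t = ray_triangle_intersect(o, d, v0, v1, v2)
        if t is not None:
            samples.append((o + t * d, spec))  # (3D point, spectrum)
    return samples
```

In the actual system, the rays would come from the calibrated push-broom geometry and the surface from the SfM/SLAM reconstruction; here, a single triangle stands in for the mesh to keep the sketch self-contained.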
Related papers
- UniK3D: Universal Camera Monocular 3D Estimation [62.06785782635153]
We present UniK3D, the first generalizable method for monocular 3D estimation able to model any camera.
Our method introduces a spherical 3D representation which allows for better disentanglement of camera and scene geometry.
A comprehensive zero-shot evaluation on 13 diverse datasets demonstrates the state-of-the-art performance of UniK3D across 3D, depth, and camera metrics.
arXiv Detail & Related papers (2025-03-20T17:49:23Z)
- IM360: Textured Mesh Reconstruction for Large-scale Indoor Mapping with 360$^\circ$ Cameras [53.53895891356167]
We present a novel 3D reconstruction pipeline for 360$^\circ$ cameras for 3D mapping and rendering of indoor environments.
Our approach (IM360) leverages the wide field of view of omnidirectional images and integrates the spherical camera model into every core component of the SfM pipeline.
We evaluate our pipeline on large-scale indoor scenes from the Matterport3D and Stanford2D3D datasets.
arXiv Detail & Related papers (2025-02-18T05:15:19Z)
- FreeSplatter: Pose-free Gaussian Splatting for Sparse-view 3D Reconstruction [59.77970844874235]
We present FreeSplatter, a feed-forward reconstruction framework capable of generating high-quality 3D Gaussians from sparse-view images.
FreeSplatter is built upon a streamlined transformer architecture, comprising sequential self-attention blocks.
We show FreeSplatter's potential in enhancing the productivity of downstream applications, such as text/image-to-3D content creation.
arXiv Detail & Related papers (2024-12-12T18:52:53Z)
- Dense Dispersed Structured Light for Hyperspectral 3D Imaging of Dynamic Scenes [9.050557698554696]
Hyperspectral 3D imaging captures both depth maps and hyperspectral images, enabling geometric and material analysis.
Recent methods achieve high spectral and depth accuracy; however, they require long acquisition times, often over several minutes, or rely on large, expensive systems.
We present an accurate hyperspectral 3D imaging method for dynamic scenes that utilizes stereo RGB cameras and an affordable diffraction grating film.
arXiv Detail & Related papers (2024-12-02T05:30:18Z)
- PGSR: Planar-based Gaussian Splatting for Efficient and High-Fidelity Surface Reconstruction [37.14913599050765]
We propose a fast planar-based Gaussian splatting reconstruction representation (PGSR) to achieve high-fidelity surface reconstruction.
We then introduce single-view geometric, multi-view photometric, and geometric regularization to preserve global geometric accuracy.
Our method achieves fast training and rendering while maintaining high-fidelity rendering and geometric reconstruction, outperforming 3DGS-based and NeRF-based methods.
arXiv Detail & Related papers (2024-06-10T17:59:01Z)
- Normal-guided Detail-Preserving Neural Implicit Function for High-Fidelity 3D Surface Reconstruction [6.4279213810512665]
This paper shows that training neural representations with first-order differential properties (surface normals) leads to highly accurate 3D surface reconstruction.
Experiments demonstrate that our method achieves state-of-the-art reconstruction accuracy with a minimal number of views.
arXiv Detail & Related papers (2024-06-07T11:48:47Z)
- MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior rendering approaches by enabling faster scale awareness and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z)
- Ghost on the Shell: An Expressive Representation of General 3D Shapes [97.76840585617907]
Meshes are appealing since they enable fast physics-based rendering with realistic material and lighting.
Recent work on reconstructing and statistically modeling 3D shapes has critiqued meshes as being topologically inflexible.
We parameterize open surfaces by defining a manifold signed distance field on watertight surfaces.
G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks.
arXiv Detail & Related papers (2023-10-23T17:59:52Z)
- R3D3: Dense 3D Reconstruction of Dynamic Scenes from Multiple Cameras [106.52409577316389]
R3D3 is a multi-camera system for dense 3D reconstruction and ego-motion estimation.
Our approach exploits spatial-temporal information from multiple cameras, and monocular depth refinement.
We show that this design enables a dense, consistent 3D reconstruction of challenging, dynamic outdoor environments.
arXiv Detail & Related papers (2023-08-28T17:13:49Z)
- 3D Reconstruction of Spherical Images based on Incremental Structure from Motion [2.6432771146480283]
This study investigates the algorithms for the relative orientation using spherical correspondences, absolute orientation using 3D correspondences between scene and spherical points, and the cost functions for BA (bundle adjustment) optimization.
An incremental SfM (Structure from Motion) workflow has been proposed for spherical images using the above-mentioned algorithms.
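A key ingredient of such spherical SfM pipelines is replacing pinhole image points with unit bearing vectors, so the epipolar constraint b2^T E b1 = 0 applies directly without pinhole intrinsics. The sketch below shows one common equirectangular-pixel-to-bearing conversion; the axis convention and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def pixel_to_bearing(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit bearing vector.

    Longitude spans [-pi, pi) left to right, latitude spans
    [pi/2, -pi/2] top to bottom (one common, but not universal,
    convention for spherical images).
    """
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     -np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def epipolar_residual(E, b1, b2):
    """Epipolar constraint residual b2^T E b1; ~0 for a true match."""
    return float(b2 @ E @ b1)
```

With bearings in place of normalized image points, standard essential-matrix estimation and bundle adjustment carry over to the spherical case largely unchanged.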
arXiv Detail & Related papers (2023-06-22T09:49:28Z)
- 3D reconstruction from spherical images: A review of techniques, applications, and prospects [2.6432771146480283]
3D reconstruction plays an increasingly important role in modern photogrammetric systems.
With the rapid evolution and extensive use of professional and consumer-grade spherical cameras, spherical images show great potential for the 3D modeling of urban and indoor scenes.
This research provides a thorough survey of the state-of-the-art for 3D reconstruction of spherical images in terms of data acquisition, feature detection and matching, image orientation, and dense matching.
arXiv Detail & Related papers (2023-02-09T08:45:27Z)
- Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography [54.36608424943729]
We show that in a "long-burst" (forty-two 12-megapixel RAW frames captured in a two-second sequence) there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z)
- Neural Implicit Surface Reconstruction from Noisy Camera Observations [3.7768557836887138]
We propose a method for learning 3D surfaces from noisy camera parameters.
We show that we can learn camera parameters together with learning the surface representation, and demonstrate good quality 3D surface reconstruction even with noisy camera observations.
arXiv Detail & Related papers (2022-10-02T13:35:51Z)
- 3d sequential image mosaicing for underwater navigation and mapping [0.0]
We propose a modified image mosaicing algorithm that, coupled with image-based real-time navigation and mapping algorithms, provides two visual navigation aids.
The implemented procedure is detailed, and experiments in different underwater scenarios are presented and discussed.
arXiv Detail & Related papers (2021-10-04T12:32:51Z)
- Towards Non-Line-of-Sight Photography [48.491977359971855]
Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce indirect reflections from the hidden objects.
Active NLOS imaging systems rely on the capture of the time of flight of light through the scene.
We propose a new problem formulation, called NLOS photography, to specifically address this deficiency.
arXiv Detail & Related papers (2021-09-16T08:07:13Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.