Dispersed Structured Light for Hyperspectral 3D Imaging
- URL: http://arxiv.org/abs/2311.18287v2
- Date: Mon, 25 Mar 2024 05:34:58 GMT
- Title: Dispersed Structured Light for Hyperspectral 3D Imaging
- Authors: Suhyun Shin, Seokjun Choi, Felix Heide, Seung-Hwan Baek
- Abstract summary: Dispersed Structured Light (DSL) is a cost-effective and compact method for accurate hyperspectral 3D imaging.
DSL achieves spectral accuracy of 18.8nm full-width half-maximum (FWHM) and depth error of 1mm.
- Score: 39.23448863407426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hyperspectral 3D imaging aims to acquire both depth and spectral information of a scene. However, existing methods are either prohibitively expensive and bulky or compromise on spectral and depth accuracy. In this work, we present Dispersed Structured Light (DSL), a cost-effective and compact method for accurate hyperspectral 3D imaging. DSL modifies a traditional projector-camera system by placing a sub-millimeter-thick diffraction grating film in front of the projector. The grating disperses the structured light according to its wavelength. To utilize the dispersed structured light, we devise a model for dispersive projection image formation and a per-pixel hyperspectral 3D reconstruction method. We validate DSL by instantiating a compact experimental prototype. DSL achieves spectral accuracy of 18.8nm full-width half-maximum (FWHM) and depth error of 1mm. We demonstrate that DSL outperforms prior work on practical hyperspectral 3D imaging. DSL promises accurate and practical hyperspectral 3D imaging for diverse application domains, including computer vision and graphics, cultural heritage, geology, and biology.
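As background for the dispersive projection model, the sketch below evaluates the standard first-order grating equation, sin(theta_out) = sin(theta_in) + m * lambda / d, to show how a thin grating film fans each projector ray into wavelength-dependent directions; the groove density is an illustrative assumption, not a value reported in the paper.
```python
# Minimal sketch of the wavelength-dependent dispersion that DSL exploits.
# Assumes a first-order (m = 1) transmission grating; the groove density
# below is a hypothetical placeholder, not taken from the paper.
import numpy as np

GROOVES_PER_MM = 1000                       # hypothetical grating film
PERIOD_M = 1e-3 / GROOVES_PER_MM            # grating period d (here 1 um)

def diffraction_angle(wavelength_m, incidence_rad=0.0, order=1):
    """Grating equation: sin(theta_out) = sin(theta_in) + m * lambda / d."""
    s = np.sin(incidence_rad) + order * wavelength_m / PERIOD_M
    return np.arcsin(s)                     # valid while |s| <= 1

# Visible wavelengths fan out into clearly separated angles, so each color
# of the projected pattern lands at a different scene point; observing those
# shifts is what lets DSL recover spectrum and depth per pixel.
for lam in np.linspace(450e-9, 650e-9, 5):
    print(f"{lam * 1e9:5.0f} nm -> {np.degrees(diffraction_angle(lam)):5.2f} deg")
```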
Related papers
- Dense Dispersed Structured Light for Hyperspectral 3D Imaging of Dynamic Scenes [9.050557698554696]
Hyperspectral 3D imaging captures both depth maps and hyperspectral images, enabling geometric and material analysis.
Recent methods achieve high spectral and depth accuracy; however, they require long acquisition times, often over several minutes, or rely on large, expensive systems.
We present an accurate hyperspectral 3D imaging method for dynamic scenes that utilizes stereo RGB cameras and an affordable diffraction grating film.
arXiv Detail & Related papers (2024-12-02T05:30:18Z)
- MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior representations by enabling faster rendering, scale awareness, and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z)
- Towards 3D Vision with Low-Cost Single-Photon Cameras [24.711165102559438]
We present a method for reconstructing the 3D shape of arbitrary Lambertian objects from measurements by miniature, energy-efficient, low-cost single-photon cameras.
Our work draws a connection between image-based modeling and active range scanning and is a step towards 3D vision with single-photon cameras.
arXiv Detail & Related papers (2024-03-26T15:40:05Z)
- Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging [36.208109063579066]
For 3D imaging, light field cameras are typically single-shot, but they suffer from low spatial resolution and poor depth accuracy.
We propose a phase guided light field algorithm to significantly improve both the spatial and depth resolutions for off-the-shelf light field cameras.
arXiv Detail & Related papers (2023-11-17T15:08:15Z)
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- MoGDE: Boosting Mobile Monocular 3D Object Detection with Ground Depth Estimation [20.697822444708237]
We propose a novel Mono3D framework, called MoGDE, which constantly estimates the corresponding ground depth of an image.
MoGDE outperforms state-of-the-art methods by a large margin and ranks first on the KITTI 3D benchmark.
arXiv Detail & Related papers (2023-03-23T04:06:01Z)
- Neural Radiance Fields Approach to Deep Multi-View Photometric Stereo [103.08512487830669]
We present a modern solution to the multi-view photometric stereo problem (MVPS).
We procure the surface orientation using a photometric stereo (PS) image formation model and blend it with a multi-view neural radiance field representation to recover the object's surface geometry.
Our method performs neural rendering of multi-view images while utilizing surface normals estimated by a deep photometric stereo network.
arXiv Detail & Related papers (2021-10-11T20:20:03Z)
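As background for the entry above: the photometric stereo image formation model it references is, in its classical Lambertian form, a per-pixel linear system. The sketch below shows that textbook solve with synthetic light directions and intensities; it is not the paper's neural method.
```python
# Textbook Lambertian photometric stereo for one pixel: I_k = albedo * dot(n, l_k).
# Light directions and the test normal are synthetic placeholders.
import numpy as np

L = np.array([[0.0, 0.0, 1.0],
              [0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866]])          # K x 3 known unit light directions

def solve_lambertian_ps(I, L):
    """Least-squares solve for b = albedo * n, then split magnitude/direction."""
    b, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(b)
    return albedo, b / albedo

# Render one pixel with a known normal and albedo, then check the recovery.
n_true = np.array([0.2, 0.3, 0.933])
n_true /= np.linalg.norm(n_true)
I = 0.8 * (L @ n_true)                     # Lambertian intensities, albedo 0.8
albedo, normal = solve_lambertian_ps(I, L)
print(albedo, normal)                      # ~0.8 and ~n_true
```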
- Multifocal Stereoscopic Projection Mapping [24.101349988126692]
Current stereoscopic PM technology only satisfies binocular cues and is not capable of providing correct focus cues.
We propose a multifocal approach to mitigate the vergence-accommodation conflict (VAC) in stereoscopic PM.
A 3D CG object is projected from a synchronized high-speed projector only when the virtual image of the projected imagery is located at a desired distance.
arXiv Detail & Related papers (2021-10-08T06:13:10Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.