Event-based Motion-Robust Accurate Shape Estimation for Mixed
Reflectance Scenes
- URL: http://arxiv.org/abs/2311.09652v1
- Date: Thu, 16 Nov 2023 08:12:10 GMT
- Title: Event-based Motion-Robust Accurate Shape Estimation for Mixed
Reflectance Scenes
- Authors: Aniket Dashpute, Jiazhang Wang, James Taylor, Oliver Cossairt, Ashok
Veeraraghavan, Florian Willomitzer
- Abstract summary: We present a novel event-based structured light system that enables fast 3D imaging of mixed reflectance scenes with high accuracy.
We use epipolar constraints that intrinsically enable decomposing the measured reflections into diffuse, two-bounce specular, and other multi-bounce reflections.
The resulting system achieves fast and motion-robust reconstructions of mixed reflectance scenes with less than 500 $\mu$m accuracy.
- Score: 17.446182782836747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event-based structured light systems have recently been introduced as an
exciting alternative to conventional frame-based triangulation systems for the
3D measurements of diffuse surfaces. Important benefits include the fast
capture speed and the high dynamic range provided by the event camera - albeit
at the cost of lower data quality. So far, both low-accuracy event-based as
well as high-accuracy frame-based 3D imaging systems are tailored to a specific
surface type, such as diffuse or specular, and cannot be used for a broader
class of object surfaces ("mixed reflectance scenes"). In this paper, we
present a novel event-based structured light system that enables fast 3D
imaging of mixed reflectance scenes with high accuracy. On the captured events,
we use epipolar constraints that intrinsically enable decomposing the measured
reflections into diffuse, two-bounce specular, and other multi-bounce
reflections. The diffuse objects in the scene are reconstructed using
triangulation. Eventually, the reconstructed diffuse scene parts are used as a
"display" to evaluate the specular scene parts via deflectometry. This novel
procedure allows us to use the entire scene as a virtual screen, using only a
scanning laser and an event camera. The resulting system achieves fast and
motion-robust (14 Hz) reconstructions of mixed reflectance scenes with < 500
$\mu$m accuracy. Moreover, we introduce a "superfast" capture mode (250 Hz) for
the 3D measurement of diffuse scenes.
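To make the epipolar decomposition concrete, below is a minimal Python sketch of how captured events could be split into direct and indirect reflections. This is an illustrative reading of the abstract, not the authors' implementation: the fundamental matrix `F` between the scanning laser and the event camera, the scan-trajectory lookup `laser_pos_at`, and the pixel tolerance `tol_px` are all assumed, hypothetical quantities.

```python
# Illustrative sketch only -- not the paper's actual pipeline.
import numpy as np

# An event camera reports sparse, asynchronous tuples instead of frames:
# (x, y): pixel coordinates, t: timestamp, p: polarity (+1/-1).
Event = np.dtype([("x", np.float64), ("y", np.float64),
                  ("t", np.float64), ("p", np.int8)])

def point_line_distance(pt, line):
    """Distance of pixel pt=(x, y) from the epipolar line ax + by + c = 0."""
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c) / np.hypot(a, b)

def classify_events(events, laser_pos_at, F, tol_px=1.0):
    """Split events into direct (diffuse) and indirect reflections.

    laser_pos_at(t) -> homogeneous projector coordinate of the scanning
    laser at time t (known from the scan trajectory). An event consistent
    with the epipolar line of the current laser direction is treated as a
    direct single-bounce (diffuse) observation; everything else is an
    indirect (specular two-bounce or multi-bounce) reflection.
    """
    direct, indirect = [], []
    for ev in events:
        line = F @ laser_pos_at(ev["t"])      # epipolar line in the camera
        if point_line_distance((ev["x"], ev["y"]), line) < tol_px:
            direct.append(ev)                 # candidate for triangulation
        else:
            indirect.append(ev)               # input to deflectometry step
    return direct, indirect
```

Events classified as direct would feed the triangulation step; the remaining indirect events become the input to the deflectometry evaluation.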
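Likewise, a hedged sketch of the triangulation step for the direct (diffuse) events, using a standard midpoint method between the camera ray and the laser ray. The ray parametrization (origin plus unit direction in world coordinates) is an assumption; the paper may use a different formulation.

```python
# Minimal ray-ray midpoint triangulation -- illustrative, not the authors' code.
import numpy as np

def triangulate_midpoint(o_cam, d_cam, o_las, d_las):
    """Return the 3D point midway between the closest points of the
    camera ray (o_cam + s*d_cam) and the laser ray (o_las + t*d_las)."""
    # Solve for the ray parameters s, t minimizing the inter-ray distance.
    w = o_cam - o_las
    a, b, c = d_cam @ d_cam, d_cam @ d_las, d_las @ d_las
    d, e = d_cam @ w, d_las @ w
    denom = a * c - b * b                  # ~0 when the rays are near-parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p_cam = o_cam + s * d_cam              # closest point on the camera ray
    p_las = o_las + t * d_las              # closest point on the laser ray
    return 0.5 * (p_cam + p_las)           # midpoint estimate of the surface
```

The diffuse surface reconstructed this way is what the abstract then repurposes as a virtual "display" for the deflectometry evaluation of the specular parts.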
Related papers
- E-3DGS: Gaussian Splatting with Exposure and Motion Events [29.042018288378447]
We propose E-3DGS, a novel event-based approach that partitions events into motion and exposure events.
We introduce a novel integration of 3DGS with exposure events for high-quality reconstruction of explicit scene representations.
Our method is faster and delivers better reconstruction quality than event-based NeRF while being more cost-effective than NeRF methods.
arXiv Detail & Related papers (2024-10-22T13:17:20Z)
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- Transientangelo: Few-Viewpoint Surface Reconstruction Using Single-Photon Lidar [8.464054039931245]
Lidar captures 3D scene geometry by emitting pulses of light toward a target and recording the speed-of-light time delay of the reflected light; the depth relation behind this delay is given after this list.
However, conventional lidar systems do not output the raw, captured waveforms of backscattered light.
We develop new regularization strategies that improve robustness to photon noise, enabling accurate surface reconstruction with as few as 10 photons per pixel.
arXiv Detail & Related papers (2024-08-22T08:12:09Z)
- UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections [92.38975002642455]
We propose UniSDF, a general-purpose 3D reconstruction method that can reconstruct large complex scenes with reflections.
Our method is able to robustly reconstruct complex large-scale scenes with fine details and reflective surfaces.
arXiv Detail & Related papers (2023-12-20T18:59:42Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach in simulated autonomous driving sequences and real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
- TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis [32.878225196378374]
We introduce a neural representation based on an image formation model for continuous-wave ToF cameras.
We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions.
arXiv Detail & Related papers (2021-09-30T17:12:59Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light.
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
- Event-based Stereo Visual Odometry [42.77238738150496]
We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
We seek to maximize the temporal consistency of stereo event-based data while using a simple and efficient representation.
arXiv Detail & Related papers (2020-07-30T15:53:28Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
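As referenced in the Transientangelo summary above, the depth measured by pulsed lidar follows the standard round-trip time-of-flight relation: a pulse that returns after a delay $\Delta t$ places the surface at depth $d = \frac{c\,\Delta t}{2}$, where $c$ is the speed of light and the factor of two accounts for the out-and-back travel.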