Real-time Non-line-of-Sight imaging of dynamic scenes
- URL: http://arxiv.org/abs/2010.12737v1
- Date: Sat, 24 Oct 2020 01:40:06 GMT
- Title: Real-time Non-line-of-Sight imaging of dynamic scenes
- Authors: Ji Hyun Nam, Eric Brandt, Sebastian Bauer, Xiaochun Liu, Eftychios
Sifakis, Andreas Velten
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-Line-of-Sight (NLOS) imaging aims at recovering the 3D geometry of
objects that are hidden from the direct line of sight. In the past, this method
has suffered from the weak available multibounce signal limiting scene size,
capture speed, and reconstruction quality. While algorithms capable of
reconstructing scenes at several frames per second have been demonstrated,
real-time NLOS video has only been demonstrated for retro-reflective objects
where the NLOS signal strength is enhanced by 4 orders of magnitude or more.
Furthermore, it has also been noted that the signal-to-noise ratio of
reconstructions in NLOS methods drops quickly with distance, and past
reconstructions have therefore been limited to small scenes with depths of a
few meters. Models of noise and resolution in the scene have been
simplistic, ignoring many of the complexities of the problem. We show that SPAD
(Single-Photon Avalanche Diode) array detectors with a total of just 28 pixels
combined with a specifically extended Phasor Field reconstruction algorithm can
reconstruct live real-time videos of non-retro-reflective NLOS scenes. We
provide an analysis of the Signal-to-Noise-Ratio (SNR) of our reconstructions
and show that for our method it is possible to reconstruct the scene such that
SNR, motion blur, angular resolution, and depth resolution are all independent
of scene size, suggesting that reconstruction of very large scenes may be
possible. In the future, the light efficiency for NLOS imaging systems can be
improved further by adding more pixels to the sensor array.
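The reconstruction principle behind the paper's Phasor Field approach — convolving measured time-resolved histograms with a complex virtual wavelet, then coherently backprojecting the resulting phasors into the hidden volume — can be illustrated with a minimal sketch. This is a toy, noise-free illustration assuming a confocal capture geometry and idealized delta-function histograms, not the authors' optimized real-time implementation; the function name and all parameters (`lam`, `cycles`) are invented for this example.

```python
import numpy as np

C = 3e8  # speed of light [m/s]

def phasor_field_backproject(H, wall_pts, voxels, dt, lam=0.1, cycles=4):
    """Toy phasor-field backprojection for confocal NLOS histograms.

    H        : (P, T) photon-count histogram per sampled wall point
    wall_pts : (P, 3) wall sample positions
    voxels   : (V, 3) hidden-scene query points
    dt       : histogram time-bin width [s]
    lam      : virtual illumination wavelength [m]
    cycles   : number of carrier cycles under the Gaussian envelope
    """
    P, T = H.shape
    f_c = C / lam                           # virtual carrier frequency
    sigma = cycles / f_c                    # Gaussian envelope width [s]
    tk = (np.arange(T) - T // 2) * dt       # kernel time axis, centered at 0
    kernel = np.exp(2j * np.pi * f_c * tk) * np.exp(-tk**2 / (2 * sigma**2))
    # Convolve every histogram with the complex virtual wavelet (circular
    # FFT convolution; ifftshift keeps the kernel zero-phase at t = 0).
    Kf = np.fft.fft(np.fft.ifftshift(kernel))
    Hk = np.fft.ifft(np.fft.fft(H, axis=1) * Kf[None, :], axis=1)
    # Coherently sum each wall point's phasor at the round-trip travel time:
    # phasors align at true scatterer locations and cancel elsewhere.
    d = np.linalg.norm(voxels[:, None, :] - wall_pts[None, :, :], axis=2)
    idx = np.clip(np.round(2 * d / C / dt).astype(int), 0, T - 1)  # (V, P)
    vals = Hk[np.arange(P)[None, :], idx]                          # (V, P)
    return np.abs(vals.sum(axis=1))         # (V,) reconstructed intensity
```

With a single hidden point target and simulated delta histograms, the coherent sum peaks at the voxel matching the target, while mismatched depths suffer phase cancellation — the wave-camera intuition the Phasor Field framework formalizes.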
Related papers
- Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized
Photography [54.36608424943729]
We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
- NeRFusion: Fusing Radiance Fields for Large-Scale Scene Reconstruction [50.54946139497575]
We propose NeRFusion, a method that combines the advantages of NeRF and TSDF-based fusion techniques to achieve efficient large-scale reconstruction and photo-realistic rendering.
We demonstrate that NeRFusion achieves state-of-the-art quality on both large-scale indoor and small-scale object scenes, with substantially faster reconstruction than NeRF and other recent methods.
arXiv Detail & Related papers (2022-03-21T18:56:35Z)
- TöRF: Time-of-Flight Radiance Fields for Dynamic Scene View Synthesis [32.878225196378374]
We introduce a neural representation based on an image formation model for continuous-wave ToF cameras.
We show that this approach improves robustness of dynamic scene reconstruction to erroneous calibration and large motions.
arXiv Detail & Related papers (2021-09-30T17:12:59Z)
- Towards Non-Line-of-Sight Photography [48.491977359971855]
Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce indirect reflections from the hidden objects.
Active NLOS imaging systems rely on the capture of the time of flight of light through the scene.
We propose a new problem formulation, called NLOS photography, to specifically address this deficiency.
arXiv Detail & Related papers (2021-09-16T08:07:13Z)
- Real-time dense 3D Reconstruction from monocular video data captured by low-cost UAVs [0.3867363075280543]
Real-time 3D reconstruction enables fast dense mapping of the environment which benefits numerous applications, such as navigation or live evaluation of an emergency.
In contrast to most real-time capable approaches, our approach does not need an explicit depth sensor.
By exploiting the self-motion of the unmanned aerial vehicle (UAV) flying with oblique view around buildings, we estimate both camera trajectory and depth for selected images with enough novel content.
arXiv Detail & Related papers (2021-04-21T13:12:17Z)
- HDR Video Reconstruction with Tri-Exposure Quad-Bayer Sensors [14.844162451328321]
We propose a novel high dynamic range (HDR) video reconstruction method with new tri-exposure quad-Bayer sensors.
Thanks to the larger number of exposure sets and their spatially uniform deployment over a frame, they are more robust to noise and spatial artifacts than previous spatially varying exposure (SVE) HDR video methods.
We show that tri-exposure quad-Bayer capture with our solution outperforms previous reconstruction methods.
arXiv Detail & Related papers (2021-03-19T18:40:09Z)
- Polka Lines: Learning Structured Illumination and Reconstruction for Active Stereo [52.68109922159688]
We introduce a novel differentiable image formation model for active stereo, relying on both wave and geometric optics, and a novel trinocular reconstruction network.
The jointly optimized pattern, which we dub "Polka Lines," together with the reconstruction network, achieve state-of-the-art active-stereo depth estimates across imaging conditions.
arXiv Detail & Related papers (2020-11-26T04:02:43Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.