Towards Non-Line-of-Sight Photography
- URL: http://arxiv.org/abs/2109.07783v1
- Date: Thu, 16 Sep 2021 08:07:13 GMT
- Title: Towards Non-Line-of-Sight Photography
- Authors: Jiayong Peng, Fangzhou Mu, Ji Hyun Nam, Siddeshwar Raghavan, Yin Li,
Andreas Velten, and Zhiwei Xiong
- Abstract summary: Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce indirect reflections from the hidden objects.
Active NLOS imaging systems rely on the capture of the time of flight of light through the scene.
We propose a new problem formulation, called NLOS photography, to specifically address this deficiency.
- Score: 48.491977359971855
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce
indirect reflections from the hidden objects. Active NLOS imaging systems rely
on the capture of the time of flight of light through the scene, and have shown
great promise for the accurate and robust reconstruction of hidden scenes
without the need for specialized scene setups and prior assumptions. Although
existing methods can reconstruct 3D geometries of the hidden scene with
excellent depth resolution, accurately recovering object textures and
appearance at high lateral resolution remains a challenging problem. In this
work, we propose a new problem formulation, called NLOS photography, to
specifically address this deficiency. Rather than performing an intermediate
estimate of the 3D scene geometry, our method follows a data-driven approach
and directly reconstructs 2D images of an NLOS scene that closely resemble the
pictures taken with a conventional camera from the location of the relay wall.
This formulation largely simplifies the challenging reconstruction problem by
bypassing the explicit modeling of 3D geometry, and enables the learning of a
deep model with a relatively small training dataset. The results are NLOS
reconstructions of unprecedented lateral resolution and image quality.
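No code accompanies this listing; as a rough illustration of the data-driven formulation described in the abstract, the sketch below maps a transient measurement volume captured at the relay wall directly to a 2D image with an image-space loss. The architecture, tensor shapes, and loss are assumptions for illustration, not the authors' actual model.

```python
# Minimal sketch of a direct transient-to-image mapping for NLOS photography.
# All architectural choices below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransientToImage(nn.Module):
    def __init__(self):
        super().__init__()
        # 3D convolutions over (time, height, width) compress the temporal axis
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=(2, 1, 1), padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=(2, 1, 1), padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=(2, 1, 1), padding=1), nn.ReLU(),
        )
        # refine in 2D once the time axis has been pooled away
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, x):        # x: (B, 1, T, H, W) photon histograms
        f = self.encoder(x)      # (B, 32, T', H, W)
        f = f.mean(dim=2)        # pool out the temporal axis -> (B, 32, H, W)
        return self.decoder(f)   # (B, 3, H, W) camera-view image

model = TransientToImage()
measurements = torch.rand(2, 1, 256, 64, 64)  # dummy transient volumes
target = torch.rand(2, 3, 64, 64)             # dummy relay-wall camera views
loss = F.l1_loss(model(measurements), target)
loss.backward()
```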
Related papers
- R3D3: Dense 3D Reconstruction of Dynamic Scenes from Multiple Cameras [106.52409577316389]
R3D3 is a multi-camera system for dense 3D reconstruction and ego-motion estimation.
Our approach exploits spatio-temporal information from multiple cameras together with monocular depth refinement.
We show that this design enables a dense, consistent 3D reconstruction of challenging, dynamic outdoor environments.
arXiv Detail & Related papers (2023-08-28T17:13:49Z)
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography [54.36608424943729]
We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z)
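As a toy illustration of the test-time optimization in the "Shakes on a Plane" entry above, the sketch below jointly fits a per-pixel inverse-depth map and per-frame lateral camera shifts to a burst by minimizing photometric error against the first frame. The simple parallax model (flow proportional to shift times inverse depth) and all shapes are simplifying assumptions; the paper fits a richer neural RGB-D representation.

```python
# Toy joint optimization of scene depth and camera motion over a burst.
# The parallax model and initialization are assumptions for illustration.
import torch
import torch.nn.functional as F

T_FRAMES, H, W = 8, 64, 64
burst = torch.rand(T_FRAMES, 3, H, W)  # dummy RAW burst frames

def warp(img, flow):
    """Backward-warp img (3, H, W) by a per-pixel flow (2, H, W) in pixels."""
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    gx = (xs + flow[0]) / (W - 1) * 2 - 1   # normalize to [-1, 1]
    gy = (ys + flow[1]) / (H - 1) * 2 - 1
    grid = torch.stack([gx, gy], dim=-1)[None]   # (1, H, W, 2)
    return F.grid_sample(img[None], grid, align_corners=True)[0]

inv_depth = torch.full((1, H, W), 0.5, requires_grad=True)    # shared depth
shifts = (0.1 * torch.randn(T_FRAMES, 2)).requires_grad_()    # hand tremor
opt = torch.optim.Adam([inv_depth, shifts], lr=1e-2)

for step in range(200):
    loss = 0.0
    for t in range(1, T_FRAMES):
        # small-baseline parallax: pixels move proportionally to inverse depth
        flow = shifts[t].view(2, 1, 1) * inv_depth
        loss = loss + F.l1_loss(warp(burst[t], flow), burst[0])
    opt.zero_grad()
    loss.backward()
    opt.step()
```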
- Multi-View Neural Surface Reconstruction with Structured Light [7.709526244898887]
Three-dimensional (3D) object reconstruction based on differentiable rendering (DR) is an active research topic in computer vision.
We introduce active sensing with structured light (SL) into DR-based multi-view 3D object reconstruction to learn the unknown geometry and appearance of arbitrary scenes along with the camera poses.
Our method achieves high reconstruction accuracy in textureless regions and reduces the effort required for camera pose calibration.
arXiv Detail & Related papers (2022-11-22T03:10:46Z)
- Learning to Recover 3D Scene Shape from a Single Image [98.20106822614392]
We propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image.
We then use 3D point cloud encoders to predict the missing depth shift and focal length that allow us to recover a realistic 3D scene shape.
arXiv Detail & Related papers (2020-12-17T02:35:13Z)
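Once the second stage of the entry above supplies the missing depth shift and focal length, the 3D scene shape follows from standard pinhole unprojection. A minimal sketch, with variable names and a centered principal point assumed:

```python
# Pinhole unprojection: turn a predicted depth map into a 3D point cloud once
# the depth shift and focal length are known. Names are assumptions.
import numpy as np

def unproject(depth, shift, focal):
    """depth: (H, W) network prediction up to an unknown shift."""
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W].astype(np.float64)
    z = depth + shift                    # depth corrected by the predicted shift
    x = (u - W / 2.0) * z / focal        # pinhole model: x = (u - cx) * z / f
    y = (v - H / 2.0) * z / focal
    return np.stack([x, y, z], axis=-1)  # (H, W, 3) point cloud

points = unproject(np.random.rand(480, 640), shift=1.5, focal=500.0)
```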
- Real-time Non-line-of-Sight imaging of dynamic scenes [11.199289771176238]
Non-Line-of-Sight (NLOS) imaging aims at recovering the 3D geometry of objects that are hidden from the direct line of sight.
In the past, NLOS imaging has suffered from weak multi-bounce signals, which limit scene size, capture speed, and reconstruction quality.
We show that SPAD (Single-Photon Avalanche Diode) array detectors with a total of just 28 pixels combined with a specifically extended Phasor Field reconstruction algorithm can reconstruct live real-time videos of non-retro-reflective NLOS scenes.
arXiv Detail & Related papers (2020-10-24T01:40:06Z)
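For context on the entry above: phasor-field methods treat the transient signal on the relay wall as a virtual monochromatic wave and refocus it into the hidden volume with a Rayleigh-Sommerfeld diffraction (RSD) integral. A schematic form of that operator (notation assumed, not taken from the paper):

```latex
% Rayleigh-Sommerfeld refocusing of the virtual phasor-field wave P(x_p),
% recorded on the relay wall S, into a hidden-scene point x_v; lambda is the
% virtual wavelength and k = 2*pi/lambda. A schematic form, not the paper's
% exact operator.
P(\mathbf{x}_v) \;=\; \frac{1}{i\lambda} \int_{S}
    P(\mathbf{x}_p)\,
    \frac{e^{\,ik\,\lvert \mathbf{x}_v - \mathbf{x}_p \rvert}}
         {\lvert \mathbf{x}_v - \mathbf{x}_p \rvert}\, dS,
\qquad
I(\mathbf{x}_v) \;=\; \lvert P(\mathbf{x}_v) \rvert^{2}.
```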
- Reconstruct, Rasterize and Backprop: Dense shape and pose estimation from a single image [14.9851111159799]
This paper presents a new system to obtain dense object reconstructions along with 6-DoF poses from a single image.
We leverage recent advances in differentiable rendering to close the loop with 3D reconstruction in the camera frame.
arXiv Detail & Related papers (2020-04-25T20:53:43Z)
- Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images [59.906948203578544]
We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object.
We first estimate per-view depth maps using a deep multi-view stereo network.
These depth maps are used to coarsely align the different views.
We propose a novel multi-view reflectance estimation network architecture.
arXiv Detail & Related papers (2020-03-27T21:28:54Z)
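The summary above does not spell out how the per-view depth maps coarsely align the views; one standard option is to unproject matched depth pixels to 3D and solve for the rigid transform with the Kabsch algorithm, sketched below. This is a generic technique, not necessarily the paper's procedure.

```python
# Generic depth-based view alignment: given matched 3D points unprojected from
# two views' depth maps, recover the rigid transform (R, t) via Kabsch.
import numpy as np

def kabsch(src, dst):
    """src, dst: (N, 3) matched points. Returns R, t with dst ~ src @ R.T + t."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# sanity check on synthetic data
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.5])
R, t = kabsch(src, dst)
assert np.allclose(R, R_true, atol=1e-6)
```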