Efficient Non-Line-of-Sight Imaging from Transient Sinograms
- URL: http://arxiv.org/abs/2008.02787v1
- Date: Thu, 6 Aug 2020 17:50:50 GMT
- Title: Efficient Non-Line-of-Sight Imaging from Transient Sinograms
- Authors: Mariko Isogawa, Dorian Chan, Ye Yuan, Kris Kitani, Matthew O'Toole
- Abstract summary: Non-line-of-sight (NLOS) imaging techniques use light that diffusely reflects off of visible surfaces (e.g., walls) to see around corners.
One approach involves using pulsed lasers and ultrafast sensors to measure the travel time of multiply scattered light.
We propose a more efficient form of NLOS scanning that reduces both acquisition times and computational requirements.
- Score: 36.154873075911404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-line-of-sight (NLOS) imaging techniques use light that diffusely reflects
off of visible surfaces (e.g., walls) to see around corners. One approach
involves using pulsed lasers and ultrafast sensors to measure the travel time
of multiply scattered light. Unlike existing NLOS techniques that generally
require densely raster scanning points across the entirety of a relay wall, we
explore a more efficient form of NLOS scanning that reduces both acquisition
times and computational requirements. We propose a circular and confocal
non-line-of-sight (C2NLOS) scan that involves illuminating and imaging a common
point, and scanning this point in a circular path along a wall. We observe that
(1) these C2NLOS measurements consist of a superposition of sinusoids, which we
refer to as a transient sinogram, (2) there exist computationally efficient
reconstruction procedures that transform these sinusoidal measurements into 3D
positions of hidden scatterers or NLOS images of hidden objects, and (3)
despite operating on an order of magnitude fewer measurements than previous
approaches, these C2NLOS scans provide sufficient information about the hidden
scene to solve these different NLOS imaging tasks. We show results from both
simulated and real C2NLOS scans.
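As a rough illustration of why a C2NLOS scan produces a "transient sinogram," the sketch below simulates the round-trip travel time to a single hidden point scatterer as the confocal illumination/imaging point moves along a circular path on the wall. This is not the authors' code; the scan radius, time binning, and scatterer position are illustrative assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def transient_sinogram(scatterer, radius=0.5, n_angles=180,
                       n_bins=512, t_max=2e-8):
    """Simulate a confocal circular scan on the wall plane z = 0.

    scatterer : (x, y, z) position of the hidden point, with z > 0
                (i.e., behind the wall plane, in the hidden scene).
    Returns an (n_angles, n_bins) binary sinogram: for each scan
    angle, the time bin containing the returned pulse is set to 1.
    """
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    sino = np.zeros((n_angles, n_bins))
    sx, sy, sz = scatterer
    for i, th in enumerate(thetas):
        # illumination/imaging point on the circular scan path
        px, py = radius * np.cos(th), radius * np.sin(th)
        dist = np.sqrt((px - sx) ** 2 + (py - sy) ** 2 + sz ** 2)
        t = 2.0 * dist / C  # confocal round-trip time
        b = int(t / t_max * n_bins)
        if 0 <= b < n_bins:
            sino[i, b] = 1.0
    return sino

# Hypothetical hidden point 1 m behind the wall, offset 0.2 m in x.
sino = transient_sinogram((0.2, 0.0, 1.0))

# The nonzero time bin traces a quasi-sinusoidal curve in scan angle;
# a full hidden scene superposes one such curve per scatterer, which
# is what the abstract calls a transient sinogram.
peak_bins = sino.argmax(axis=1)
```

The amplitude and phase of the traced curve encode the scatterer's distance and angular position, which is what makes the sinusoid-parameter-fitting reconstruction described in the abstract possible.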
Related papers
- Iterating the Transient Light Transport Matrix for Non-Line-of-Sight Imaging [4.563825593952498]
Time-resolved non-line-of-sight (NLOS) imaging employs an active system that measures part of the Transient Light Transport Matrix (TLTM).
In this work, we demonstrate that the full TLTM can be processed with efficient algorithms to focus and detect our illumination in different parts of the hidden scene.
arXiv Detail & Related papers (2024-12-13T17:35:42Z)
- A Plug-and-Play Algorithm for 3D Video Super-Resolution of Single-Photon LiDAR data [5.378429123269604]
Single-photon avalanche diodes (SPADs) are advanced sensors capable of detecting individual photons and recording their arrival times with picosecond resolution.
We propose a novel computational imaging algorithm to improve the 3D reconstruction of moving scenes from SPAD data.
arXiv Detail & Related papers (2024-12-12T16:33:06Z)
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
- Omni-Line-of-Sight Imaging for Holistic Shape Reconstruction [45.955894009809185]
We introduce Omni-LOS, a neural computational imaging method for conducting holistic shape reconstruction (HSR) of complex objects.
Our method enables new capabilities to reconstruct near-$360^\circ$ surrounding geometry of an object from a single scan spot.
arXiv Detail & Related papers (2023-04-21T07:12:41Z)
- Role of Transients in Two-Bounce Non-Line-of-Sight Imaging [24.7311033930968]
Non-line-of-sight (NLOS) imaging aims to image objects occluded from the camera's field of view using multiply scattered light.
Recent works have demonstrated the feasibility of two-bounce (2B) NLOS imaging by scanning a laser and measuring cast shadows of occluded objects in scenes with two relay surfaces.
arXiv Detail & Related papers (2023-04-03T19:15:21Z)
- Few-shot Non-line-of-sight Imaging with Signal-surface Collaborative Regularization [18.466941045530408]
Non-line-of-sight imaging technique aims to reconstruct targets from multiply reflected light.
We propose a signal-surface collaborative regularization framework that provides noise-robust reconstructions with a minimal number of measurements.
Our approach has great potential in real-time non-line-of-sight imaging applications such as rescue operations and autonomous driving.
arXiv Detail & Related papers (2022-11-21T11:19:20Z)
- When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
arXiv Detail & Related papers (2022-06-28T09:29:55Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and improves efficiency.
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.