Role of Transients in Two-Bounce Non-Line-of-Sight Imaging
- URL: http://arxiv.org/abs/2304.01308v1
- Date: Mon, 3 Apr 2023 19:15:21 GMT
- Title: Role of Transients in Two-Bounce Non-Line-of-Sight Imaging
- Authors: Siddharth Somasundaram, Akshat Dave, Connor Henley, Ashok
Veeraraghavan, Ramesh Raskar
- Abstract summary: The goal of non-line-of-sight (NLOS) imaging is to image objects occluded from the camera's field of view using multiply scattered light.
Recent works have demonstrated the feasibility of two-bounce (2B) NLOS imaging by scanning a laser and measuring cast shadows of occluded objects in scenes with two relay surfaces.
- Score: 24.7311033930968
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The goal of non-line-of-sight (NLOS) imaging is to image objects occluded
from the camera's field of view using multiply scattered light. Recent works
have demonstrated the feasibility of two-bounce (2B) NLOS imaging by scanning a
laser and measuring cast shadows of occluded objects in scenes with two relay
surfaces. In this work, we study the role of time-of-flight (ToF) measurements,
i.e., transients, in 2B-NLOS under multiplexed illumination. Specifically, we
study how ToF information can reduce the number of measurements and spatial
resolution needed for shape reconstruction. We present our findings with
respect to tradeoffs in (1) temporal resolution, (2) spatial resolution, and
(3) number of image captures by studying SNR and recoverability as functions of
system parameters. This leads to a formal definition of the mathematical
constraints for 2B lidar. We believe that our work lays an analytical
groundwork for design of future NLOS imaging systems, especially as ToF sensors
become increasingly ubiquitous.
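As a toy illustration of why transients help (not the paper's model; the sizes, random mixing matrix, and bin assignment below are all assumed), time gating splits each multiplexed capture into per-bin equations, so far fewer captures can yield a full-rank system:

```python
# Toy illustration (not the paper's model): why time-of-flight bins
# improve recoverability under multiplexed illumination. Sizes, the
# random mixing matrix A, and the bin assignment are all assumed.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_caps, T = 64, 16, 8      # hidden voxels, image captures, time bins

# Steady-state multiplexed captures: y = A x mixes all voxels per capture.
A = (rng.random((n_caps, n_vox)) < 0.5).astype(float)

# With transients, each capture separates by total path length: entry
# (i, j) lands in one of T time bins, expanding each row into T rows.
bins = rng.integers(0, T, size=(n_caps, n_vox))
A_tof = np.zeros((n_caps * T, n_vox))
for i in range(n_caps):
    for j in range(n_vox):
        A_tof[i * T + bins[i, j], j] = A[i, j]

print("rank without ToF:", np.linalg.matrix_rank(A))      # at most n_caps
print("rank with ToF:   ", np.linalg.matrix_rank(A_tof))  # typically n_vox
```

With 16 captures and 8 bins, the gated system is typically full rank over 64 unknowns, mirroring the tradeoff between capture count and temporal resolution that the abstract studies.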
Related papers
- Transientangelo: Few-Viewpoint Surface Reconstruction Using Single-Photon Lidar [8.464054039931245]
Lidar captures 3D scene geometry by emitting pulses of light to a target and recording the speed-of-light time delay of the reflected light.
However, conventional lidar systems do not output the raw, captured waveforms of backscattered light.
We develop new regularization strategies that improve robustness to photon noise, enabling accurate surface reconstruction with as few as 10 photons per pixel.
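A minimal sketch of the time-of-flight principle the paper builds on (bin width, photon counts, and noise level are assumed; real single-photon pipelines are far more involved):

```python
# Minimal sketch of pulsed-lidar depth from a photon-count histogram
# (assumed bin width, counts, and noise; not the paper's pipeline).
import numpy as np

C = 299_792_458.0            # speed of light, m/s
BIN_W = 50e-12               # 50 ps histogram bins (assumed)
rng = np.random.default_rng(1)

true_depth = 1.5                              # metres
t_return = 2 * true_depth / C                 # round-trip travel time
ideal = np.zeros(256)
ideal[int(t_return / BIN_W)] = 10             # ~10 signal photons per pixel
hist = rng.poisson(ideal + 0.05)              # Poisson shot noise + dark counts

t_hat = np.argmax(hist) * BIN_W               # peak bin -> travel time
print("estimated depth:", C * t_hat / 2, "m") # d = c * t / 2
```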
arXiv Detail & Related papers (2024-08-22T08:12:09Z)
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
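A hedged sketch of the kind of identity being formalized (standard diffusion notation assumed, not copied from the paper): with $x_t = \alpha_t x + \sigma_t \epsilon$ and the denoised estimate $\hat{x} = (x_t - \sigma_t\,\epsilon_\phi(x_t))/\alpha_t$, the SDS gradient rearranges into the gradient of a weighted L2 reconstruction loss toward $\hat{x}$:

$$\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\!\left[w(t)\,\bigl(\epsilon_\phi(x_t) - \epsilon\bigr)\,\tfrac{\partial x}{\partial \theta}\right] = \mathbb{E}_{t,\epsilon}\!\left[\tfrac{w(t)\,\alpha_t}{\sigma_t}\,\bigl(x - \hat{x}\bigr)\,\tfrac{\partial x}{\partial \theta}\right]$$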
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
- SSIF: Learning Continuous Image Representation for Spatial-Spectral Super-Resolution [73.46167948298041]
We propose a neural implicit model that represents an image as a function of both continuous pixel coordinates in the spatial domain and continuous wavelengths in the spectral domain.
We show that SSIF generalizes well to both unseen spatial resolutions and spectral resolutions.
It can generate high-resolution images that improve the performance of downstream tasks by 1.7%-7%.
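The core idea lends itself to a small sketch (architecture and normalization assumed, not SSIF itself): a network queried at arbitrary (x, y, λ) decouples the representation from any fixed spatial or spectral grid.

```python
# Minimal sketch (assumed architecture, not SSIF itself): an implicit
# image as a function of continuous pixel coordinates and wavelength.
import torch
import torch.nn as nn

class SpatialSpectralField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),   # input: (x, y, lambda)
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),              # output: intensity
        )

    def forward(self, xy, lam):
        # xy: (N, 2) in [0, 1]^2; lam: (N, 1) normalized wavelength
        return self.net(torch.cat([xy, lam], dim=-1))

field = SpatialSpectralField()
out = field(torch.rand(5, 2), torch.rand(5, 1))  # query any resolution/band
print(out.shape)  # torch.Size([5, 1])
```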
arXiv Detail & Related papers (2023-09-30T15:23:30Z)
- Omni-Line-of-Sight Imaging for Holistic Shape Reconstruction [45.955894009809185]
We introduce Omni-LOS, a neural computational imaging method for conducting holistic shape reconstruction (HSR) of complex objects.
Our method enables new capabilities to reconstruct near-$360^\circ$ surrounding geometry of an object from a single scan spot.
arXiv Detail & Related papers (2023-04-21T07:12:41Z)
- Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography [54.36608424943729]
We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
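A back-of-envelope check of why hand tremor suffices, using the standard stereo relation depth = f·b/disparity (all numbers assumed, not figures from the paper):

```python
# Back-of-envelope parallax check (all numbers assumed, not the paper's):
# even a ~1 mm baseline from hand tremor yields measurable disparity.
f_px = 3000.0        # focal length in pixels for a 12 MP phone sensor (assumed)
baseline_m = 0.001   # ~1 mm of hand motion across the burst (assumed)
disparity_px = 1.5   # observed pixel shift of a scene point (assumed)

depth_m = f_px * baseline_m / disparity_px   # standard stereo relation
print(depth_m, "m")                          # 2.0 m
```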
arXiv Detail & Related papers (2022-12-22T18:54:34Z)
- Frequency-Aware Self-Supervised Monocular Depth Estimation [41.97188738587212]
We present two versatile methods to enhance self-supervised monocular depth estimation models.
The high generalizability of our methods is achieved by solving fundamental and ubiquitous problems in the photometric loss function.
We are the first to propose blurring images to improve depth estimators, supported by an interpretable analysis.
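The blurring idea admits a compact sketch (a generic implementation under assumed choices, not the paper's exact formulation):

```python
# Minimal sketch (assumed, not the paper's exact losses): low-pass
# filtering both views before the photometric loss suppresses the
# high-frequency content that destabilizes self-supervised training.
import torch
import torch.nn.functional as F

def gaussian_kernel(k=5, sigma=1.5):
    ax = torch.arange(k, dtype=torch.float32) - (k - 1) / 2
    g = torch.exp(-ax**2 / (2 * sigma**2))
    g = g / g.sum()
    return (g[:, None] * g[None, :]).view(1, 1, k, k)

def blurred_photometric_loss(a, b, k=5, sigma=1.5):
    # a, b: (N, C, H, W) target and warped source images
    w = gaussian_kernel(k, sigma).repeat(a.shape[1], 1, 1, 1)
    blur = lambda x: F.conv2d(x, w, padding=k // 2, groups=x.shape[1])
    return (blur(a) - blur(b)).abs().mean()   # L1 on blurred images
```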
arXiv Detail & Related papers (2022-10-11T14:30:26Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
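A minimal sketch of occupancy-grid-guided ray sampling (interfaces and weights assumed; not CLONeR's actual implementation):

```python
# Minimal sketch (assumed, not CLONeR's code): a coarse occupancy grid
# concentrates ray samples near likely surfaces instead of sampling
# uniformly along the whole ray.
import numpy as np

def occupancy_guided_samples(origin, direction, occ_grid, voxel_size,
                             n_samples=32, t_max=50.0, n_coarse=256):
    # origin: (3,) ray origin; direction: (3,) unit ray direction
    ts = np.linspace(0.0, t_max, n_coarse)
    pts = origin + ts[:, None] * direction            # points along the ray
    idx = np.floor(pts / voxel_size).astype(int)
    idx = np.clip(idx, 0, np.array(occ_grid.shape) - 1)
    occupied = occ_grid[tuple(idx.T)] > 0.5
    w = np.where(occupied, 1.0, 0.01)                 # favor occupied voxels
    return np.sort(np.random.choice(ts, n_samples, replace=False, p=w / w.sum()))
```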
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- LIF-Seg: LiDAR and Camera Image Fusion for 3D LiDAR Semantic Segmentation [78.74202673902303]
We propose a coarse-to-fine LiDAR and camera fusion-based network (termed LIF-Seg) for LiDAR segmentation.
The proposed method fully utilizes the contextual information of images and introduces a simple but effective early-fusion strategy.
The cooperation of these two components leads to effective camera-LiDAR fusion.
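One common form of early fusion can be sketched as "point painting" (a generic sketch under assumed conventions, not necessarily LIF-Seg's exact strategy):

```python
# Minimal sketch (assumed, not LIF-Seg itself) of an early-fusion step:
# paint each LiDAR point with the camera pixel it projects onto, so the
# segmentation network sees image context alongside geometry.
import numpy as np

def paint_points(points, image, K, T_cam_from_lidar):
    # points: (N, 3) in LiDAR frame; image: (H, W, C);
    # K: (3, 3) intrinsics; T_cam_from_lidar: (4, 4) extrinsics.
    # Assumes all points lie in front of the camera (z > 0).
    p_cam = (T_cam_from_lidar @ np.c_[points, np.ones(len(points))].T)[:3]
    uv = (K @ p_cam)[:2] / p_cam[2]
    u = np.clip(uv[0].astype(int), 0, image.shape[1] - 1)
    v = np.clip(uv[1].astype(int), 0, image.shape[0] - 1)
    return np.hstack([points, image[v, u]])   # (N, 3 + C) fused features
```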
arXiv Detail & Related papers (2021-08-17T08:53:11Z)
- Spatially-Variant CNN-based Point Spread Function Estimation for Blind Deconvolution and Depth Estimation in Optical Microscopy [6.09170287691728]
We present a method that improves the resolution of light microscopy images of thin, yet non-flat objects.
We estimate the parameters of a spatially-variant point-spread-function (PSF) model using a convolutional neural network (CNN).
Our method recovers PSF parameters from the image itself with up to a squared Pearson correlation coefficient of 0.99 in ideal conditions.
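A sketch of the kind of PSF parameterization involved (an assumed isotropic-Gaussian stand-in; the paper's actual model may differ):

```python
# Minimal sketch (assumed parameterization, not the paper's model): an
# isotropic Gaussian PSF whose per-patch sigma is the kind of parameter
# a CNN could regress from the image itself.
import numpy as np

def gaussian_psf(sigma, k=15):
    ax = np.arange(k) - k // 2
    g = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / (2 * sigma**2))
    return g / g.sum()                 # normalized k x k blur kernel

# A spatially-variant model assigns a different sigma to each image patch;
# each patch is then deconvolved with its own estimated kernel.
psf_near, psf_far = gaussian_psf(1.0), gaussian_psf(3.0)
```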
arXiv Detail & Related papers (2020-10-08T14:20:16Z)
- Efficient Non-Line-of-Sight Imaging from Transient Sinograms [36.154873075911404]
Non-line-of-sight (NLOS) imaging techniques use light that diffusely reflects off of visible surfaces (e.g., walls) to see around corners.
One approach involves using pulsed lasers and ultrafast sensors to measure the travel time of multiply scattered light.
We propose a more efficient form of NLOS scanning that reduces both acquisition times and computational requirements.
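The travel-time constraint underlying such methods admits a small sketch (geometry conventions assumed; device-to-wall path legs taken as pre-subtracted):

```python
# Minimal sketch (assumed geometry, not the paper's method): a photon
# detected at time t constrains the hidden point p to an ellipsoid whose
# foci are the illuminated wall spot l and the observed wall spot s.
# Wall-to-device path legs are assumed already subtracted from t.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def on_ellipsoid(p, laser_spot, sensor_spot, t, tol=1e-3):
    path = np.linalg.norm(p - laser_spot) + np.linalg.norm(p - sensor_spot)
    return abs(path - C * t) < tol   # p consistent with the measured bin?
```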
arXiv Detail & Related papers (2020-08-06T17:50:50Z)
- Unlimited Resolution Image Generation with R2D2-GANs [69.90258455164513]
We present a novel simulation technique for generating high quality images of any predefined resolution.
This method can be used to synthesize sonar scans of size equivalent to those collected during a full-length mission.
The data produced is continuous, realistic-looking, and can be generated at least two times faster than the real speed of acquisition.
arXiv Detail & Related papers (2020-03-02T17:49:32Z)