Single-Photon Structured Light
- URL: http://arxiv.org/abs/2204.05300v1
- Date: Mon, 11 Apr 2022 17:57:04 GMT
- Title: Single-Photon Structured Light
- Authors: Varun Sundar, Sizhuo Ma, Aswin C. Sankaranarayanan and Mohit Gupta
- Abstract summary: "Single-Photon Structured Light" works by sensing binary images that indicate the presence or absence of photon arrivals during each exposure.
We develop novel temporal sequences using error correction codes that are designed to be robust to short-range effects like projector and camera defocus.
Our lab prototype is capable of 3D imaging in challenging scenarios involving objects with extremely low albedo or undergoing fast motion.
- Score: 31.614032717665832
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a novel structured light technique that uses Single Photon
Avalanche Diode (SPAD) arrays to enable 3D scanning at high frame rates and
low-light levels. This technique, called "Single-Photon Structured Light",
works by sensing binary images that indicate the presence or absence of photon
arrivals during each exposure; the SPAD array is used in conjunction with a
high-speed binary projector, with both devices operated at speeds as high as
20 kHz. The binary images that we acquire are heavily influenced by photon
noise and are easily corrupted by ambient sources of light. To address this, we
develop novel temporal sequences using error correction codes that are designed
to be robust to short-range effects like projector and camera defocus as well
as resolution mismatch between the two devices. Our lab prototype is capable of
3D imaging in challenging scenarios involving objects with extremely low albedo
or undergoing fast motion, as well as scenes under strong ambient illumination.
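The per-pixel decoding step can be illustrated with a minimal sketch. This is not the paper's actual coding scheme (which uses purpose-built error correction codes robust to defocus); it assumes a simple repetition code with majority voting over repeated Gray-code patterns, decoded per pixel to a projector column index, and all function names are illustrative:

```python
# Illustrative sketch of decoding noisy binary structured-light frames.
# Assumes a repetition code (each Gray-code bit-plane is projected
# `repeats` times) rather than the paper's actual error correction codes.

def majority_vote(bits):
    """Collapse repeated noisy observations of one code bit."""
    return 1 if sum(bits) * 2 > len(bits) else 0

def gray_to_binary(gray_bits):
    """Invert a reflected Gray code (MSB first) to a plain integer."""
    value = 0
    for g in gray_bits:
        # Next decoded bit is this Gray bit XOR the previous decoded bit.
        value = (value << 1) | (g ^ (value & 1))
    return value

def decode_pixel(observations, n_bits, repeats):
    """observations: n_bits * repeats binary samples for one pixel,
    ordered bit0 x repeats, bit1 x repeats, ... Returns the decoded
    projector column index for that pixel."""
    code = []
    for i in range(n_bits):
        chunk = observations[i * repeats:(i + 1) * repeats]
        code.append(majority_vote(chunk))
    return gray_to_binary(code)
```

A real system would replace the repetition code with stronger codes and handle projector-camera resolution mismatch, but the per-pixel structure of the decode is the same: vote, then invert the code.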
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
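As a point of reference for what learned methods like bit2bit improve upon, a closed-form baseline recovers per-pixel flux from a stack of binary frames: under a Poisson arrival model, P(bit = 1) = 1 - exp(-lam), so the maximum-likelihood estimate from the temporal mean p is lam = -ln(1 - p). A minimal sketch (plain Python over flat pixel lists; `flux_from_binary_frames` is an illustrative name, not from the paper):

```python
import math

def flux_from_binary_frames(frames):
    """Per-pixel maximum-likelihood photon flux from binary frames.
    frames: list of frames, each a flat list of 0/1 pixel values.
    Assumes Poisson arrivals with no dark counts: P(1) = 1 - exp(-lam),
    hence lam_hat = -ln(1 - mean(bit))."""
    n = len(frames)
    n_pix = len(frames[0])
    flux = []
    for i in range(n_pix):
        p = sum(f[i] for f in frames) / n
        p = min(p, 1.0 - 1e-6)  # clamp fully saturated pixels
        flux.append(-math.log(1.0 - p))
    return flux
```

This simple average-and-invert estimator ignores motion and spatial correlations, which is exactly the gap that self-supervised reconstruction methods target.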
arXiv Detail & Related papers (2024-10-30T17:30:35Z) - Flying with Photons: Rendering Novel Views of Propagating Light [37.06220870989172]
We present an imaging and neural rendering technique that seeks to synthesize videos of light propagating through a scene from novel, moving camera viewpoints.
Our approach relies on a new ultrafast imaging setup to capture a first-of-its-kind, multi-viewpoint video dataset with picosecond-level temporal resolution.
arXiv Detail & Related papers (2024-04-09T17:48:52Z) - 3D-2D Neural Nets for Phase Retrieval in Noisy Interferometric Imaging [0.0]
We introduce a 3D-2D Phase Retrieval U-Net (PRUNe) that takes noisy and randomly phase-shifted interferograms as inputs, and outputs a single 2D phase image.
A 3D downsampling convolutional encoder captures correlations within and between frames to produce a 2D latent space, which is upsampled by a 2D decoder into a phase image.
We find PRUNe reconstructions consistently show more accurate and smooth reconstructions, with a 2.5-4x lower mean squared error at multiple signal-to-noise ratios for interferograms with low (
arXiv Detail & Related papers (2024-02-08T21:19:16Z) - Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging [36.208109063579066]
For 3D imaging, light field cameras are typically single-shot, and they suffer heavily from low spatial resolution and depth accuracy.
We propose a phase guided light field algorithm to significantly improve both the spatial and depth resolutions for off-the-shelf light field cameras.
arXiv Detail & Related papers (2023-11-17T15:08:15Z) - Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography [54.36608424943729]
We show that in a "long-burst" of forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z) - Video super-resolution for single-photon LIDAR [0.0]
3D Time-of-Flight (ToF) image sensors are used widely in applications such as self-driving cars, Augmented Reality (AR) and robotics.
In this paper, we use synthetic depth sequences to train a 3D Convolutional Neural Network (CNN) for denoising and upscaling (x4) depth data.
With GPU acceleration, frames are processed at >30 frames per second, making the approach suitable for low-latency imaging, as required for obstacle avoidance.
arXiv Detail & Related papers (2022-10-19T11:33:29Z) - Low-Light Video Enhancement with Synthetic Event Guidance [188.7256236851872]
We use synthetic events from multiple frames to guide the enhancement and restoration of low-light videos.
Our method outperforms existing low-light video or single image enhancement approaches on both synthetic and real LLVE datasets.
arXiv Detail & Related papers (2022-08-23T14:58:29Z) - ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z) - TimeLens: Event-based Video Frame Interpolation [54.28139783383213]
We introduce Time Lens, a novel method that leverages the advantages of both synthesis-based and flow-based approaches.
We show an up to 5.21 dB improvement in terms of PSNR over state-of-the-art frame-based and event-based methods.
arXiv Detail & Related papers (2021-06-14T10:33:47Z) - Quanta Burst Photography [15.722085082004934]
Single-photon avalanche diodes (SPADs) are an emerging sensor technology capable of detecting individual incident photons.
We present quanta burst photography, a computational photography technique that leverages single-photon cameras (SPCs) as passive imaging devices for photography in challenging conditions.
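The core idea of burst merging can be sketched in a toy 1D form: align each binary frame to a reference by an integer shift, then sum the aligned bit-planes so that shot noise averages down. This is a drastic simplification of quanta burst photography, which performs hierarchical, subpixel alignment on 2D bit-planes; the names and the exhaustive-shift search below are illustrative only:

```python
# Toy 1D sketch of burst merging for binary (quanta) frames.

def best_shift(ref, frame, max_shift):
    """Integer shift s maximizing the overlap sum ref[i] * frame[i - s]
    (exhaustive search, for illustration only)."""
    n = len(ref)
    best, best_score = 0, -1
    for s in range(-max_shift, max_shift + 1):
        score = sum(ref[i] * frame[i - s]
                    for i in range(max(0, s), min(n, n + s)))
        if score > best_score:
            best, best_score = s, score
    return best

def merge_burst(frames, max_shift=2):
    """Align each binary frame to the first, then sum aligned bit-planes.
    Summing N aligned frames improves SNR roughly as sqrt(N)."""
    ref = frames[0]
    n = len(ref)
    acc = [0] * n
    for f in frames:
        s = best_shift(ref, f, max_shift)
        for i in range(n):
            j = i - s
            if 0 <= j < n:
                acc[i] += f[j]
    return acc
```

The real technique operates on millions of bit-planes with robust motion estimation; the sketch only conveys why aligned summation of binary frames recovers intensity in low light.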
arXiv Detail & Related papers (2020-06-21T16:20:29Z) - Lightweight Multi-View 3D Pose Estimation through Camera-Disentangled Representation [57.11299763566534]
We present a solution to recover 3D pose from multi-view images captured with spatially calibrated cameras.
We exploit 3D geometry to fuse input images into a unified latent representation of pose, which is disentangled from camera view-points.
Our architecture then conditions the learned representation on camera projection operators to produce accurate per-view 2D detections.
arXiv Detail & Related papers (2020-04-05T12:52:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.