FoveaSPAD: Exploiting Depth Priors for Adaptive and Efficient Single-Photon 3D Imaging
- URL: http://arxiv.org/abs/2412.02052v1
- Date: Tue, 03 Dec 2024 00:20:01 GMT
- Title: FoveaSPAD: Exploiting Depth Priors for Adaptive and Efficient Single-Photon 3D Imaging
- Authors: Justin Folden, Atul Ingle, Sanjeev J. Koppal
- Abstract summary: Single-photon avalanche diodes (SPADs) are an emerging image-sensing technology that offers many advantages such as extreme sensitivity and time resolution.
In this paper, we propose new algorithms and sensing policies that improve signal-to-noise ratio (SNR) and increase computing and memory efficiency.
- Score: 7.350208716861244
- Abstract: Fast, efficient, and accurate depth-sensing is important for safety-critical applications such as autonomous vehicles. Direct time-of-flight LiDAR has the potential to fulfill these demands, thanks to its ability to provide high-precision depth measurements at long standoff distances. While conventional LiDAR relies on avalanche photodiodes (APDs), single-photon avalanche diodes (SPADs) are an emerging image-sensing technology that offers many advantages such as extreme sensitivity and time resolution. In this paper, we address two key challenges to widespread adoption of SPAD-based LiDARs: their susceptibility to ambient light and the large amount of raw photon data that must be processed to obtain in-pixel depth estimates. We propose new algorithms and sensing policies that improve signal-to-noise ratio (SNR) and increase computing and memory efficiency for SPAD-based LiDARs. During capture, we use external signals to *foveate*, i.e., guide how the SPAD system estimates scene depths. This foveated approach allows our method to "zoom into" the signal of interest, reducing the amount of raw photon data that needs to be stored and transferred from the SPAD sensor, while also improving resilience to ambient light. We show results both in simulation and with real hardware emulation, with specific implementations achieving a 1548-fold reduction in memory usage, and our algorithms can be applied to newly available and future SPAD arrays.
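The foveation idea in the abstract can be sketched numerically: an external depth prior selects a narrow timing window, and the pixel histograms only the photons that fall inside that window, shrinking per-pixel storage from a full-resolution histogram to a few bins. This is a minimal illustrative sketch, not the paper's implementation; the bin width (100 ps), bin counts (4096 full vs. 16 foveated), and the simulated photon statistics are all assumptions chosen for the demo.

```python
import numpy as np

# Assumed parameters for illustration only: a full-resolution ToF histogram
# would need FULL_BINS bins per pixel; foveation keeps a window of ZOOM_BINS
# bins centered on a coarse depth prior from an external signal.
C = 3e8              # speed of light (m/s)
BIN_WIDTH = 100e-12  # 100 ps time bins (assumption)
FULL_BINS = 4096
ZOOM_BINS = 16

def foveated_histogram(timestamps, prior_depth):
    """Accumulate photon timestamps only inside a window around the prior.

    timestamps: array of photon arrival times (seconds)
    prior_depth: coarse depth estimate (meters) guiding the foveation
    Returns the small in-window histogram and the window's start time.
    """
    t_prior = 2.0 * prior_depth / C                 # round-trip time of the prior
    t_start = t_prior - (ZOOM_BINS / 2) * BIN_WIDTH
    bins = np.floor((timestamps - t_start) / BIN_WIDTH).astype(int)
    keep = (bins >= 0) & (bins < ZOOM_BINS)         # discard out-of-window photons
    hist = np.bincount(bins[keep], minlength=ZOOM_BINS)
    return hist, t_start

def depth_from_histogram(hist, t_start):
    """Peak bin -> round-trip time -> depth in meters."""
    t_peak = t_start + (np.argmax(hist) + 0.5) * BIN_WIDTH
    return C * t_peak / 2.0

# Simulated pixel: true depth 10.0 m, prior 9.9 m, plus ambient-light photons
# spread uniformly over the full histogram range.
rng = np.random.default_rng(0)
true_t = 2.0 * 10.0 / C
signal = rng.normal(true_t, 0.3 * BIN_WIDTH, size=200)
ambient = rng.uniform(0, FULL_BINS * BIN_WIDTH, size=2000)
hist, t0 = foveated_histogram(np.concatenate([signal, ambient]), prior_depth=9.9)
est = depth_from_histogram(hist, t0)
print(f"memory reduction: {FULL_BINS // ZOOM_BINS}x, estimated depth: {est:.3f} m")
```

With these toy numbers the illustrative memory reduction is 256-fold; the 1548-fold figure reported in the abstract comes from the paper's specific implementation, not from this sketch. Discarding out-of-window photons is also what provides the ambient-light resilience: most background detections never enter the histogram at all.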
Related papers
- A Plug-and-Play Algorithm for 3D Video Super-Resolution of Single-Photon LiDAR data [5.378429123269604]
Single-photon avalanche diodes (SPADs) are advanced sensors capable of detecting individual photons and recording their arrival times with picosecond resolution.
We propose a novel computational imaging algorithm to improve the 3D reconstruction of moving scenes from SPAD data.
arXiv Detail & Related papers (2024-12-12T16:33:06Z)
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
arXiv Detail & Related papers (2024-10-30T17:30:35Z)
- Photon Inhibition for Energy-Efficient Single-Photon Imaging [19.816230454712585]
Single-photon cameras (SPCs) are emerging as sensors of choice for challenging imaging applications.
Yet, single-photon sensitivity in SPADs comes at a cost -- each photon detection consumes more energy than that of a CMOS camera.
We propose a computational-imaging approach called *photon inhibition* to address this challenge.
arXiv Detail & Related papers (2024-09-26T23:19:44Z)
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy in which a neural network is trained to estimate a dense, complete depth map from polarization data together with the depth map of a given sensor.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
- Improving Lens Flare Removal with General Purpose Pipeline and Multiple Light Sources Recovery [69.71080926778413]
Flare artifacts can degrade image visual quality and downstream computer vision tasks.
Current methods do not consider automatic exposure and tone mapping in the image signal processing (ISP) pipeline.
We propose a solution that improves lens flare removal by revisiting the ISP and designing a more reliable light-source recovery strategy.
arXiv Detail & Related papers (2023-08-31T04:58:17Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Deep Learning and Image Super-Resolution-Guided Beam and Power Allocation for mmWave Networks [80.37827344656048]
We develop a deep learning (DL)-guided hybrid beam and power allocation approach for millimeter-wave (mmWave) networks.
We exploit the synergy of supervised learning and super-resolution technology to enable low-overhead beam- and power allocation.
arXiv Detail & Related papers (2023-05-08T05:40:54Z)
- Video super-resolution for single-photon LIDAR [0.0]
3D Time-of-Flight (ToF) image sensors are used widely in applications such as self-driving cars, Augmented Reality (AR) and robotics.
In this paper, we use synthetic depth sequences to train a 3D Convolutional Neural Network (CNN) for denoising and upscaling (x4) depth data.
With GPU acceleration, frames are processed at >30 frames per second, making the approach suitable for low-latency imaging, as required for obstacle avoidance.
arXiv Detail & Related papers (2022-10-19T11:33:29Z)
- Simulating single-photon detector array sensors for depth imaging [2.497104612216142]
Single-Photon Avalanche Detector (SPAD) arrays are a rapidly emerging technology.
We develop a robust yet simple numerical procedure that establishes the fundamental limits to depth imaging with SPAD arrays.
arXiv Detail & Related papers (2022-10-07T13:23:34Z)
- When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
arXiv Detail & Related papers (2022-06-28T09:29:55Z)
- High-speed object detection with a single-photon time-of-flight image sensor [2.648554238948439]
We present results from a portable SPAD camera system that outputs 16-bin photon timing histograms with 64x32 spatial resolution.
The results are relevant for safety-critical computer vision applications which would benefit from faster than human reaction times.
arXiv Detail & Related papers (2021-07-28T14:53:44Z)
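The 16-bin, 64x32 histogram output described in the last entry suggests how a per-pixel depth readout works in these histogram-based SPAD systems: take the peak time bin of each pixel's histogram and convert its round-trip time to distance. The sketch below assumes a 1 ns bin width and synthetic Poisson-distributed counts, neither of which comes from the paper; the optional per-bin background subtraction is likewise an illustrative assumption.

```python
import numpy as np

# Illustrative sketch: turn a stack of per-pixel photon timing histograms
# (16 time bins at 64x32 pixels, as in the SPAD camera entry above) into a
# coarse depth map. Bin width and counts are assumptions for the demo.
C = 3e8          # speed of light (m/s)
BIN_WIDTH = 1e-9  # 1 ns bins (assumption; real bin width is hardware-specific)
H, W, BINS = 32, 64, 16

def depth_map(histograms, ambient_estimate=None):
    """Peak time bin per pixel -> round-trip time -> depth in meters."""
    if ambient_estimate is not None:
        histograms = histograms - ambient_estimate  # optional per-bin background
    peak_bin = histograms.argmax(axis=-1)           # (H, W) index of peak bin
    t_peak = (peak_bin + 0.5) * BIN_WIDTH           # bin center as round-trip time
    return C * t_peak / 2.0

# Synthetic cube: uniform ambient Poisson counts plus a strong signal peak
# at bin 10 in every pixel.
rng = np.random.default_rng(1)
cube = rng.poisson(2.0, size=(H, W, BINS)).astype(float)
cube[..., 10] += 50.0
depths = depth_map(cube)
```

The vectorized `argmax` over the last axis processes all 2048 pixels at once, which is what makes frame-rate depth readout from such compact histograms cheap; real systems may instead fit the peak sub-bin or apply a matched filter for finer precision.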
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.