Simulating single-photon detector array sensors for depth imaging
- URL: http://arxiv.org/abs/2210.05644v1
- Date: Fri, 7 Oct 2022 13:23:34 GMT
- Title: Simulating single-photon detector array sensors for depth imaging
- Authors: Stirling Scholes, Germán Mora-Martín, Feng Zhu, Istvan Gyongy,
Phil Soan, and Jonathan Leach
- Abstract summary: Single-Photon Avalanche Detector (SPAD) arrays are a rapidly emerging technology.
We establish a robust yet simple numerical procedure that determines the fundamental limits to depth imaging with SPAD arrays.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single-Photon Avalanche Detector (SPAD) arrays are a rapidly emerging
technology. These multi-pixel sensors have single-photon sensitivity and
picosecond temporal resolution, so they can rapidly generate depth images
with millimeter precision. Such sensors are a key enabling technology for
future autonomous systems, providing guidance and situational awareness.
However, to fully exploit the capabilities of SPAD array sensors, it is crucial
to establish the quality of depth images they are able to generate in a wide
range of scenarios. Given a particular optical system and a finite image
acquisition time, what is the best-case depth resolution and what are realistic
images generated by SPAD arrays? In this work, we establish a robust yet simple
numerical procedure that rapidly determines the fundamental limits to depth
imaging with SPAD arrays under real-world conditions. Our approach accurately
generates realistic depth images in a wide range of scenarios, allowing the
performance of an optical depth imaging system to be established without the
need for costly and laborious field testing. This procedure has applications in
object detection and tracking for autonomous systems and could be easily
extended to systems for underwater imaging or for imaging around corners.
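The paper does not spell out its numerical procedure here, but the physics it rests on, Poisson photon statistics, detector timing jitter, and time-correlated photon counting, can be sketched in a few lines. The following single-pixel Monte Carlo is an illustrative sketch only: every parameter (pulse count, photon rates, 50 ps bins, 100 ps jitter) is a hypothetical value chosen for the example, not taken from the paper.

```python
import numpy as np

# Illustrative single-pixel SPAD depth-estimation sketch.
# All parameters below are hypothetical, not from the paper.
rng = np.random.default_rng(0)

C = 3e8               # speed of light, m/s
BIN_W = 50e-12        # histogram bin width: 50 ps
N_BINS = 2000         # 100 ns time-of-flight window (~15 m range)
true_depth = 7.5      # metres
t_flight = 2 * true_depth / C          # round-trip time of flight

n_pulses = 1000       # laser pulses in the acquisition window
sig_rate = 0.1        # mean signal photons detected per pulse
bg_rate = 0.002       # mean background photons per bin per pulse
jitter = 100e-12      # detector timing jitter (std dev), 100 ps

# Signal: Poisson number of detections, Gaussian timing jitter per photon.
n_sig = rng.poisson(sig_rate * n_pulses)
sig_times = t_flight + rng.normal(0.0, jitter, n_sig)
hist, _ = np.histogram(sig_times, bins=N_BINS, range=(0.0, N_BINS * BIN_W))

# Background: uniform Poisson counts added to every bin.
hist = hist + rng.poisson(bg_rate * n_pulses, N_BINS)

# Depth estimate: centre of the peak bin, converted to one-way distance.
peak_bin = np.argmax(hist)
est_depth = (peak_bin + 0.5) * BIN_W * C / 2
print(f"estimated depth: {est_depth:.3f} m (true: {true_depth} m)")
```

The conversion in the last step is the usual time-of-flight relation, depth = c·t/2, which is also why picosecond timing resolution translates to millimeter-scale depth precision (50 ps of round-trip time corresponds to 7.5 mm of depth). Sweeping the photon rates and acquisition time in a model like this is one way to explore best-case depth resolution without field testing.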
Related papers
- MAROON: A Framework for the Joint Characterization of Near-Field High-Resolution Radar and Optical Depth Imaging Techniques [4.816237933371206]
We take on the unique challenge of characterizing depth imagers from both the optical and radio-frequency domains.
We provide a comprehensive evaluation of their depth measurements with respect to distinct object materials, geometries, and object-to-sensor distances.
All object measurements will be made public in the form of a multimodal dataset called MAROON.
arXiv Detail & Related papers (2024-11-01T11:53:10Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Large-scale single-photon imaging [10.210597636941937]
Single-photon avalanche diode (SPAD) arrays have been widely applied in fields such as fluorescence lifetime imaging and quantum computing.
However, large-scale high-fidelity single-photon imaging remains a major challenge, owing to the complexity of hardware manufacturing and the heavy noise of SPAD arrays.
We introduce deep learning into SPAD imaging, enabling super-resolution single-photon imaging with over an order of magnitude improvement in resolution, together with significant enhancement of bit depth and imaging quality.
arXiv Detail & Related papers (2022-12-28T00:38:04Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach in simulated autonomous driving sequences and real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
- Thermal Image Processing via Physics-Inspired Deep Networks [21.094006629684376]
DeepIR combines physically accurate sensor modeling with deep network-based image representation.
DeepIR requires neither training data nor periodic ground-truth calibration with a known black body target.
Simulated and real data experiments demonstrate that DeepIR can perform high-quality non-uniformity correction with as few as three images.
arXiv Detail & Related papers (2021-08-18T04:57:48Z)
- Removing Diffraction Image Artifacts in Under-Display Camera via Dynamic Skip Connection Network [80.67717076541956]
Under-Display Camera (UDC) systems provide a true bezel-less and notch-free viewing experience on smartphones.
In a typical UDC system, the pixel array attenuates and diffracts the incident light on the camera, resulting in significant image quality degradation.
In this work, we aim to analyze and tackle the aforementioned degradation problems.
arXiv Detail & Related papers (2021-04-19T18:41:45Z)
- Robust super-resolution depth imaging via a multi-feature fusion deep network [2.351601888896043]
Light detection and ranging (LIDAR) via single-photon avalanche diode (SPAD) arrays is an emerging technology that enables the acquisition of depth images at high frame rates.
We develop a deep network built specifically to take advantage of the multiple features that can be extracted from a camera's histogram data.
We apply the network to a range of 3D data, demonstrating denoising and a four-fold resolution enhancement of depth.
arXiv Detail & Related papers (2020-11-20T14:24:12Z) - Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present the first HS-D dataset, acquired with a benchtop HS-D imager we built.
arXiv Detail & Related papers (2020-09-01T14:19:35Z) - Quanta Burst Photography [15.722085082004934]
Single-photon avalanche diodes (SPADs) are an emerging sensor technology capable of detecting individual incident photons.
We present quanta burst photography, a computational photography technique that leverages single-photon cameras (SPCs) as passive imaging devices for photography in challenging conditions.
arXiv Detail & Related papers (2020-06-21T16:20:29Z) - Depth Sensing Beyond LiDAR Range [84.19507822574568]
We propose a novel three-camera system that utilizes small field-of-view cameras.
Our system, along with our novel algorithm for computing metric depth, does not require full pre-calibration.
It can output dense depth maps with practically acceptable accuracy for scenes and objects at long distances.
arXiv Detail & Related papers (2020-04-07T00:09:51Z) - Unlimited Resolution Image Generation with R2D2-GANs [69.90258455164513]
We present a novel simulation technique for generating high quality images of any predefined resolution.
This method can be used to synthesize sonar scans of size equivalent to those collected during a full-length mission.
The data produced is continuous and realistic-looking, and can be generated at least twice as fast as real acquisition.
arXiv Detail & Related papers (2020-03-02T17:49:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.