Large-scale single-photon imaging
- URL: http://arxiv.org/abs/2212.13654v1
- Date: Wed, 28 Dec 2022 00:38:04 GMT
- Title: Large-scale single-photon imaging
- Authors: Liheng Bian, Haoze Song, Lintao Peng, Xuyang Chang, Xi Yang, Roarke
Horstmeyer, Lin Ye, Tong Qin, Dezhi Zheng, Jun Zhang
- Abstract summary: The single-photon avalanche diode (SPAD) array has been widely applied in fields such as fluorescence lifetime imaging and quantum computing.
However, large-scale high-fidelity single-photon imaging remains a major challenge due to the complex hardware manufacturing process and heavy noise of SPAD arrays.
We introduce deep learning into SPAD, enabling super-resolution single-photon imaging with over an order of magnitude resolution enhancement and significantly improved bit depth and imaging quality.
- Score: 10.210597636941937
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Benefiting from its single-photon sensitivity, the single-photon avalanche
diode (SPAD) array has been widely applied in fields such as fluorescence
lifetime imaging and quantum computing. However, large-scale high-fidelity
single-photon imaging remains a major challenge due to the complex hardware
manufacturing process and heavy noise of SPAD arrays. In this work, we
introduce deep learning into SPAD, enabling super-resolution single-photon
imaging with over an order of magnitude resolution enhancement and significant
gains in bit depth and imaging quality. We first studied the complex photon
flow model of SPAD electronics to accurately characterize multiple physical
noise sources, and collected a real SPAD image dataset (64 $\times$ 32 pixels,
90 scenes, 10 bit depths, 3 illumination flux levels, 2790 images in total) to
calibrate the noise model parameters. With this real-world physical noise model,
we synthesized for the first time a large-scale realistic single-photon image
dataset (image pairs at 5 resolutions up to megapixel scale, 17250 scenes, 10
bit depths, 3 illumination flux levels, 2.6 million images in total) for
subsequent network training. To tackle the severe super-resolution challenge of
SPAD inputs with low bit depth, low resolution, and heavy noise, we further
built a deep transformer network with a content-adaptive self-attention
mechanism and gated fusion modules, which mines global contextual features to
remove multi-source noise and extract full-frequency details. We applied the
technique to a series of experiments including macroscopic and microscopic
imaging, microfluidic inspection, and Fourier ptychography. The experiments
validate the technique's state-of-the-art super-resolution SPAD imaging
performance, with a PSNR advantage of more than 5 dB over existing methods.
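The abstract describes characterizing multiple physical noise sources of SPAD electronics to synthesize realistic training data. A minimal sketch of how such a SPAD noise model can be simulated is shown below, assuming Poisson photon arrivals, a photon detection efficiency (PDE), a dark count rate (DCR), and binary-frame accumulation; all parameter values are illustrative assumptions, not the paper's calibrated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spad_image(flux, n_frames=255, pde=0.4, dcr=100.0, t_exp=1e-4):
    """Accumulate binary SPAD detections into a low-bit-depth image.

    flux     : photon flux per pixel (photons/s), 2D array
    n_frames : number of binary frames summed (sets bit depth, e.g. 255 -> 8 bit)
    pde      : photon detection efficiency (illustrative value)
    dcr      : dark count rate (counts/s, illustrative value)
    t_exp    : exposure time per binary frame (s)
    """
    # Mean detections per binary frame: true photons plus dark counts.
    lam = (pde * flux + dcr) * t_exp
    # Under Poisson arrivals, a pixel fires (at most once per frame)
    # with probability 1 - exp(-lambda).
    p_fire = 1.0 - np.exp(-lam)
    counts = np.zeros_like(flux, dtype=np.int32)
    for _ in range(n_frames):
        counts += (rng.random(flux.shape) < p_fire).astype(np.int32)
    return counts

flux = np.full((32, 64), 5e4)   # uniform scene on a 64 x 32-pixel array, as in the paper
img = simulate_spad_image(flux)
print(img.min(), img.max())     # counts bounded by n_frames
```

The non-linearity of `1 - exp(-lambda)` is what makes high-flux SPAD measurements saturate, one reason low-bit-depth inputs are hard to super-resolve.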
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
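The bit2bit summary above concerns recovering intensity from sparse 1-bit photon frames. A toy sketch of the underlying measurement model, with a naive maximum-likelihood inversion as a baseline (not bit2bit's self-supervised network; the scene and frame count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-in scene intensity (mean photons per pixel per frame).
intensity = np.clip(rng.normal(0.2, 0.05, size=(16, 16)), 0.01, 1.0)

# 1-bit quanta frames: each pixel records at most one photon per frame,
# firing with probability 1 - exp(-intensity) under Poisson arrivals.
n_frames = 2000
p = 1.0 - np.exp(-intensity)
frames = rng.random((n_frames, *intensity.shape)) < p

# Naive baseline: invert the per-pixel firing rate to recover the
# underlying intensity by maximum likelihood.
rate = frames.mean(axis=0)
recovered = -np.log(1.0 - np.clip(rate, 0.0, 1.0 - 1e-6))

print(np.abs(recovered - intensity).mean())  # small residual error
```

With far fewer frames the rate estimate becomes very noisy, which is where learned spatiotemporal priors such as bit2bit's come in.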
arXiv Detail & Related papers (2024-10-30T17:30:35Z) - Phase Guided Light Field for Spatial-Depth High Resolution 3D Imaging [36.208109063579066]
For 3D imaging, light field cameras are typically single-shot and suffer heavily from low spatial resolution and depth accuracy.
We propose a phase guided light field algorithm to significantly improve both the spatial and depth resolutions for off-the-shelf light field cameras.
arXiv Detail & Related papers (2023-11-17T15:08:15Z) - Passive superresolution imaging of incoherent objects [63.942632088208505]
The method consists of measuring the field's spatial-mode components in the image plane in the overcomplete basis of Hermite-Gaussian modes and their superpositions.
A deep neural network is then used to reconstruct the object from these measurements.
arXiv Detail & Related papers (2023-04-19T15:53:09Z) - Simulating single-photon detector array sensors for depth imaging [2.497104612216142]
Single-Photon Avalanche Detector (SPAD) arrays are a rapidly emerging technology.
We present a robust yet simple numerical procedure that establishes the fundamental limits to depth imaging with SPAD arrays.
arXiv Detail & Related papers (2022-10-07T13:23:34Z) - Decoupled-and-Coupled Networks: Self-Supervised Hyperspectral Image
Super-Resolution with Subpixel Fusion [67.35540259040806]
We propose a subpixel-level HS super-resolution framework by devising a novel decoupled-and-coupled network, called DC-Net.
As the name suggests, DC-Net first decouples the input into common (or cross-sensor) and sensor-specific components.
We append a self-supervised learning module behind the CSU net, guaranteeing material consistency to enhance the detailed appearances of the restored HS product.
arXiv Detail & Related papers (2022-05-07T23:40:36Z) - Robust photon-efficient imaging using a pixel-wise residual shrinkage
network [7.557893223548758]
Single-photon light detection and ranging (LiDAR) has been widely applied to 3D imaging in challenging scenarios.
However, limited signal photon counts and high noise in the collected data pose great challenges for precisely predicting the depth image.
We propose a pixel-wise residual shrinkage network for photon-efficient imaging from high-noise data.
arXiv Detail & Related papers (2022-01-05T05:08:12Z) - M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z) - Robust super-resolution depth imaging via a multi-feature fusion deep
network [2.351601888896043]
Light detection and ranging (LiDAR) via single-photon sensitive detector (SPAD) arrays is an emerging technology that enables the acquisition of depth images at high frame rates.
We develop a deep network built specifically to take advantage of the multiple features that can be extracted from a camera's histogram data.
We apply the network to a range of 3D data, demonstrating denoising and a four-fold resolution enhancement of depth.
arXiv Detail & Related papers (2020-11-20T14:24:12Z) - Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present a first HS-D dataset by building a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z) - Quanta Burst Photography [15.722085082004934]
Single-photon avalanche diodes (SPADs) are an emerging sensor technology capable of detecting individual incident photons.
We present quanta burst photography, a computational photography technique that leverages single-photon cameras (SPCs) as passive imaging devices for photography in challenging conditions.
arXiv Detail & Related papers (2020-06-21T16:20:29Z) - Single-Image HDR Reconstruction by Learning to Reverse the Camera
Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
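The three-stage HDR-to-LDR formation model described above can be sketched directly. A simple gamma curve stands in for the camera response function here (an illustrative assumption, not the paper's learned CRF):

```python
import numpy as np

def hdr_to_ldr(hdr, exposure=1.0, gamma=2.2, bits=8):
    """Forward HDR-to-LDR formation: clipping, response curve, quantization."""
    # (1) Dynamic range clipping after exposure scaling.
    clipped = np.clip(hdr * exposure, 0.0, 1.0)
    # (2) Non-linear mapping standing in for a camera response function.
    mapped = clipped ** (1.0 / gamma)
    # (3) Quantization to the sensor's bit depth.
    levels = 2 ** bits - 1
    return np.round(mapped * levels) / levels

hdr = np.array([0.0, 0.1, 0.5, 2.0])  # radiance values; anything > 1 clips
ldr = hdr_to_ldr(hdr)
print(ldr)
```

Reversing the pipeline means undoing each stage in turn; clipping destroys information, which is why learning-based priors are needed to hallucinate the lost highlights.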
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.