Count-Free Single-Photon 3D Imaging with Race Logic
- URL: http://arxiv.org/abs/2307.04924v1
- Date: Mon, 10 Jul 2023 22:17:59 GMT
- Title: Count-Free Single-Photon 3D Imaging with Race Logic
- Authors: Atul Ingle and David Maier
- Abstract summary: A single-photon 3D camera determines the round-trip time of a laser pulse by capturing the arrival of individual photons at each camera pixel.
In-pixel histogram processing is computationally expensive and requires a large amount of memory per pixel.
Here we present an online approach for distance estimation without explicitly storing photon counts.
- Score: 6.204834501774316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single-photon cameras (SPCs) have emerged as a promising technology for
high-resolution 3D imaging. A single-photon 3D camera determines the round-trip
time of a laser pulse by capturing the arrival of individual photons at each
camera pixel. Constructing photon-timestamp histograms is a fundamental
operation for a single-photon 3D camera. However, in-pixel histogram processing
is computationally expensive and requires a large amount of memory per pixel.
Digitizing and transferring photon timestamps to an off-sensor histogramming
module is bandwidth- and power-hungry. Here we present an online approach for
distance estimation without explicitly storing photon counts. The two key
ingredients of our approach are (a) processing photon streams using race logic,
which maintains photon data in the time-delay domain, and (b) constructing
count-free equi-depth histograms. Equi-depth histograms are a succinct
representation for "peaky" distributions, such as those obtained by an SPC
pixel from a laser pulse reflected by a surface. Our approach uses a binner
element that converges to the median (or, more generally, to another quantile)
of a distribution. We cascade multiple binners to form an equi-depth
histogrammer that produces multi-bin histograms. Our evaluation shows that this
method can provide an order of magnitude reduction in bandwidth and power
consumption while maintaining distance reconstruction accuracy similar to that of
conventional processing methods.
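To make the count-free binner concrete, the sketch below implements the underlying idea in Python: a streaming estimator nudges a bin boundary toward a target quantile of the photon-timestamp stream, and a bank of such estimators yields equi-depth bin boundaries without storing any photon counts. This is only an illustrative software analogue using a Frugal-style probabilistic update; the paper's binner is a race-logic circuit operating on delay-coded photon signals, and the names and parameters below (QuantileBinner, EquiDepthHistogrammer, step, t_max) are assumptions made for the sketch, not the paper's interface.

```python
import random

class QuantileBinner:
    """Count-free streaming estimator that nudges a single boundary toward
    the q-quantile of the incoming photon timestamps (Frugal-style update)."""

    def __init__(self, q, init=0.0, step=1.0):
        self.q = q              # target quantile, e.g. 0.5 for the median
        self.boundary = init    # current bin-boundary estimate (time units)
        self.step = step        # how far the boundary moves per update

    def update(self, t):
        # Late photon: move the boundary up with probability q.
        if t > self.boundary and random.random() < self.q:
            self.boundary += self.step
        # Early photon: move the boundary down with probability 1 - q.
        elif t < self.boundary and random.random() < 1.0 - self.q:
            self.boundary -= self.step


class EquiDepthHistogrammer:
    """Tracks n_bins - 1 boundaries so that each bin ends up holding roughly
    an equal share of the photons. The paper cascades binners in hardware;
    here they simply run side by side on the same stream."""

    def __init__(self, n_bins, t_max, step=1.0):
        self.binners = [
            QuantileBinner(k / n_bins, init=k * t_max / n_bins, step=step)
            for k in range(1, n_bins)
        ]

    def update(self, t):
        for b in self.binners:
            b.update(t)

    def boundaries(self):
        return sorted(b.boundary for b in self.binners)


# Toy stream: a narrow laser return near t = 112 plus uniform ambient photons.
hist = EquiDepthHistogrammer(n_bins=4, t_max=1000.0)
for _ in range(20000):
    t = random.gauss(112.0, 2.0) if random.random() < 0.7 else random.uniform(0.0, 1000.0)
    hist.update(t)
print(hist.boundaries())  # boundaries crowd around the laser peak near t = 112
```

Because the timestamp distribution is peaky, the boundaries crowd around the laser return, so the median boundary alone already encodes the round-trip time without any per-bin photon counts being stored.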
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
arXiv Detail & Related papers (2024-10-30T17:30:35Z)
- Single-Photon 3D Imaging with Equi-Depth Photon Histograms [4.432168053497992]
Single-photon 3D cameras estimate the round-trip time of a laser pulse by forming equi-width (EW) histograms of detected photon timestamps.
EW histograms require high bandwidth and in-pixel memory, making SPCs less attractive in resource-constrained settings.
We propose a 3D sensing technique based on equi-depth (ED) histograms.
arXiv Detail & Related papers (2024-08-28T22:02:38Z)
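The entry above contrasts equi-width and equi-depth histograms; the short numpy sketch below illustrates the difference on simulated timestamps (the peak position, photon counts, and bin counts are made-up values for illustration, not figures from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative photon timestamps: a narrow laser return plus uniform ambient light.
timestamps = np.concatenate([
    rng.normal(420.0, 3.0, size=7000),     # signal photons around t = 420
    rng.uniform(0.0, 1000.0, size=3000),   # background photons
])

# Equi-width (EW): fixed bin edges, so the photon counts per bin must be stored.
ew_counts, ew_edges = np.histogram(timestamps, bins=16, range=(0.0, 1000.0))

# Equi-depth (ED): edges placed at quantiles, so every bin holds roughly the
# same number of photons and the edge positions themselves encode the peak.
ed_edges = np.quantile(timestamps, np.linspace(0.0, 1.0, 17))

print(ew_counts)          # most counts pile into one or two EW bins
print(np.diff(ed_edges))  # ED bins are narrow near t = 420, wide elsewhere
```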
- Deep Richardson-Lucy Deconvolution for Low-Light Image Deblurring [48.80983873199214]
We develop a data-driven approach to model the saturated pixels by a learned latent map.
Based on the new model, the non-blind deblurring task can be formulated as a maximum a posteriori (MAP) problem.
To estimate high-quality deblurred images without amplified artifacts, we develop a prior estimation network.
arXiv Detail & Related papers (2023-08-10T12:53:30Z)
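For context on the entry above, here is a minimal sketch of the classic Richardson-Lucy iteration it builds on, assuming a known, spatially invariant blur kernel; the paper's contributions (the learned latent map for saturated pixels and the prior-estimation network) are not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """Classic Richardson-Lucy update for non-blind deblurring, i.e. the
    MAP-with-Poisson-noise view referenced above, with a known PSF."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)              # data-fidelity term
        estimate = estimate * fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy usage with a small Gaussian blur kernel.
g = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()
sharp = np.zeros((32, 32)); sharp[16, 16] = 1.0
blurry = fftconvolve(sharp, psf, mode="same") + 1e-3
print(richardson_lucy(blurry, psf).max())  # sharpens back toward the spike
```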
- Real-Time Radiance Fields for Single-Image Portrait View Synthesis [85.32826349697972]
We present a one-shot method to infer and render a 3D representation from a single unposed image in real time.
Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a neural radiance field for 3D-aware novel view synthesis via volume rendering.
Our method is fast (24 fps) on consumer hardware, and produces higher quality results than strong GAN-inversion baselines that require test-time optimization.
arXiv Detail & Related papers (2023-05-03T17:56:01Z)
- Generative Multiplane Neural Radiance for 3D-Aware Image Generation [102.15322193381617]
We present a method to efficiently generate 3D-aware high-resolution images that are view-consistent across multiple target views.
Our GMNR model generates 3D-aware images of 1024 × 1024 pixels at 17.6 FPS on a single V100.
arXiv Detail & Related papers (2023-04-03T17:41:20Z)
- $PC^2$: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D Reconstruction [97.06927852165464]
Reconstructing the 3D shape of an object from a single RGB image is a long-standing and highly challenging problem in computer vision.
We propose a novel method for single-image 3D reconstruction which generates a sparse point cloud via a conditional denoising diffusion process.
arXiv Detail & Related papers (2023-02-21T13:37:07Z)
- Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data [80.14669385741202]
We propose a self-supervised pre-training method for 3D perception models tailored to autonomous driving data.
We leverage the availability of synchronized and calibrated image and Lidar sensors in autonomous driving setups.
Our method does not require any point cloud or image annotations.
arXiv Detail & Related papers (2022-03-30T12:40:30Z)
- A photosensor employing data-driven binning for ultrafast image recognition [0.0]
Pixel binning is a technique widely used in optical image acquisition and spectroscopy.
Here, we push the concept of binning to its limit by combining a large fraction of the sensor elements into a single superpixel.
For a given pattern recognition task, its optimal shape is determined from training data using a machine learning algorithm.
arXiv Detail & Related papers (2021-11-20T15:38:39Z)
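A minimal sketch of the binning idea described in the entry above: conventional block binning versus collapsing the whole sensor into a single mask-weighted superpixel. The mask here is a uniform placeholder; in the paper its shape is learned from training data for the recognition task.

```python
import numpy as np

def bin_pixels(frame, factor):
    """Ordinary pixel binning: sum non-overlapping factor x factor blocks."""
    h, w = frame.shape
    cropped = frame[:h - h % factor, :w - w % factor]
    return cropped.reshape(h // factor, factor, w // factor, factor).sum(axis=(1, 3))

def superpixel_readout(frame, mask):
    """Binning pushed to its limit: the whole sensor collapsed into one value
    through a data-driven mask (here just a uniform placeholder)."""
    return float((frame * mask).sum())

frame = np.random.poisson(5.0, size=(64, 64)).astype(float)
print(bin_pixels(frame, 4).shape)                               # (16, 16)
print(superpixel_readout(frame, np.ones((64, 64)) / frame.size))
```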
- High-speed object detection with a single-photon time-of-flight image sensor [2.648554238948439]
We present results from a portable SPAD camera system that outputs 16-bin photon timing histograms with 64x32 spatial resolution.
The results are relevant for safety-critical computer vision applications which would benefit from faster than human reaction times.
arXiv Detail & Related papers (2021-07-28T14:53:44Z)
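As a rough illustration of how the per-pixel 16-bin timing histograms in the entry above translate into distance, the sketch below picks the peak timing bin per pixel and converts the round-trip time to depth; the bin width is an assumed value, not one reported in the summary.

```python
import numpy as np

C = 3.0e8  # speed of light in m/s

def depth_from_histograms(hists, bin_width_s):
    """Convert per-pixel timing histograms into a depth map by picking the
    peak bin and halving the round-trip distance. bin_width_s is an assumed
    temporal bin width for the sketch."""
    peak_bin = hists.argmax(axis=-1)                   # (H, W) bin indices
    round_trip_time = (peak_bin + 0.5) * bin_width_s   # bin centre, seconds
    return 0.5 * C * round_trip_time                   # metres

# Toy example matching the 64x32 spatial, 16-bin format described above.
hists = np.random.poisson(1.0, size=(32, 64, 16))
hists[..., 7] += 20   # pretend the laser return lands in bin 7 everywhere
print(depth_from_histograms(hists, bin_width_s=2e-9)[0, 0])  # ~2.25 m
```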
- Mesoscopic photogrammetry with an unstabilized phone camera [8.210210271599134]
We present a feature-free photogrammetric computation technique that enables quantitative 3D mesoscopic (mm-scale height variation) imaging.
Our end-to-end, pixel-intensity-based approach jointly registers and stitches all the images by estimating a coaligned height map.
We also propose strategies for reducing computation time and memory use that are applicable to other multi-frame registration problems.
arXiv Detail & Related papers (2020-12-11T00:09:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.