CodedEvents: Optimal Point-Spread-Function Engineering for 3D-Tracking with Event Cameras
- URL: http://arxiv.org/abs/2406.09409v1
- Date: Thu, 13 Jun 2024 17:59:46 GMT
- Title: CodedEvents: Optimal Point-Spread-Function Engineering for 3D-Tracking with Event Cameras
- Authors: Sachin Shah, Matthew Albert Chan, Haoming Cai, Jingxi Chen, Sakshum Kulshrestha, Chahat Deep Singh, Yiannis Aloimonos, Christopher Metzler
- Abstract summary: Point-spread-function (PSF) engineering is a well-established computational imaging technique.
We show that existing Fisher phase masks are already near-optimal for localizing static point sources.
We then demonstrate that existing designs are suboptimal for tracking point sources.
- Score: 12.329357178025205
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point-spread-function (PSF) engineering is a well-established computational imaging technique that uses phase masks and other optical elements to embed extra information (e.g., depth) into the images captured by conventional CMOS image sensors. To date, however, PSF engineering has not been applied to neuromorphic event cameras, a powerful new image sensing technology that responds to changes in the log-intensity of light. This paper establishes theoretical limits (Cramér-Rao bounds) on 3D point localization and tracking with PSF-engineered event cameras. Using these bounds, we first demonstrate that existing Fisher phase masks are already near-optimal for localizing static flashing point sources (e.g., blinking fluorescent molecules). We then demonstrate that existing designs are suboptimal for tracking moving point sources and proceed to use our theory to design optimal phase masks and binary amplitude masks for this task. To overcome the non-convexity of the design problem, we leverage novel implicit-neural-representation-based parameterizations of the phase and amplitude masks. We demonstrate the efficacy of our designs through extensive simulations. We also validate our method with a simple prototype.
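The bounds in the abstract come from the Fisher information of the imaging model. As a rough illustration of the general recipe only (not the paper's event-camera model), the sketch below computes Cramér-Rao lower bounds for localizing a point source in (x, y, z) under Poisson noise, using a toy Gaussian PSF whose width grows with defocus; all function names and parameter values are hypothetical.

```python
# Sketch: Cramér-Rao bounds for point-source localization under Poisson noise.
# Toy model (assumption, not the paper's design): a Gaussian PSF whose width
# grows with defocus |z|, sampled on a pixel grid with a constant background.
import numpy as np

def psf_image(x0, y0, z0, n=33, photons=2000.0, bg=2.0):
    """Expected photon counts for a point source at (x0, y0, z0) [pixel units]."""
    sigma = 1.2 * np.sqrt(1.0 + (z0 / 3.0) ** 2)  # defocus widens the PSF
    yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))
    return photons * g / g.sum() + bg

def crb(theta, eps=1e-3):
    """Cramér-Rao lower bounds on std. dev. of (x, y, z) estimates.
    Poisson Fisher information: I_ij = sum_pixels (1/mu) dmu/di dmu/dj,
    with central-difference gradients."""
    mu = psf_image(*theta)
    grads = []
    for i in range(3):
        tp, tm = list(theta), list(theta)
        tp[i] += eps
        tm[i] -= eps
        grads.append((psf_image(*tp) - psf_image(*tm)) / (2.0 * eps))
    fisher = np.array([[np.sum(gi * gj / mu) for gj in grads] for gi in grads])
    return np.sqrt(np.diag(np.linalg.inv(fisher)))

print(crb([0.0, 0.0, 1.0]))  # CRB std-devs for (x, y, z), in pixels
```

Note that for this rotationally symmetric model the z-information vanishes exactly at focus (the Gaussian width has zero derivative at z = 0, making the Fisher matrix singular there), which is precisely the failure mode that engineered, depth-sensitive PSFs such as the Fisher masks are designed to avoid.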
Related papers
- Joint 3D Shape and Motion Estimation from Rolling Shutter Light-Field Images [2.0277446818410994]
We propose an approach to address the problem of 3D reconstruction of scenes from a single image captured by a light-field camera equipped with a rolling shutter sensor.
Our method leverages the 3D information cues present in the light-field and the motion information provided by the rolling shutter effect.
We present a generic model for the imaging process of this sensor and a two-stage algorithm that minimizes the re-projection error.
arXiv Detail & Related papers (2023-11-02T15:08:18Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system effectively exploits the signals of the light-weight ToF sensor and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Real-Time Radiance Fields for Single-Image Portrait View Synthesis [85.32826349697972]
We present a one-shot method to infer and render a 3D representation from a single unposed image in real-time.
Given a single RGB input, our image encoder directly predicts a canonical triplane representation of a neural radiance field for 3D-aware novel view synthesis via volume rendering.
Our method is fast (24 fps) on consumer hardware, and produces higher quality results than strong GAN-inversion baselines that require test-time optimization.
arXiv Detail & Related papers (2023-05-03T17:56:01Z)
- TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic Point-Spread-Functions [10.098114696565865]
Point-spread-function (PSF) engineering is a powerful computational imaging technique wherein a custom phase mask is integrated into an optical system to encode additional information into captured images.
Inspired by recent advances in spatial light modulator (SLM) technology, this paper answers a natural question: Can one encode additional information and achieve superior performance by changing a phase mask dynamically?
We demonstrate, in simulation, that time-averaged dynamic (TiDy) phase masks can offer substantially improved monocular depth estimation and extended depth-of-field imaging performance.
arXiv Detail & Related papers (2023-03-30T17:51:07Z)
- LWGNet: Learned Wirtinger Gradients for Fourier Ptychographic Phase Retrieval [14.588976801396576]
We propose a hybrid model-driven residual network that combines the knowledge of the forward imaging system with a deep data-driven network.
Unlike other conventional unrolling techniques, LWGNet uses fewer stages while performing on par with or better than existing traditional and deep learning techniques.
This improvement in performance for low-bit depth and low-cost sensors has the potential to bring down the cost of FPM imaging setup significantly.
arXiv Detail & Related papers (2022-08-08T17:22:54Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
- PREF: Phasorial Embedding Fields for Compact Neural Representations [54.44527545923917]
We present a phasorial embedding field, PREF, as a compact representation to facilitate neural signal modeling and reconstruction tasks.
Our experiments show that the PREF-based neural signal processing technique is on par with the state-of-the-art in 2D image completion, 3D SDF surface regression, and 5D radiance field reconstruction.
arXiv Detail & Related papers (2022-05-26T17:43:03Z)
- Mask-ToF: Learning Microlens Masks for Flying Pixel Correction in Time-of-Flight Imaging [46.09238528698229]
We introduce Mask-ToF, a method to reduce flying pixels (FP) in time-of-flight (ToF) depth captures.
FPs are pervasive artifacts which occur around depth edges, where light paths from both an object and its background are integrated over the aperture.
Mask-ToF learns a microlens-level occlusion mask which effectively creates a custom-shaped sub-aperture for each sensor pixel.
We develop a differentiable ToF simulator to jointly train a convolutional neural network to decode this information and produce high-fidelity, low-FP depth reconstructions.
arXiv Detail & Related papers (2021-03-30T21:30:26Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- Learning an optimal PSF-pair for ultra-dense 3D localization microscopy [33.20228745456316]
A long-standing challenge in multiple-particle-tracking is the accurate and precise 3D localization of individual particles at close proximity.
One established approach for snapshot 3D imaging is point-spread-function (PSF) engineering, in which the PSF is modified to encode the axial information.
Here we suggest using multiple PSFs simultaneously to help overcome this challenge, and investigate the problem of engineering multiple PSFs for dense 3D localization.
arXiv Detail & Related papers (2020-09-29T20:54:52Z)
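A recurring idea in the PSF-engineering papers above is that a depth-dependent PSF encodes axial position in a single snapshot. A minimal, hypothetical illustration uses an astigmatic Gaussian model (a classic 3D-localization design, not any specific mask from these papers): the x- and y-widths defocus around offset focal planes, so the spot's ellipticity reveals both the magnitude and the sign of z. All names and parameter values below are illustrative assumptions.

```python
# Sketch: axial encoding via an astigmatic PSF (toy Gaussian model).
# The x- and y-widths reach their minima at different depths, so the
# measured ellipticity of the spot maps back to z, including its sign.
import numpy as np

def astigmatic_psf(z, n=25, s0=1.5, d=4.0):
    """Unit-energy PSF at depth z; x and y focus at z = +d/2 and z = -d/2."""
    sx = s0 * np.sqrt(1.0 + ((z - d / 2.0) / d) ** 2)
    sy = s0 * np.sqrt(1.0 + ((z + d / 2.0) / d) ** 2)
    yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    g = np.exp(-(xx ** 2) / (2.0 * sx ** 2) - (yy ** 2) / (2.0 * sy ** 2))
    return g / g.sum()

def ellipticity(img):
    """Second-moment width ratio sigma_x / sigma_y of a normalized spot."""
    n = img.shape[0]
    yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    return np.sqrt((img * xx ** 2).sum() / (img * yy ** 2).sum())
```

In this model `ellipticity(astigmatic_psf(-2.0))` is greater than 1 while `ellipticity(astigmatic_psf(2.0))` is less than 1, so a simple moment measurement distinguishes above-focus from below-focus sources; engineered phase masks generalize this idea to richer depth-dependent shapes.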
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.