TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic
Point-Spread-Functions
- URL: http://arxiv.org/abs/2303.17583v1
- Date: Thu, 30 Mar 2023 17:51:07 GMT
- Title: TiDy-PSFs: Computational Imaging with Time-Averaged Dynamic
Point-Spread-Functions
- Authors: Sachin Shah, Sakshum Kulshrestha, Christopher A. Metzler
- Abstract summary: Point-spread-function (PSF) engineering is a powerful computational imaging technique wherein a custom phase mask is integrated into an optical system to encode additional information into captured images.
Inspired by recent advances in spatial light modulator (SLM) technology, this paper answers a natural question: Can one encode additional information and achieve superior performance by changing a phase mask dynamically?
We demonstrate, in simulation, that time-averaged dynamic (TiDy) phase masks can offer substantially improved monocular depth estimation and extended depth-of-field imaging performance.
- Score: 10.098114696565865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Point-spread-function (PSF) engineering is a powerful computational imaging
technique wherein a custom phase mask is integrated into an optical system to
encode additional information into captured images. Used in combination with
deep learning, such systems now offer state-of-the-art performance at monocular
depth estimation, extended depth-of-field imaging, lensless imaging, and other
tasks. Inspired by recent advances in spatial light modulator (SLM) technology,
this paper answers a natural question: Can one encode additional information
and achieve superior performance by changing a phase mask dynamically over
time? We first prove that the set of PSFs described by static phase masks is
non-convex and that, as a result, time-averaged PSFs generated by dynamic phase
masks are fundamentally more expressive. We then demonstrate, in simulation,
that time-averaged dynamic (TiDy) phase masks can offer substantially improved
monocular depth estimation and extended depth-of-field imaging performance.
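The abstract's central argument is that intensities from successive SLM phase masks add on the sensor, so a time-averaged PSF is a convex combination of static PSFs, which can escape the non-convex set reachable by any single mask. A minimal Fourier-optics sketch of this idea follows; the grid size, circular aperture, and random masks are illustrative assumptions, not the paper's optimized designs.

```python
import numpy as np

def psf_from_phase_mask(phase, aperture):
    """Incoherent intensity PSF of a pupil-plane phase mask (Fraunhofer model)."""
    pupil = aperture * np.exp(1j * phase)        # complex pupil function
    field = np.fft.fftshift(np.fft.fft2(pupil))  # far-field amplitude
    psf = np.abs(field) ** 2                     # intensity PSF
    return psf / psf.sum()                       # normalize to unit energy

rng = np.random.default_rng(0)
n = 64
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2]
aperture = (xx**2 + yy**2 <= (n//4)**2).astype(float)  # circular pupil

# Time-averaged dynamic (TiDy) PSF: intensities from each SLM frame add,
# so the effective PSF is the mean (a convex combination) of per-mask PSFs.
masks = [rng.uniform(0, 2*np.pi, (n, n)) for _ in range(4)]
psfs = [psf_from_phase_mask(m, aperture) for m in masks]
tidy_psf = np.mean(psfs, axis=0)
```

Because `tidy_psf` is a convex combination of static PSFs, it can realize PSFs that no single static phase mask produces, which is the expressiveness gain the paper proves.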
Related papers
- CodedEvents: Optimal Point-Spread-Function Engineering for 3D-Tracking with Event Cameras [12.329357178025205]
Point-spread-function (PSF) engineering is a well-established computational imaging technique.
We show that existing Fisher phase masks are already near-optimal for localizing static point sources.
We then demonstrate that existing designs are suboptimal for tracking point sources.
arXiv Detail & Related papers (2024-06-13T17:59:46Z)
- SelfPromer: Self-Prompt Dehazing Transformers with Depth-Consistency [51.92434113232977]
This work presents an effective depth-consistency self-prompt Transformer for image dehazing.
It is motivated by an observation that the estimated depths of an image with haze residuals and its clear counterpart vary.
By incorporating the prompt, prompt embedding, and prompt attention into an encoder-decoder network based on VQGAN, we can achieve better perception quality.
arXiv Detail & Related papers (2023-03-13T11:47:24Z)
- Robust Dynamic Radiance Fields [79.43526586134163]
Dynamic radiance field reconstruction methods aim to model the time-varying structure and appearance of a dynamic scene.
Existing methods, however, assume that accurate camera poses can be reliably estimated by Structure from Motion (SfM) algorithms.
We address this robustness issue by jointly estimating the static and dynamic radiance fields along with the camera parameters.
arXiv Detail & Related papers (2023-01-05T18:59:51Z)
- End-to-end Learning for Joint Depth and Image Reconstruction from Diffracted Rotation [10.896567381206715]
We propose a novel end-to-end learning approach for depth from diffracted rotation.
Our approach requires a significantly less complex model and less training data, yet it is superior to existing methods in the task of monocular depth estimation.
arXiv Detail & Related papers (2022-04-14T16:14:37Z)
- Uncertainty-Aware Deep Multi-View Photometric Stereo [100.97116470055273]
Photometric stereo (PS) is excellent at recovering high-frequency surface details, whereas multi-view stereo (MVS) can help remove the low-frequency distortion due to PS and retain the global shape.
This paper proposes an approach that can effectively utilize such complementary strengths of PS and MVS.
We estimate per-pixel surface normals and depth using an uncertainty-aware deep-PS network and deep-MVS network, respectively.
arXiv Detail & Related papers (2022-02-26T05:45:52Z)
- Diffractive all-optical computing for quantitative phase imaging [0.0]
We demonstrate a diffractive QPI network that can synthesize the quantitative phase image of an object.
A diffractive QPI network is a specialized all-optical processor designed to perform a quantitative phase-to-intensity transformation.
arXiv Detail & Related papers (2022-01-22T05:28:44Z)
- Mask-guided Spectral-wise Transformer for Efficient Hyperspectral Image Reconstruction [127.20208645280438]
Hyperspectral image (HSI) reconstruction aims to recover the 3D spatial-spectral signal from a 2D measurement.
Modeling the inter-spectra interactions is beneficial for HSI reconstruction.
Mask-guided Spectral-wise Transformer (MST) proposes a novel framework for HSI reconstruction.
arXiv Detail & Related papers (2021-11-15T16:59:48Z)
- CodedStereo: Learned Phase Masks for Large Depth-of-field Stereo [24.193656749401075]
Conventional stereo suffers from a fundamental trade-off between imaging volume and signal-to-noise ratio.
We propose a novel end-to-end learning-based technique to overcome this limitation.
We show a 6x increase in volume that can be imaged in simulation.
arXiv Detail & Related papers (2021-04-09T23:44:52Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network that takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Time-Multiplexed Coded Aperture Imaging: Learned Coded Aperture and Pixel Exposures for Compressive Imaging Systems [56.154190098338965]
We show that our proposed time-multiplexed coded aperture (TMCA) can be optimized end-to-end.
TMCA induces better coded snapshots enabling superior reconstructions in two different applications: compressive light field imaging and hyperspectral imaging.
This codification outperforms the state-of-the-art compressive imaging systems by more than 4dB in those applications.
arXiv Detail & Related papers (2021-04-06T22:42:34Z)
- Learning Wavefront Coding for Extended Depth of Field Imaging [4.199844472131922]
Extended depth of field (EDoF) imaging is a challenging ill-posed problem.
We propose a computational imaging approach for EDoF, where we employ wavefront coding via a diffractive optical element.
We demonstrate results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
arXiv Detail & Related papers (2019-12-31T17:00:09Z)
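The wavefront-coding entry above depends on a diffractive element whose PSF stays nearly constant across defocus, so a single deconvolution kernel recovers the whole depth range. A classic textbook illustration of this (not the paper's learned design) is the cubic phase mask, sketched here with hypothetical grid and strength parameters.

```python
import numpy as np

n = 64
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2] / (n // 2)  # normalized pupil coords
aperture = (xx**2 + yy**2 <= 1.0).astype(float)       # unit circular pupil

def psf(defocus, alpha):
    """Intensity PSF of a cubic phase mask under quadratic defocus aberration."""
    phase = alpha * (xx**3 + yy**3) + defocus * (xx**2 + yy**2)
    pupil = aperture * np.exp(1j * phase)
    p = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return p / p.sum()

# With a strong cubic term the PSF changes little across defocus values,
# which is what makes one-shot deconvolution viable over an extended depth range.
psfs = [psf(d, alpha=20.0) for d in (-5.0, 0.0, 5.0)]
```

Setting `alpha=0.0` recovers an ordinary defocused aperture for comparison, where the PSF varies strongly with `defocus`.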
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.