Mind the Exit Pupil Gap: Revisiting the Intrinsics of a Standard Plenoptic Camera
- URL: http://arxiv.org/abs/2402.12891v2
- Date: Fri, 5 Apr 2024 09:26:07 GMT
- Title: Mind the Exit Pupil Gap: Revisiting the Intrinsics of a Standard Plenoptic Camera
- Authors: Tim Michels, Daniel Mäckelmann, Reinhard Koch
- Abstract summary: We study the role of the main lens exit pupil in standard plenoptic camera (SPC) images.
We deduce the connection between the refocusing distance and the resampling parameter for the decoded light field.
We aim to contribute to a more accurate and nuanced understanding of plenoptic camera optics.
- Score: 0.8844616380849608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Among the common applications of plenoptic cameras are depth reconstruction and post-shot refocusing. These require a calibration relating the camera-side light field to that of the scene. Numerous methods with this goal have been developed based on thin lens models for the plenoptic camera's main lens and microlenses. Our work addresses the often-overlooked role of the main lens exit pupil in these models and specifically in the decoding process of standard plenoptic camera (SPC) images. We formally deduce the connection between the refocusing distance and the resampling parameter for the decoded light field and provide an analysis of the errors that arise when the exit pupil is not considered. In addition, previous work is revisited with respect to the exit pupil's role and all theoretical results are validated through a ray-tracing-based simulation. With the public release of the evaluated SPC designs alongside our simulation and experimental data we aim to contribute to a more accurate and nuanced understanding of plenoptic camera optics.
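To make the refocusing relationship concrete, here is a minimal NumPy sketch of the classic shift-and-add refocusing scheme, in which a single resampling parameter `alpha` selects the refocusing plane. The function name and the simple `(1 - 1/alpha)` shift rule are the standard textbook formulation, not the paper's derivation; the paper's point is precisely that mapping `alpha` to a metric refocusing distance requires accounting for the main lens exit pupil.
```python
import numpy as np
from scipy.ndimage import shift

def refocus_shift_and_add(lightfield, alpha):
    """Shift-and-add refocusing of a decoded 4D light field L[u, v, s, t].

    `alpha` is the resampling parameter selecting the refocusing plane;
    its relation to a metric refocusing distance depends on the camera
    intrinsics and, as the paper shows, on the exit pupil position.
    """
    U, V, S, T = lightfield.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Translate each sub-aperture view proportionally to its
            # offset from the central view, then average.
            dy = (u - cu) * (1.0 - 1.0 / alpha)
            dx = (v - cv) * (1.0 - 1.0 / alpha)
            out += shift(lightfield[u, v], (dy, dx), order=1, mode='nearest')
    return out / (U * V)
```
With `alpha = 1` the views are simply averaged (focus stays on the decoded focal plane); values above or below 1 virtually move the focal plane backward or forward.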
Related papers
- Optical Aberration Correction in Postprocessing using Imaging Simulation [17.331939025195478]
The popularity of mobile photography continues to grow.
Recent cameras have shifted some aberration-correction tasks from optical design to postprocessing systems.
We propose a practical method for recovering the degradation caused by optical aberrations.
arXiv Detail & Related papers (2023-05-10T03:20:39Z)
- A Geometric Model for Polarization Imaging on Projective Cameras [5.381004207943598]
We present a geometric model describing how a general projective camera captures the light polarization state.
Our model is implemented as a pre-processing operation acting on raw images, followed by a per-pixel rotation of the reconstructed normal field.
Experiments on existing and new datasets demonstrate the accuracy of the model when applied to commercially available polarimetric cameras.
arXiv Detail & Related papers (2022-11-29T17:12:26Z)
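As an illustration of the second stage described in the polarization entry above, here is a minimal NumPy sketch of a per-pixel rotation applied to a reconstructed normal field. The rotation field itself (derived in the paper from the projective camera geometry) is taken as given; all names are illustrative.
```python
import numpy as np

def rotate_normal_field(normals, rotations):
    """Apply a per-pixel rotation to a reconstructed normal field.

    normals   : (H, W, 3) unit normals, e.g. from shape-from-polarization.
    rotations : (H, W, 3, 3) per-pixel rotation matrices; in the paper's
                model these depend on the viewing ray at each pixel.
    """
    # out[h, w, i] = sum_j rotations[h, w, i, j] * normals[h, w, j]
    return np.einsum('hwij,hwj->hwi', rotations, normals)
```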
- Learning rich optical embeddings for privacy-preserving lensless image classification [17.169529483306103]
We exploit the unique multiplexing property of lensless imaging by casting the optics as an encoder that produces learned embeddings directly at the camera sensor.
We do so in the context of image classification, where we jointly optimize the encoder's parameters and those of an image classifier in an end-to-end fashion.
Our experiments show that jointly learning the lensless optical encoder and the digital processing allows for lower resolution embeddings at the sensor, and hence better privacy as it is much harder to recover meaningful images from these measurements.
arXiv Detail & Related papers (2022-06-03T07:38:09Z)
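A minimal PyTorch sketch of the end-to-end idea in the lensless-classification entry above: the optics are modeled as a learnable encoding layer optimized jointly with the digital classifier. The linear `optics` layer is a hypothetical stand-in; the actual system constrains the encoder to physically realizable optics and simulates the sensor measurement.
```python
import torch
import torch.nn as nn

class LenslessEncoderClassifier(nn.Module):
    """Sketch: a learnable 'optical' encoder producing a low-dimensional
    embedding at the sensor, trained jointly with a digital classifier."""
    def __init__(self, h=32, w=32, embed=64, num_classes=10):
        super().__init__()
        self.optics = nn.Linear(h * w, embed, bias=False)  # stand-in optics
        self.classifier = nn.Sequential(nn.ReLU(), nn.Linear(embed, num_classes))

    def forward(self, x):                          # x: (B, 1, h, w) scenes
        measurement = self.optics(x.flatten(1))    # simulated sensor reading
        return self.classifier(measurement)

model = LenslessEncoderClassifier()
opt = torch.optim.Adam(model.parameters())         # optics + classifier jointly
logits = model(torch.randn(4, 1, 32, 32))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 1, 2, 3]))
loss.backward()
opt.step()                                         # one end-to-end update
```
Because the embedding dimension is much smaller than the scene resolution, recovering a meaningful image from the measurement is hard, which is the privacy argument made in the abstract.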
- Ray Tracing-Guided Design of Plenoptic Cameras [1.1421942894219896]
The design of a plenoptic camera requires the combination of two dissimilar optical systems.
We present a method to calculate the remaining aperture, sensor and microlens array parameters under different sets of constraints.
Our ray-tracing-based approach is shown to result in models outperforming their counterparts generated with the commonly used paraxial approximations.
arXiv Detail & Related papers (2022-03-09T11:57:00Z)
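The ray-tracing design entry above computes full parameter sets; for intuition, here is a paraxial sketch (pure Python, standard thin-lens formulas) of one classic constraint it supersedes: matching the microlens f-number to the image-side working f-number of the main lens so the microimages tile the sensor without overlap or gaps. Strictly, the image-side cone is set by the exit pupil rather than the aperture stop, which is exactly the gap the main paper above discusses.
```python
def matched_microlens_focal_length(main_focal, aperture_diam, object_dist, pitch):
    """Paraxial f-number matching for a plenoptic camera (sketch only)."""
    # Thin-lens image distance: 1/f = 1/d_o + 1/d_i  =>  solve for d_i.
    image_dist = 1.0 / (1.0 / main_focal - 1.0 / object_dist)
    # Image-side working f-number of the main lens.
    working_fnum = image_dist / aperture_diam
    # Microlens focal length that matches this f-number at the given pitch.
    return pitch * working_fnum

# Example: 50 mm f/4 main lens focused at 2 m, 40 um microlens pitch.
print(matched_microlens_focal_length(0.050, 0.0125, 2.0, 40e-6))  # ~1.64e-4 m
```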
- Unrolled Primal-Dual Networks for Lensless Cameras [0.45880283710344055]
We show that learning a supervised primal-dual reconstruction method results in image quality matching the state of the art in the literature.
This improvement stems from our finding that embedding learnable forward and adjoint models in a learned primal-dual optimization framework can even improve the quality of reconstructed images.
arXiv Detail & Related papers (2022-03-08T19:21:39Z)
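A minimal PyTorch sketch of the generic unrolled primal-dual structure referenced in the entry above: a fixed number of Chambolle-Pock-style iterations in which the proximal steps, and notably the forward and adjoint models, are learnable. All module shapes and step sizes are illustrative, not the authors' architecture.
```python
import torch
import torch.nn as nn

class UnrolledPrimalDual(nn.Module):
    def __init__(self, n, m, iters=5):
        super().__init__()
        self.A = nn.Linear(n, m, bias=False)     # learnable forward model
        self.At = nn.Linear(m, n, bias=False)    # learnable adjoint model
        self.prox_primal = nn.ModuleList(nn.Linear(n, n) for _ in range(iters))
        self.prox_dual = nn.ModuleList(nn.Linear(m, m) for _ in range(iters))
        self.sigma, self.tau = 0.5, 0.5          # fixed step sizes (sketch)

    def forward(self, y):                        # y: (B, m) lensless measurement
        x = self.At(y)                           # primal initialization
        u = torch.zeros_like(y)                  # dual variable
        for pp, pd in zip(self.prox_primal, self.prox_dual):
            u = pd(u + self.sigma * (self.A(x) - y))  # dual step + learned prox
            x = pp(x - self.tau * self.At(u))         # primal step + learned prox
        return x                                 # reconstructed image (flattened)
```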
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
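The entry above reconstructs intensity images from events with a neural network and then calibrates conventionally. The reconstruction network is outside this sketch; assuming reconstructed grayscale frames are available, the downstream step is standard OpenCV checkerboard calibration:
```python
import cv2
import numpy as np

def calibrate_from_reconstructions(frames, board=(9, 6), square=0.02):
    """Intrinsics from frames reconstructed out of an event stream.
    frames: list of uint8 grayscale images from an event-to-image network."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for img in frames:
        found, corners = cv2.findChessboardCorners(img, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    # Standard frame-based calibration now applies to the event camera.
    rms, K, dist, _, _ = cv2.calibrateCamera(
        obj_pts, img_pts, frames[0].shape[::-1], None, None)
    return K, dist
```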
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network that takes the aberrant image and PSF map as input and produces the latent high-quality image by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
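For context on the inverse problem the entry above addresses with lens-specific deep priors, here is the classical baseline: Wiener deconvolution with a known, spatially invariant PSF. Real aberrations vary across the field, which is why the paper conditions a network on a PSF map instead.
```python
import numpy as np

def wiener_deconvolve(image, psf, snr=1e-2):
    """Wiener deconvolution for a spatially invariant PSF (baseline sketch)."""
    kh, kw = psf.shape
    psf_padded = np.zeros_like(image, dtype=float)
    psf_padded[:kh, :kw] = psf
    # Center the kernel at the origin so the filter introduces no shift.
    psf_padded = np.roll(psf_padded, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_padded)
    G = np.conj(H) / (np.abs(H) ** 2 + snr)      # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))
```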
- A learning-based view extrapolation method for axial super-resolution [52.748944517480155]
Axial light field resolution refers to the ability to distinguish features at different depths by refocusing.
We propose a learning-based method to extrapolate novel views from axial volumes of sheared epipolar plane images.
arXiv Detail & Related papers (2021-03-11T07:22:13Z)
- Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics [72.9038524082252]
We propose a compact single-shot monocular hyperspectral-depth (HS-D) imaging method.
Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum.
To facilitate learning the DOE, we present a first HS-D dataset by building a benchtop HS-D imager.
arXiv Detail & Related papers (2020-09-01T14:19:35Z)
- Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion [51.19260542887099]
We show that self-supervision can be used to learn accurate depth and ego-motion estimation without prior knowledge of the camera model.
Inspired by the geometric model of Grossberg and Nayar, we introduce Neural Ray Surfaces (NRS), convolutional networks that represent pixel-wise projection rays.
We demonstrate the use of NRS for self-supervised learning of visual odometry and depth estimation from raw videos obtained using a wide variety of camera systems.
arXiv Detail & Related papers (2020-08-15T02:29:13Z)
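In NRS the per-pixel rays are predicted by a convolutional network; this NumPy sketch shows only the generic geometric step that such a representation enables, lifting per-pixel depths along per-pixel rays to 3D points without assuming any parametric camera model. Array layouts are illustrative.
```python
import numpy as np

def lift_to_3d(ray_dirs, depths, ray_origins=None):
    """Unproject per-pixel depths along a predicted ray surface.
    ray_dirs: (H, W, 3) ray directions, depths: (H, W) depth estimates."""
    dirs = ray_dirs / np.linalg.norm(ray_dirs, axis=-1, keepdims=True)
    points = dirs * depths[..., None]              # scale each unit ray
    if ray_origins is not None:
        points = points + ray_origins              # non-central camera case
    return points
```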
- DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems [91.45207885902786]
We propose a novel end-to-end trainable model named DeProCams to learn the photometric and geometric mappings of ProCams.
DeProCams explicitly decomposes the projector-camera image mappings into three subprocesses: shading attributes estimation, rough direct light estimation and photorealistic neural rendering.
In our experiments, DeProCams shows clear advantages over prior art, achieving promising quality while being fully differentiable.
arXiv Detail & Related papers (2020-03-06T05:49:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.