Ray Tracing-Guided Design of Plenoptic Cameras
- URL: http://arxiv.org/abs/2203.04660v1
- Date: Wed, 9 Mar 2022 11:57:00 GMT
- Title: Ray Tracing-Guided Design of Plenoptic Cameras
- Authors: Tim Michels and Reinhard Koch
- Abstract summary: The design of a plenoptic camera requires the combination of two dissimilar optical systems.
We present a method to calculate the remaining aperture, sensor and microlens array parameters under different sets of constraints.
Our ray tracing-based approach is shown to result in models outperforming their counterparts generated with the commonly used paraxial approximations.
- Score: 1.1421942894219896
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The design of a plenoptic camera requires the combination of two dissimilar
optical systems, namely a main lens and an array of microlenses. And while the
construction process of a conventional camera is mainly concerned with focusing
the image onto a single plane, in the case of plenoptic cameras there can be
additional requirements such as a predefined depth of field or a desired range
of disparities in neighboring microlens images. Due to this complexity, the
manual creation of multiple plenoptic camera setups is often a time-consuming
task. In this work we assume a simulation framework as well as the main lens
data given and present a method to calculate the remaining aperture, sensor and
microlens array parameters under different sets of constraints. Our ray
tracing-based approach is shown to result in models outperforming their
counterparts generated with the commonly used paraxial approximations in terms of
image quality, while still meeting the desired constraints. Both the
implementation and evaluation setup including 30 plenoptic camera designs are
made publicly available.
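For context on the baseline the abstract refers to, the following is a minimal sketch of the commonly used paraxial (thin-lens) layout for a standard, unfocused plenoptic camera: the microlens array (MLA) is placed in the main lens image plane, the sensor one microlens focal length behind it, and the aperture is sized so that the image-side f-numbers of the main lens and the microlenses match. The function names and example values are illustrative assumptions and are not taken from the authors' published implementation.

```python
# Minimal sketch, assuming a paraxial thin-lens model of a standard
# (unfocused) plenoptic camera; this is the kind of first-order baseline
# the paper improves on with ray tracing, not the authors' code.

def thin_lens_image_distance(focal_length, object_distance):
    """Image distance b from the thin-lens equation 1/f = 1/a + 1/b."""
    return 1.0 / (1.0 / focal_length - 1.0 / object_distance)

def paraxial_plenoptic_setup(main_focal_length, focus_distance,
                             microlens_pitch, microlens_focal_length):
    """Place MLA and sensor and size the aperture by f-number matching."""
    # 1) The MLA sits in the main lens image plane of the focus distance.
    mla_distance = thin_lens_image_distance(main_focal_length, focus_distance)
    # 2) The sensor sits one microlens focal length behind the MLA.
    sensor_distance = mla_distance + microlens_focal_length
    # 3) Matching image-side f-numbers (mla_distance / D == f_ml / pitch)
    #    makes neighboring microlens images just touch without overlapping.
    aperture_diameter = mla_distance * microlens_pitch / microlens_focal_length
    return {"mla_distance_mm": mla_distance,
            "sensor_distance_mm": sensor_distance,
            "aperture_diameter_mm": aperture_diameter}

if __name__ == "__main__":
    # Example: 50 mm main lens focused at 2 m, 100 um microlenses, f_ml = 0.4 mm.
    print(paraxial_plenoptic_setup(50.0, 2000.0, 0.1, 0.4))
```

With these example values the sketch places the MLA roughly 51.3 mm behind the main lens and suggests an aperture of about 12.8 mm; the paper's contribution is to refine exactly such first-order results with ray tracing until the stated constraints are met.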
Related papers
- Redundancy-Aware Camera Selection for Indoor Scene Neural Rendering [54.468355408388675]
We build a similarity matrix that incorporates both the spatial diversity of the cameras and the semantic variation of the images.
We apply a diversity-based sampling algorithm to optimize the camera selection.
We also develop a new dataset, IndoorTraj, which includes long and complex camera movements captured by humans in virtual indoor environments.
arXiv Detail & Related papers (2024-09-11T08:36:49Z)
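The selection step summarized in the entry above couples a camera-similarity matrix with a diversity-based sampler. Below is a minimal greedy sketch of such a sampler; the cosine-similarity construction and the greedy rule are assumptions made for illustration, not the algorithm published in that paper.

```python
# Minimal sketch, assuming a precomputed camera-similarity matrix and a
# greedy farthest-point-style selection rule; illustrative only.
import numpy as np

def greedy_diverse_cameras(similarity, k, start=0):
    """Pick k camera indices, each time adding the camera that is least
    similar to every camera already selected (lowest redundancy)."""
    selected = [start]
    while len(selected) < k:
        # Redundancy of a candidate = its highest similarity to the chosen set.
        redundancy = similarity[:, selected].max(axis=1)
        redundancy[selected] = np.inf          # never re-pick a camera
        selected.append(int(np.argmin(redundancy)))
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(20, 8))                       # stand-in per-camera descriptors
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T                                  # cosine-similarity matrix
    print(greedy_diverse_cameras(sim, k=5))
```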
- Mind the Exit Pupil Gap: Revisiting the Intrinsics of a Standard Plenoptic Camera [0.8844616380849608]
We study the role of the main lens exit pupil in standard plenoptic camera (SPC) images.
We deduce the connection between the refocusing distance and the resampling parameter for the decoded light field.
We aim to contribute to a more accurate and nuanced understanding of plenoptic camera optics.
arXiv Detail & Related papers (2024-02-20T10:35:51Z)
- Thin On-Sensor Nanophotonic Array Cameras [36.981384762023794]
We introduce flat nanophotonic computational cameras as an alternative to commodity cameras.
The optical array is embedded on a metasurface that, at 700 nm height, is flat and sits on the sensor cover glass at 2.5 mm focal distance from the sensor.
We reconstruct a megapixel image from our flat imager with a learned probabilistic reconstruction method that employs a generative diffusion model to sample an implicit prior.
arXiv Detail & Related papers (2023-08-05T06:04:07Z)
- Compound eye inspired flat lensless imaging with spatially-coded Voronoi-Fresnel phase [32.914536774672925]
We report a lensless camera with a spatially-coded Voronoi-Fresnel phase, partly inspired by the biological apposition compound eye, to achieve superior image quality.
We demonstrate and verify the imaging performance with a prototype Voronoi-Fresnel lensless camera on a 1.6-megapixel image sensor in various illumination conditions.
arXiv Detail & Related papers (2021-09-28T13:13:58Z)
- Calibrated and Partially Calibrated Semi-Generalized Homographies [65.29477277713205]
We propose the first minimal solutions for estimating the semi-generalized homography given a perspective and a generalized camera.
The proposed solvers are stable and efficient as demonstrated by a number of synthetic and real-world experiments.
arXiv Detail & Related papers (2021-03-11T08:56:24Z)
- FlatNet: Towards Photorealistic Scene Reconstruction from Lensless Measurements [31.353395064815892]
We propose a non-iterative deep learning based reconstruction approach that results in orders of magnitude improvement in image quality for lensless reconstructions.
Our approach, called FlatNet, lays down a framework for reconstructing high-quality photorealistic images from mask-based lensless cameras.
arXiv Detail & Related papers (2020-10-29T09:20:22Z)
- Baseline and Triangulation Geometry in a Standard Plenoptic Camera [6.719751155411075]
We present a geometrical light field model allowing triangulation to be applied to a plenoptic camera.
It is shown that distance estimates from our novel method match those of real objects placed in front of the camera.
arXiv Detail & Related papers (2020-10-09T15:31:14Z)
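The baseline-and-triangulation entry above derives a light-field geometry in which distance follows from a virtual-camera baseline and the observed disparity. As a reminder of the underlying relation, the sketch below evaluates the textbook triangulation formula Z = f * B / d; the idea of a fixed virtual baseline and the example numbers are illustrative assumptions, not the calibrated quantities of that paper.

```python
# Minimal sketch, assuming two rectified virtual views with a known baseline;
# this is the classic relation such light-field models reduce to.

def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Distance Z = f * B / d of a point seen in two rectified views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_mm / disparity_px

if __name__ == "__main__":
    # E.g. virtual views spaced 1.2 mm apart, focal length 2000 px:
    # a 3 px disparity places the point at 800 mm.
    print(depth_from_disparity(2000.0, 1.2, 3.0))
```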
- Correlation Plenoptic Imaging between Arbitrary Planes [52.77024349608834]
We show that the protocol makes it possible to change the focused planes in post-processing and to achieve an unprecedented combination of image resolution and depth of field.
Results lead the way towards the development of compact designs for correlation plenoptic imaging devices based on chaotic light, as well as high-SNR plenoptic imaging devices based on entangled photon illumination.
arXiv Detail & Related papers (2020-07-23T14:26:14Z)
- DeProCams: Simultaneous Relighting, Compensation and Shape Reconstruction for Projector-Camera Systems [91.45207885902786]
We propose a novel end-to-end trainable model named DeProCams to learn the photometric and geometric mappings of ProCams.
DeProCams explicitly decomposes the projector-camera image mappings into three subprocesses: shading attributes estimation, rough direct light estimation and photorealistic neural rendering.
In our experiments, DeProCams shows clear advantages over prior art, delivering promising quality while being fully differentiable.
arXiv Detail & Related papers (2020-03-06T05:49:16Z)
- Redesigning SLAM for Arbitrary Multi-Camera Systems [51.81798192085111]
Adding more cameras to SLAM systems improves robustness and accuracy but complicates the design of the visual front-end significantly.
In this work, we aim at an adaptive SLAM system that works for arbitrary multi-camera setups.
We adapt a state-of-the-art visual-inertial odometry pipeline accordingly, and experimental results show that the modified pipeline can adapt to a wide range of camera setups.
arXiv Detail & Related papers (2020-03-04T11:44:42Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
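The photometric-stereo entry above targets spatially varying isotropic materials, perspective cameras and nearby point lights. For background, the sketch below shows the much simpler Lambertian, distant-light photometric-stereo step that such methods generalize; it is an illustrative simplification, not the paper's robust multi-view algorithm.

```python
# Minimal sketch, assuming Lambertian reflectance and distant directional
# lights; illustrative background only, not the method of the paper above.
import numpy as np

def lambertian_photometric_stereo(intensities, light_dirs):
    """Per-pixel least squares for I = L @ (albedo * normal).

    intensities : (num_lights, num_pixels) measured brightness
    light_dirs  : (num_lights, 3) unit vectors towards the distant lights
    returns (normals of shape (num_pixels, 3), albedo of shape (num_pixels,))
    """
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # g = albedo * n
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-12)).T
    return normals, albedo

if __name__ == "__main__":
    # Toy example: two pixels with albedo 0.7 and known normals, four lights.
    lights = np.array([[0.0, 0.0, 1.0], [0.8, 0.0, 0.6],
                       [0.0, 0.8, 0.6], [-0.8, 0.0, 0.6]])
    true_normals = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]])
    images = lights @ (0.7 * true_normals).T          # Lambertian shading, no shadows
    n_est, rho = lambertian_photometric_stereo(images, lights)
    print(np.round(n_est, 3), np.round(rho, 3))
```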
This list is automatically generated from the titles and abstracts of the papers on this site.