Microlens array grid estimation, light field decoding, and calibration
- URL: http://arxiv.org/abs/1912.13298v1
- Date: Tue, 31 Dec 2019 13:27:13 GMT
- Title: Microlens array grid estimation, light field decoding, and calibration
- Authors: Maximilian Schambach and Fernando Puente León
- Abstract summary: We investigate multiple algorithms for microlens array grid estimation for microlens array-based light field cameras.
Explicitly taking into account natural and mechanical vignetting effects, we propose a new method for microlens array grid estimation.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We quantitatively investigate multiple algorithms for microlens array grid
estimation for microlens array-based light field cameras. Explicitly taking
into account natural and mechanical vignetting effects, we propose a new method
for microlens array grid estimation that outperforms the ones previously
discussed in the literature. To quantify the performance of the algorithms, we
propose an evaluation pipeline utilizing application-specific ray-traced white
images with known microlens positions. Using a large dataset of synthesized
white images, we thoroughly compare the performance of the different estimation
algorithms. As an example, we apply our results to the decoding and calibration
of light fields taken with a Lytro Illum camera. We observe that decoding as
well as calibration benefit from a more accurate, vignetting-aware grid
estimation, especially in peripheral subapertures of the light field.
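
The paper's vignetting-aware estimator is not reproduced here, but the basic task — locating microlens centers in a white image — can be sketched as follows. The radial Gaussian vignetting model, the function name, and all parameters are assumptions for illustration, not the authors' method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def estimate_grid(white_image, spacing_guess, vignetting_sigma=None):
    """Locate microlens center candidates in a white image.

    Toy sketch: optionally divide out an assumed radial Gaussian
    vignetting profile (so peripheral microlenses are not suppressed),
    smooth, and take local maxima as center candidates.
    """
    img = white_image.astype(float)
    if vignetting_sigma is not None:
        # Assumed vignetting model: Gaussian falloff centered on the image.
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        r2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
        img = img / np.exp(-r2 / (2 * vignetting_sigma ** 2))
    img = gaussian_filter(img, sigma=spacing_guess / 4)
    # A pixel is a candidate if it is the maximum of its neighborhood
    # and clearly above the background level.
    peaks = img == maximum_filter(img, size=int(spacing_guess * 0.8))
    peaks &= img > img.mean()
    return np.argwhere(peaks)  # (N, 2) array of (row, col) centers
```

A full pipeline would then fit a regular (possibly rotated) grid to these candidates; the paper evaluates such estimators against ray-traced white images with known ground-truth centers.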
Related papers
- Toward Efficient Visual Gyroscopes: Spherical Moments, Harmonics Filtering, and Masking Techniques for Spherical Camera Applications [83.8743080143778]
A visual gyroscope estimates camera rotation through images.
The integration of omnidirectional cameras, offering a larger field of view compared to traditional RGB cameras, has proven to yield more accurate and robust results.
Here, we address these challenges by introducing a novel visual gyroscope, which combines an Efficient Multi-Mask-Filter Rotation Estimator with learning-based optimization.
arXiv Detail & Related papers (2024-04-02T13:19:06Z)
- SplitNeRF: Split Sum Approximation Neural Field for Joint Geometry, Illumination, and Material Estimation [65.99344783327054]
We present a novel approach for digitizing real-world objects by estimating their geometry, material properties, and lighting.
Our method incorporates into Neural Radiance Field (NeRF) pipelines the split sum approximation used with image-based lighting for real-time physically based rendering.
Our method is capable of attaining state-of-the-art relighting quality after only $\sim 1$ hour of training on a single NVIDIA A100 GPU.
arXiv Detail & Related papers (2023-11-28T10:36:36Z)
- Ray Tracing-Guided Design of Plenoptic Cameras [1.1421942894219896]
The design of a plenoptic camera requires the combination of two dissimilar optical systems.
We present a method to calculate the remaining aperture, sensor and microlens array parameters under different sets of constraints.
Our ray tracing-based approach is shown to result in models outperforming their pendants generated with the commonly used paraxial approximations.
arXiv Detail & Related papers (2022-03-09T11:57:00Z)
- MC-Blur: A Comprehensive Benchmark for Image Deblurring [127.6301230023318]
In most real-world images, blur is caused by different factors, e.g., motion and defocus.
We construct a new large-scale multi-cause image deblurring dataset (called MC-Blur)
Based on the MC-Blur dataset, we conduct extensive benchmarking studies to compare SOTA methods in different scenarios.
arXiv Detail & Related papers (2021-12-01T02:10:42Z)
- Leveraging blur information for plenoptic camera calibration [6.0982543764998995]
This paper presents a novel calibration algorithm for plenoptic cameras, especially the multi-focus configuration.
In the multi-focus configuration, the same part of a scene will demonstrate different amounts of blur according to the micro-lens focal length.
Usually, only micro-images with the smallest amount of blur are used.
We propose to explicitly model the defocus blur in a new camera model with the help of our newly introduced Blur Aware Plenoptic feature.
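
The paper's Blur Aware Plenoptic feature is not reproduced here, but the underlying geometry — why micro-lenses of different focal lengths blur the same scene point by different amounts — follows from the thin-lens model. Function name, parameters, and units below are assumptions for this sketch:

```python
def blur_diameter(f, aperture, sensor_dist, obj_dist):
    """Thin-lens defocus blur circle diameter (illustrative model).

    f: micro-lens focal length; aperture: lens diameter;
    sensor_dist: lens-to-sensor distance; obj_dist: lens-to-object
    distance (all in meters, hypothetical values).
    """
    # In-focus image distance from the thin-lens equation 1/f = 1/v + 1/a.
    v = 1.0 / (1.0 / f - 1.0 / obj_dist)
    # The blur circle grows with the mismatch between the in-focus
    # image plane and the actual sensor plane.
    return aperture * abs(v - sensor_dist) / v
```

With a fixed sensor distance, swapping in a different focal length `f` immediately changes the blur diameter, which is the per-micro-lens variation the calibration model exploits.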
arXiv Detail & Related papers (2021-11-09T16:07:07Z)
- Calibrating LiDAR and Camera using Semantic Mutual information [8.40460868324361]
We propose an algorithm for automatic, targetless, extrinsic calibration of a LiDAR and camera system using semantic information.
We achieve this goal by maximizing mutual information (MI) of semantic information between sensors, leveraging a neural network to estimate semantic mutual information, and matrix exponential for calibration computation.
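
The paper estimates semantic MI with a neural network; as a minimal stand-in, the objective itself can be computed from a joint histogram of two aligned label maps. All names and shapes below are illustrative assumptions:

```python
import numpy as np

def semantic_mi(labels_a, labels_b, n_classes):
    """Mutual information between two aligned per-pixel label maps.

    Toy version of the calibration objective: labels_a are camera-image
    semantics, labels_b the semantics of LiDAR points projected with a
    candidate extrinsic; MI peaks when the two fields line up.
    """
    joint = np.zeros((n_classes, n_classes))
    # Unbuffered accumulation of the joint label histogram.
    np.add.at(joint, (labels_a.ravel(), labels_b.ravel()), 1)
    joint /= joint.sum()
    pa = joint.sum(axis=1, keepdims=True)  # marginal over camera labels
    pb = joint.sum(axis=0, keepdims=True)  # marginal over LiDAR labels
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (pa * pb)[nz])).sum())
```

Calibration would then search over candidate extrinsics (the paper parameterizes the update with a matrix exponential) for the transform that maximizes this score.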
arXiv Detail & Related papers (2021-04-24T21:04:33Z)
- Learned Block Iterative Shrinkage Thresholding Algorithm for Photothermal Super Resolution Imaging [52.42007686600479]
We propose a learned block-sparse optimization approach using an iterative algorithm unfolded into a deep neural network.
We show the benefits of using a learned block iterative shrinkage thresholding algorithm that is able to learn the choice of regularization parameters.
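
The learned, block-sparse network itself is not reproduced here, but the iteration it unfolds is classic ISTA. In the paper the thresholds/regularization are learned per layer; in this minimal reference they are fixed by hand:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=200):
    """Plain ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data term
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Unrolling replaces the fixed `lam / L` threshold (and step size) in each iteration with trainable parameters, one set per network layer.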
arXiv Detail & Related papers (2020-12-07T09:27:16Z)
- The Cube++ Illumination Estimation Dataset [50.58610459038332]
A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors as well as with additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
arXiv Detail & Related papers (2020-11-19T18:50:08Z)
- PlenoptiCam v1.0: A light-field imaging framework [8.467466998915018]
Light-field cameras play a vital role in rich 3-D information retrieval for narrow-range depth sensing applications.
A key obstacle in composing light fields from exposures taken by a plenoptic camera is to computationally calibrate, align, and rearrange the four-dimensional image data.
Several pipelines dedicated to particular plenoptic cameras have been proposed to enhance the overall image quality.
arXiv Detail & Related papers (2020-10-14T09:23:18Z)
- Adaptive LiDAR Sampling and Depth Completion using Ensemble Variance [12.633386045916444]
This work considers the problem of depth completion, with or without image data, where an algorithm may measure the depth of a prescribed limited number of pixels.
The algorithmic challenge is to choose pixel positions strategically and dynamically to maximally reduce overall depth estimation error.
This setting is realized in daytime or nighttime depth completion for autonomous vehicles with a programmable LiDAR.
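
The selection step can be sketched in a few lines: given depth predictions from an ensemble, measure where the models disagree most. This is a toy version of the variance-driven sampling idea; function and variable names are illustrative:

```python
import numpy as np

def next_lidar_samples(ensemble_depths, budget):
    """Pick the pixels a programmable LiDAR should measure next.

    ensemble_depths: (n_models, H, W) depth maps from an ensemble.
    Returns the `budget` (row, col) coordinates with the highest
    predictive variance, i.e. where extra depth measurements are
    expected to reduce the estimation error the most.
    """
    var = ensemble_depths.var(axis=0)               # per-pixel disagreement
    order = np.argsort(var.ravel())[::-1][:budget]  # highest variance first
    return np.stack(np.unravel_index(order, var.shape), axis=1)
```

In the dynamic setting, the ensemble would be re-run after each batch of measurements and the budget re-spent on the remaining most-uncertain pixels.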
arXiv Detail & Related papers (2020-07-27T19:54:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.