Leveraging blur information for plenoptic camera calibration
- URL: http://arxiv.org/abs/2111.05226v1
- Date: Tue, 9 Nov 2021 16:07:07 GMT
- Title: Leveraging blur information for plenoptic camera calibration
- Authors: Mathieu Labussière, Céline Teulière, Frédéric Bernardin, Omar Ait-Aider
- Abstract summary: This paper presents a novel calibration algorithm for plenoptic cameras, especially the multi-focus configuration.
In the multi-focus configuration, the same part of a scene will demonstrate different amounts of blur according to the micro-lens focal length.
Usually, only micro-images with the smallest amount of blur are used.
We propose to explicitly model the defocus blur in a new camera model with the help of our newly introduced Blur Aware Plenoptic feature.
- Score: 6.0982543764998995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel calibration algorithm for plenoptic cameras,
especially the multi-focus configuration, where several types of micro-lenses
are used, using raw images only. Current calibration methods rely on simplified
projection models, use features from reconstructed images, or require separate
calibrations for each type of micro-lens. In the multi-focus configuration, the
same part of a scene will demonstrate different amounts of blur according to
the micro-lens focal length. Usually, only micro-images with the smallest
amount of blur are used. In order to exploit all available data, we propose to
explicitly model the defocus blur in a new camera model with the help of our
newly introduced Blur Aware Plenoptic (BAP) feature. First, it is used in a
pre-calibration step that retrieves initial camera parameters, and second, to
express a new cost function to be minimized in our single optimization process.
Third, it is exploited to calibrate the relative blur between micro-images. It
links the geometric blur, i.e., the blur circle, to the physical blur, i.e.,
the point spread function. Finally, we use the resulting blur profile to
characterize the camera's depth of field. Quantitative evaluations in a
controlled environment on real-world data demonstrate the effectiveness of our
calibrations.
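The abstract links the geometric blur (the blur circle) to the physical blur (the point spread function). A minimal sketch of that relation under a simple thin-lens model is shown below; this is an illustration of the general idea, not the paper's actual camera model, and the parameter values and the proportionality constant `k` are illustrative assumptions.

```python
def blur_circle_diameter(f, aperture, d_sensor, d_object):
    """Geometric blur circle diameter on the sensor for a thin lens.

    f        : focal length (m)
    aperture : lens aperture diameter (m)
    d_sensor : lens-to-sensor distance (m)
    d_object : lens-to-object distance (m)
    """
    # Thin-lens equation: distance at which a point at d_object is in focus
    d_focus = 1.0 / (1.0 / f - 1.0 / d_object)
    # Similar triangles give the size of the defocus spot on the sensor
    return aperture * abs(d_sensor - d_focus) / d_focus

def psf_sigma(blur_diameter, k=0.5):
    """Map the geometric blur circle to a Gaussian PSF spread.

    The constant k relating the two is camera-specific; estimating such a
    link is the role of a relative-blur calibration step.
    """
    return k * blur_diameter

# Example: 4 mm focal length, f/2.8 aperture, object at 1 m
f = 4e-3
c = blur_circle_diameter(f, aperture=f / 2.8, d_sensor=4.2e-3, d_object=1.0)
sigma = psf_sigma(c)
```

In a multi-focus plenoptic camera, micro-lenses with different focal lengths yield different blur diameters for the same scene point, which is the cue the BAP feature exploits.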
Related papers
- LiFCal: Online Light Field Camera Calibration via Bundle Adjustment [38.2887165481751]
LiFCal is an online calibration pipeline for MLA-based light field cameras.
It accurately determines model parameters from a moving camera sequence without precise calibration targets.
It can be applied in a target-free scene, and it is implemented online in a complete and continuous pipeline.
arXiv Detail & Related papers (2024-08-21T15:04:49Z)
- Single-image camera calibration with model-free distortion correction [0.0]
This paper proposes a method for estimating the complete set of calibration parameters from a single image of a planar speckle pattern covering the entire sensor.
The correspondence between image points and physical points on the calibration target is obtained using Digital Image Correlation.
At the end of the procedure, a dense and uniform model-free distortion map is obtained over the entire image.
arXiv Detail & Related papers (2024-03-02T16:51:35Z)
- Fearless Luminance Adaptation: A Macro-Micro-Hierarchical Transformer for Exposure Correction [65.5397271106534]
A single neural network struggles to handle all exposure problems.
In particular, convolutions hinder the ability to restore faithful color or details in extremely over-/under-exposed regions.
We propose a Macro-Micro-Hierarchical transformer, which consists of a macro attention to capture long-range dependencies, a micro attention to extract local features, and a hierarchical structure for coarse-to-fine correction.
arXiv Detail & Related papers (2023-09-02T09:07:36Z)
- MC-Blur: A Comprehensive Benchmark for Image Deblurring [127.6301230023318]
In most real-world images, blur is caused by different factors, e.g., motion and defocus.
We construct a new large-scale multi-cause image deblurring dataset, called MC-Blur.
Based on the MC-Blur dataset, we conduct extensive benchmarking studies to compare SOTA methods in different scenarios.
arXiv Detail & Related papers (2021-12-01T02:10:42Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of the 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- Blur Aware Calibration of Multi-Focus Plenoptic Camera [7.57024681220677]
This paper presents a novel calibration algorithm for Multi-Focus Plenoptic Cameras (MFPCs) using raw images only.
Considering blur information, we propose a new Blur Aware Plenoptic (BAP) feature.
The effectiveness of our calibration method is validated by quantitative and qualitative experiments.
arXiv Detail & Related papers (2020-04-16T16:29:34Z)
- Superaccurate Camera Calibration via Inverse Rendering [0.19336815376402716]
We propose a new method for camera calibration using the principle of inverse rendering.
Instead of relying solely on detected feature points, we use an estimate of the internal parameters and the pose of the calibration object to implicitly render a non-photorealistic equivalent of the optical features.
arXiv Detail & Related papers (2020-03-20T10:26:16Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
- DeepFocus: a Few-Shot Microscope Slide Auto-Focus using a Sample Invariant CNN-based Sharpness Function [6.09170287691728]
Autofocus (AF) methods are extensively used in biomicroscopy, for example to acquire timelapses.
Current hardware-based methods require modifying the microscope, while image-based algorithms have limitations of their own.
We propose DeepFocus, an AF method we implemented as a Micro-Manager plugin.
arXiv Detail & Related papers (2020-01-02T23:29:11Z)
- Microlens array grid estimation, light field decoding, and calibration [77.34726150561087]
We investigate multiple algorithms for microlens array grid estimation for microlens array-based light field cameras.
Explicitly taking into account natural and mechanical vignetting effects, we propose a new method for microlens array grid estimation.
arXiv Detail & Related papers (2019-12-31T13:27:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.