Blur Aware Calibration of Multi-Focus Plenoptic Camera
- URL: http://arxiv.org/abs/2004.07745v1
- Date: Thu, 16 Apr 2020 16:29:34 GMT
- Title: Blur Aware Calibration of Multi-Focus Plenoptic Camera
- Authors: Mathieu Labussière, Céline Teulière, Frédéric Bernardin, Omar Ait-Aider
- Abstract summary: This paper presents a novel calibration algorithm for Multi-Focus Plenoptic Cameras (MFPCs) using raw images only.
Considering blur information, we propose a new Blur Aware Plenoptic (BAP) feature.
The effectiveness of our calibration method is validated by quantitative and qualitative experiments.
- Score: 7.57024681220677
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel calibration algorithm for Multi-Focus Plenoptic
Cameras (MFPCs) using raw images only. The design of such cameras is usually
complex and relies on precise placement of optic elements. Several calibration
procedures have been proposed to retrieve the camera parameters, but they rely on
simplified models, on reconstructed images to extract features, or on multiple
calibrations when several types of micro-lenses are used. Considering blur
information, we propose a new Blur Aware Plenoptic (BAP) feature. It is first
exploited in a pre-calibration step that retrieves initial camera parameters,
and secondly to express a new cost function for our single optimization
process. The effectiveness of our calibration method is validated by
quantitative and qualitative experiments.
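As an illustration of the idea (a sketch, not the authors' implementation), a BAP feature can be viewed as a raw-image observation augmented with a blur radius, and the single optimization as a least-squares problem over all such features; `BAPFeature` and `model` below are hypothetical placeholders.

```python
# Illustrative sketch only -- not the paper's implementation.
# A BAP feature couples a micro-image point with its defocus blur radius,
# so one cost function can constrain geometric and blur parameters jointly.
from dataclasses import dataclass
import numpy as np

@dataclass
class BAPFeature:   # hypothetical container
    u: float        # observed x-coordinate in the raw image (pixels)
    v: float        # observed y-coordinate in the raw image (pixels)
    rho: float      # observed blur radius (pixels)

def residuals(params, features, model):
    """Stack geometric and blur residuals for one joint optimization.

    `model(params, f)` is a placeholder predicting (u, v, rho) for
    feature f from the current camera parameters.
    """
    res = []
    for f in features:
        u_hat, v_hat, rho_hat = model(params, f)
        res += [f.u - u_hat, f.v - v_hat, f.rho - rho_hat]
    return np.asarray(res)
```

A least-squares solver (e.g. scipy.optimize.least_squares) would then minimize these residuals over all BAP features in one pass, matching the abstract's "single optimization process".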
Related papers
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable
Rendering and Space Exploration [49.90228618894857]
We introduce EasyHeC, a new approach to hand-eye calibration that is markerless and white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
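A minimal sketch of differentiable rendering-based pose optimization in general (not EasyHeC's actual code); `render_mask` stands in for a differentiable renderer and `observed_mask` for a camera-derived robot mask:

```python
# Schematic sketch of differentiable rendering-based pose optimization,
# in the spirit of (but not identical to) EasyHeC.
import torch

def refine_pose(pose_init, observed_mask, render_mask, steps=200, lr=1e-2):
    pose = pose_init.clone().requires_grad_(True)  # 6-DoF pose parameters
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = render_mask(pose)               # differentiable rendering
        loss = torch.nn.functional.mse_loss(rendered, observed_mask)
        loss.backward()                            # gradients flow through the renderer
        opt.step()
    return pose.detach()
```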
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- Deep Learning for Camera Calibration and Beyond: A Survey [100.75060862015945]
Camera calibration involves estimating camera parameters to infer geometric features from captured sequences.
Recent efforts show that learning-based solutions have the potential to replace the repetitive work of manual calibration.
arXiv Detail & Related papers (2023-03-19T04:00:05Z)
- Online Marker-free Extrinsic Camera Calibration using Person Keypoint Detections [25.393382192511716]
We propose a marker-free online method for the extrinsic calibration of multiple smart edge sensors.
Our method assumes the intrinsic camera parameters to be known and requires priming with a rough initial estimate of the camera poses.
We show that the calibration with our method achieves lower reprojection errors compared to a reference calibration generated by an offline method.
arXiv Detail & Related papers (2022-09-15T15:54:21Z)
- Leveraging blur information for plenoptic camera calibration [6.0982543764998995]
This paper presents a novel calibration algorithm for plenoptic cameras, especially the multi-focus configuration.
In the multi-focus configuration, the same part of a scene exhibits a different amount of blur depending on the micro-lens focal length.
Usually, only micro-images with the smallest amount of blur are used.
We propose to explicitly model the defocus blur in a new camera model with the help of our newly introduced Blur Aware Plenoptic feature.
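For intuition, the standard thin-lens model already predicts how blur varies with the micro-lens focal length; the sketch below uses textbook optics, not the paper's exact camera model:

```python
# Thin-lens defocus blur (standard optics, not the paper's model):
# micro-lenses with different focal lengths blur the same scene depth
# by different amounts, which is exactly what a BAP feature measures.

def blur_radius(f, aperture, sensor_dist, depth):
    """Blur-circle radius on the sensor for a thin lens.

    f           -- (micro-)lens focal length
    aperture    -- lens aperture diameter
    sensor_dist -- lens-to-sensor distance
    depth       -- object distance (depth > f)
    """
    image_dist = 1.0 / (1.0 / f - 1.0 / depth)  # thin-lens equation
    return 0.5 * aperture * abs(image_dist - sensor_dist) / image_dist

# Same depth, two micro-lens focal lengths -> two different blur radii.
print(blur_radius(f=0.0004, aperture=0.0002, sensor_dist=0.0004, depth=1.0))
print(blur_radius(f=0.0005, aperture=0.0002, sensor_dist=0.0004, depth=1.0))
```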
arXiv Detail & Related papers (2021-11-09T16:07:07Z)
- Dynamic Event Camera Calibration [27.852239869987947]
We present the first dynamic event camera calibration algorithm.
It calibrates directly from events captured during relative motion between camera and calibration pattern.
As our results demonstrate, the method is highly convenient and reliably calibrates from data sequences spanning less than 10 seconds.
arXiv Detail & Related papers (2021-07-14T14:52:58Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
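The described pipeline can be sketched as: reconstruct intensity frames from the event stream, then feed them to a standard pattern-based calibrator. `events_to_frames` below is a hypothetical stand-in for the reconstruction network; the OpenCV calls are standard:

```python
# Sketch of reconstruction-based event camera calibration (assumed
# pipeline, not the paper's exact code). Frames are assumed to be
# 8-bit grayscale images of a chessboard pattern.
import cv2
import numpy as np

def calibrate_from_events(events, events_to_frames, board=(9, 6), square=0.025):
    frames = events_to_frames(events)  # hypothetical NN reconstruction
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for frame in frames:
        found, corners = cv2.findChessboardCorners(frame, board)
        if found:                      # keep only frames where the board is detected
            obj_pts.append(objp)
            img_pts.append(corners)
    h, w = frames[0].shape[:2]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, (w, h), None, None)
    return K, dist, rms                # intrinsics, distortion, reprojection error
```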
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
- Zero-Shot Calibration of Fisheye Cameras [0.010956300138340428]
The proposed method estimates camera parameters from the horizontal and vertical field of view information of the camera without any image acquisition.
The method is particularly useful for wide-angle or fisheye cameras that have large image distortion.
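The underlying relation is plain projection geometry: given the field of view and the image width, the focal length follows in closed form. A sketch under the standard pinhole and equidistant fisheye models (not necessarily the paper's exact model):

```python
# Focal length from field of view -- standard projection geometry,
# illustrating image-free ("zero-shot") calibration, not the paper's
# exact procedure.
import math

def focal_pinhole(width_px, hfov_deg):
    """Pinhole model: tan(HFOV/2) = (W/2) / f."""
    return (width_px / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)

def focal_equidistant(width_px, hfov_deg):
    """Equidistant fisheye model: r = f * theta at the image edge."""
    return (width_px / 2.0) / (math.radians(hfov_deg) / 2.0)

print(focal_pinhole(1920, 90))       # ~960 px
print(focal_equidistant(1920, 180))  # ~611 px
```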
arXiv Detail & Related papers (2020-11-30T08:10:24Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
- Superaccurate Camera Calibration via Inverse Rendering [0.19336815376402716]
We propose a new method for camera calibration using the principle of inverse rendering.
Instead of relying solely on detected feature points, we use an estimate of the internal parameters and the pose of the calibration object to implicitly render a non-photorealistic equivalent of the optical features.
arXiv Detail & Related papers (2020-03-20T10:26:16Z)
- Redesigning SLAM for Arbitrary Multi-Camera Systems [51.81798192085111]
Adding more cameras to SLAM systems improves robustness and accuracy but complicates the design of the visual front-end significantly.
In this work, we aim at an adaptive SLAM system that works for arbitrary multi-camera setups.
We adapt a state-of-the-art visual-inertial odometry with these modifications, and experimental results show that the modified pipeline can adapt to a wide range of camera setups.
arXiv Detail & Related papers (2020-03-04T11:44:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.