LiFCal: Online Light Field Camera Calibration via Bundle Adjustment
- URL: http://arxiv.org/abs/2408.11682v1
- Date: Wed, 21 Aug 2024 15:04:49 GMT
- Title: LiFCal: Online Light Field Camera Calibration via Bundle Adjustment
- Authors: Aymeric Fleith, Doaa Ahmed, Daniel Cremers, Niclas Zeller
- Abstract summary: LiFCal is an online calibration pipeline for MLA-based light field cameras.
It accurately determines model parameters from a moving camera sequence without precise calibration targets.
It can be applied in a target-free scene, and it is implemented online in a complete and continuous pipeline.
- Score: 38.2887165481751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose LiFCal, a novel geometric online calibration pipeline for MLA-based light field cameras. LiFCal accurately determines model parameters from a moving camera sequence without precise calibration targets, integrating arbitrary metric scaling constraints. It optimizes intrinsic parameters of the light field camera model, the 3D coordinates of a sparse set of scene points and camera poses in a single bundle adjustment defined directly on micro image points. We show that LiFCal can reliably and repeatably calibrate a focused plenoptic camera using different input sequences, providing intrinsic camera parameters extremely close to state-of-the-art methods, while offering two main advantages: it can be applied in a target-free scene, and it is implemented online in a complete and continuous pipeline. Furthermore, we demonstrate the quality of the obtained camera parameters in downstream tasks like depth estimation and SLAM. Webpage: https://lifcal.github.io/
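The abstract describes a single bundle adjustment that jointly optimizes intrinsics, sparse 3D scene points, and camera poses against reprojection residuals. The following is a minimal sketch of that joint-optimization idea using SciPy and a simplified pinhole model with a single focal length; it is not LiFCal's light field camera model or micro-image formulation, and all names and the synthetic setup are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def rotate(points, rvec):
    """Rotate (N, 3) points by a Rodrigues rotation vector."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return points
    k = rvec / theta
    cross = np.cross(k, points)
    dot = points @ k
    return (points * np.cos(theta)
            + cross * np.sin(theta)
            + np.outer(dot, k) * (1 - np.cos(theta)))

def project(points, rvec, tvec, f):
    """Project world points through a pinhole camera with focal length f."""
    p_cam = rotate(points, rvec) + tvec
    return f * p_cam[:, :2] / p_cam[:, 2:3]

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, obs):
    """Reprojection residuals over ALL unknowns: f, poses, and 3D points."""
    f = params[0]
    cams = params[1:1 + 6 * n_cams].reshape(n_cams, 6)   # [rvec | tvec] per camera
    pts = params[1 + 6 * n_cams:].reshape(n_pts, 3)
    proj = np.vstack([
        project(pts[j:j + 1], cams[i, :3], cams[i, 3:], f)
        for i, j in zip(cam_idx, pt_idx)
    ])
    return (proj - obs).ravel()

# Synthetic toy problem: 2 cameras observing 20 points.
rng = np.random.default_rng(0)
n_cams, n_pts = 2, 20
f_true = 100.0
pts_true = rng.uniform(-1, 1, (n_pts, 3))
cams_true = np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 5.0],
                      [0.05, 0.02, 0.0, 0.5, 0.0, 5.0]])
cam_idx = np.repeat(np.arange(n_cams), n_pts)
pt_idx = np.tile(np.arange(n_pts), n_cams)
obs = np.vstack([project(pts_true[j:j + 1], cams_true[i, :3], cams_true[i, 3:], f_true)
                 for i, j in zip(cam_idx, pt_idx)])

# Start from a perturbed guess and jointly refine intrinsics, poses, and points.
x0 = np.concatenate([[f_true * 1.1],
                     (cams_true + rng.normal(0, 0.01, cams_true.shape)).ravel(),
                     (pts_true + rng.normal(0, 0.05, pts_true.shape)).ravel()])
sol = least_squares(residuals, x0, args=(n_cams, n_pts, cam_idx, pt_idx, obs))
rms = np.sqrt(np.mean(sol.fun ** 2))
print(f"final RMS reprojection error: {rms:.2e}")
```

Because the residuals are defined directly on image observations, no calibration target is needed: any sufficiently textured scene provides the constraints, which is the property the paper exploits (with micro image points in place of this sketch's pinhole projections).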
Related papers
- Camera Calibration using a Collimator System [5.138012450471437]
This paper introduces a novel camera calibration method using a collimator system.
Based on the optical geometry of the collimator system, we prove that the relative motion between the target and camera conforms to the spherical motion model.
A closed-form solver for multiple views and a minimal solver for two views are proposed for camera calibration.
arXiv Detail & Related papers (2024-09-30T07:40:41Z)
- SceneCalib: Automatic Targetless Calibration of Cameras and Lidars in Autonomous Driving [10.517099201352414]
SceneCalib is a novel method for simultaneous self-calibration of extrinsic and intrinsic parameters in a system containing multiple cameras and a lidar sensor.
We resolve issues with a fully automatic method that requires no explicit correspondences between camera images and lidar point clouds.
arXiv Detail & Related papers (2023-04-11T23:02:16Z)
- Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, a fourth order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions.
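A pinhole model with fourth-order radial distortion, as mentioned above, is a standard parameterization; the small sketch below shows the usual form, in which normalized image coordinates are scaled by a polynomial in the squared radius. The function name and coefficients are illustrative, not taken from the paper, and the paper additionally learns a generic noise model that this sketch omits.

```python
def distort_radial(x, y, k1, k2):
    """Apply fourth-order radial distortion to normalized image
    coordinates (x, y): scale by 1 + k1*r^2 + k2*r^4."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

The principal point is undistorted by construction (r = 0), and setting k1 = k2 = 0 recovers the plain pinhole model.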
arXiv Detail & Related papers (2021-08-31T13:34:28Z)
- Calibration and Auto-Refinement for Light Field Cameras [13.76996108304056]
This paper presents an approach for light field camera calibration and rectification, based on pairwise pattern-based parameters extraction.
It is followed by a correspondence-based algorithm for camera parameters refinement from arbitrary scenes using the triangulation filter and nonlinear optimization.
arXiv Detail & Related papers (2021-06-11T05:49:14Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- FLEX: Parameter-free Multi-view 3D Human Motion Reconstruction [70.09086274139504]
Multi-view algorithms strongly depend on camera parameters, in particular, the relative positions among the cameras.
We introduce FLEX, an end-to-end parameter-free multi-view model.
We demonstrate results on the Human3.6M and KTH Multi-view Football II datasets.
arXiv Detail & Related papers (2021-05-05T09:08:12Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
- Superaccurate Camera Calibration via Inverse Rendering [0.19336815376402716]
We propose a new method for camera calibration using the principle of inverse rendering.
Instead of relying solely on detected feature points, we use an estimate of the internal parameters and the pose of the calibration object to implicitly render a non-photorealistic equivalent of the optical features.
arXiv Detail & Related papers (2020-03-20T10:26:16Z)
- Multi-View Photometric Stereo: A Robust Solution and Benchmark Dataset for Spatially Varying Isotropic Materials [65.95928593628128]
We present a method to capture both 3D shape and spatially varying reflectance with a multi-view photometric stereo technique.
Our algorithm is suitable for perspective cameras and nearby point light sources.
arXiv Detail & Related papers (2020-01-18T12:26:22Z)
- A Two-step Calibration Method for Unfocused Light Field Camera Based on Projection Model Analysis [8.959346460518226]
The proposed method is able to reuse traditional camera calibration methods for the direction parameter set.
The accuracy and robustness of the proposed method outperforms its counterparts under various benchmark criteria.
arXiv Detail & Related papers (2020-01-11T10:37:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.