Creating Realistic Ground Truth Data for the Evaluation of Calibration
Methods for Plenoptic and Conventional Cameras
- URL: http://arxiv.org/abs/2203.04661v1
- Date: Wed, 9 Mar 2022 11:58:00 GMT
- Authors: Tim Michels, Arne Petersen and Reinhard Koch
- Abstract summary: A meaningful evaluation of camera calibration methods relies on the availability of realistic synthetic data.
We propose a method based on backward ray tracing to create realistic ground truth data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Camera calibration methods usually consist of capturing images of known
calibration patterns and using the detected correspondences to optimize the
parameters of the assumed camera model. A meaningful evaluation of these
methods relies on the availability of realistic synthetic data. In previous
works concerned with conventional cameras, the synthetic data was mainly created
by rendering perfect images with a pinhole camera and subsequently adding
distortions and aberrations to the renderings and correspondences according to
the assumed camera model. This method can bias the evaluation since not every
camera perfectly complies with an assumed model. Furthermore, in the field of
plenoptic camera calibration there is no synthetic ground truth data available
at all. We address these problems by proposing a method based on backward ray
tracing to create realistic ground truth data that can be used for an unbiased
evaluation of calibration methods for both types of cameras.
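The backward ray tracing idea can be sketched with an idealized thin lens (a deliberate simplification; the paper models full lens systems and, for plenoptic cameras, microlens arrays): rays are traced from sensor pixels through the optics to the calibration pattern, yielding exact pixel-to-pattern correspondences without assuming any parametric camera model. All names and numbers below are illustrative, not the authors' implementation.

```python
import numpy as np

def conjugate_point(p_sensor, f):
    # Thin-lens imaging: a sensor point at distance a behind the lens
    # (z < 0) maps to a conjugate point at distance b in front of it,
    # with 1/f = 1/a + 1/b; lateral coordinates scale by -b/a.
    a = -p_sensor[2]               # sensor-to-lens distance (a > 0)
    b = 1.0 / (1.0 / f - 1.0 / a)  # lens-to-conjugate distance
    m = -b / a                     # lateral magnification
    return np.array([m * p_sensor[0], m * p_sensor[1], b])

def trace_pixel(p_sensor, lens_sample, f, z_target):
    # Backward ray: start at a sampled point on the lens aperture, head
    # toward the pixel's conjugate point, and intersect the plane that
    # holds the calibration pattern.
    q = conjugate_point(p_sensor, f)
    d = q - lens_sample
    t = (z_target - lens_sample[2]) / d[2]
    return lens_sample + t * d     # ground-truth hit on the pattern

# Example: a pixel 0.1 mm off-axis, sensor 12 mm behind an f = 10 mm
# lens, pattern plane 600 mm in front of the lens, chief ray through
# the lens center.
hit = trace_pixel(np.array([0.1, 0.0, -12.0]),
                  np.zeros(3), f=10.0, z_target=600.0)
```

Averaging such hits over many aperture samples per pixel would also capture defocus, which is one reason ray tracing avoids the bias of rendering perfect pinhole images and distorting them afterwards.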
Related papers
- Single-image camera calibration with model-free distortion correction
This paper proposes a method for estimating the complete set of calibration parameters from a single image of a planar speckle pattern covering the entire sensor.
The correspondence between image points and physical points on the calibration target is obtained using Digital Image Correlation.
At the end of the procedure, a dense and uniform model-free distortion map is obtained over the entire image.
arXiv Detail & Related papers (2024-03-02T16:51:35Z)
- Cameras as Rays: Pose Estimation via Ray Diffusion
Estimating camera poses is a fundamental task for 3D reconstruction and remains challenging given sparsely sampled views.
We propose a distributed representation of camera pose that treats a camera as a bundle of rays.
Our proposed methods, both regression- and diffusion-based, demonstrate state-of-the-art performance on camera pose estimation on CO3D.
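The ray-bundle view of a camera can be illustrated with a standard pinhole unprojection into Plücker coordinates, one common ray parameterization (the paper's exact distributed representation may differ); the function below is a hypothetical sketch.

```python
import numpy as np

def pixel_rays_plucker(K, R, t, pixels):
    # Unproject pixels to world-space rays and encode each one as
    # Plücker coordinates (d, m) with moment m = c x d, where c is the
    # camera center. A camera then *is* its bundle of per-pixel rays.
    c = -R.T @ t                        # camera center in world frame
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])
    d = (R.T @ np.linalg.inv(K) @ uv1.T).T
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    m = np.cross(np.broadcast_to(c, d.shape), d)
    return d, m

# Identity pose, principal point at (320, 240): the central pixel's ray
# points straight down the optical axis and all moments vanish.
K = np.diag([500.0, 500.0, 1.0]); K[0, 2] = 320.0; K[1, 2] = 240.0
d, m = pixel_rays_plucker(K, np.eye(3), np.zeros(3),
                          np.array([[320.0, 240.0]]))
```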
arXiv Detail & Related papers (2024-02-22T18:59:56Z) - The Drunkard's Odometry: Estimating Camera Motion in Deforming Scenes [79.00228778543553]
This dataset is the first large set of exploratory camera trajectories with ground truth inside 3D scenes.
Simulations in realistic 3D buildings lets us obtain a vast amount of data and ground truth labels.
We present a novel deformable odometry method, dubbed the Drunkard's Odometry, which decomposes optical flow estimates into rigid-body camera motion.
arXiv Detail & Related papers (2023-06-29T13:09:31Z) - Deep Learning for Camera Calibration and Beyond: A Survey [100.75060862015945]
Camera calibration involves estimating camera parameters to infer geometric features from captured sequences.
Recent efforts show that learning-based solutions have the potential to replace repetitive manual calibration work.
arXiv Detail & Related papers (2023-03-19T04:00:05Z) - Self-Supervised Camera Self-Calibration from Video [34.35533943247917]
We propose a learning algorithm to regress per-sequence calibration parameters using an efficient family of general camera models.
Our procedure achieves self-calibration results with sub-pixel reprojection error, outperforming other learning-based methods.
arXiv Detail & Related papers (2021-12-06T19:42:05Z) - Rethinking Generic Camera Models for Deep Single Image Camera
Calibration to Recover Rotation and Fisheye Distortion [8.877834897951578]
We propose a generic camera model that has the potential to address various types of distortion.
Our proposed method outperformed conventional methods on two largescale datasets and images captured by off-the-shelf fisheye cameras.
arXiv Detail & Related papers (2021-11-25T05:58:23Z) - How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z) - Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor
Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z) - Zero-Shot Calibration of Fisheye Cameras [0.010956300138340428]
The proposed method estimates camera parameters from the horizontal and vertical field of view information of the camera without any image acquisition.
The method is particularly useful for wide-angle or fisheye cameras that have large image distortion.
arXiv Detail & Related papers (2020-11-30T08:10:24Z) - Wide-angle Image Rectification: A Survey [86.36118799330802]
Wide-angle images contain distortions that violate the assumptions underlying pinhole camera models.
Image rectification, which aims to correct these distortions, can solve these problems.
We present a detailed description and discussion of the camera models used in different approaches.
Next, we review both traditional geometry-based image rectification methods and deep learning-based methods.
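As a minimal sketch of the kind of distortion such rectification methods must undo, the Brown-Conrady radial model displaces the ideal pinhole projection as a function of distance from the image center; the coefficients below are illustrative, not from any cited paper.

```python
import numpy as np

def radial_distort(xy, k1, k2):
    # Brown-Conrady radial model: a pinhole projection (x, y) on the
    # normalized image plane is displaced to (x, y) * L(r), where
    # L(r) = 1 + k1*r^2 + k2*r^4. Wide-angle lenses have large |k1|,
    # so straight world lines project to curves, which breaks the
    # pinhole assumption that rectification aims to restore.
    r2 = np.sum(xy**2, axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2)

# A point 0.5 units off-axis under barrel distortion (k1 < 0) is
# pulled toward the image center.
pt = radial_distort(np.array([[0.5, 0.0]]), k1=-0.3, k2=0.0)
```

Rectification inverts this mapping (typically numerically, since L(r) has no closed-form inverse), after which pinhole-based geometry applies again.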
arXiv Detail & Related papers (2020-10-30T17:28:40Z)
- Superaccurate Camera Calibration via Inverse Rendering
We propose a new method for camera calibration using the principle of inverse rendering.
Instead of relying solely on detected feature points, we use an estimate of the internal parameters and the pose of the calibration object to implicitly render a non-photorealistic equivalent of the optical features.
arXiv Detail & Related papers (2020-03-20T10:26:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.