Multi-camera calibration with pattern rigs, including for non-overlapping cameras: CALICO
- URL: http://arxiv.org/abs/1903.06811v3
- Date: Wed, 27 Mar 2024 20:03:41 GMT
- Title: Multi-camera calibration with pattern rigs, including for non-overlapping cameras: CALICO
- Authors: Amy Tabb, Henry Medeiros, Mitchell J. Feldmann, Thiago T. Santos
- Abstract summary: This paper describes CALICO, a method for multi-camera calibration suitable for challenging contexts.
CALICO is a pattern-based approach in which the multi-camera calibration problem is formulated using rigidity constraints between patterns and cameras.
- Score: 1.8874301050354767
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes CALICO, a method for multi-camera calibration suitable for challenging contexts: stationary and mobile multi-camera systems, cameras without overlapping fields of view, and non-synchronized cameras. Recent approaches are roughly divided into infrastructure- and pattern-based. Infrastructure-based approaches use the scene's features to calibrate, while pattern-based approaches use calibration patterns. Infrastructure-based approaches are not suitable for stationary camera systems, and pattern-based approaches may constrain camera placement because shared fields of view or extremely large patterns are required. CALICO is a pattern-based approach in which the multi-camera calibration problem is formulated using rigidity constraints between patterns and cameras. We use a pattern rig: several patterns rigidly attached to each other or to some structure. We express the calibration problem as algebraic and reprojection error minimization problems. Simulated and real experiments demonstrate the method in a variety of settings. CALICO compared favorably to Kalibr. Mean reconstruction accuracy error was ≤ 0.71 mm for real camera rigs and ≤ 1.11 mm for simulated camera rigs. Code and data releases are available at \cite{tabb_amy_2019_3520866} and https://github.com/amy-tabb/calico.
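As a rough, hypothetical sketch of the formulation the abstract describes (not the released CALICO code), the following Python example jointly refines camera poses, per-frame pattern-rig poses, and fixed pattern-to-rig transforms by minimizing reprojection error; the parameterization, variable names, and synthetic setup are assumptions.

```python
# Hypothetical sketch, not the released CALICO implementation: a joint
# reprojection-error refinement over camera poses, per-frame rig poses, and
# fixed pattern-to-rig transforms (the rigidity constraint).
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as Rot

def transform(pose, pts):
    """Apply a 6-vector pose [rotation vector | translation] to (N, 3) points."""
    return Rot.from_rotvec(pose[:3]).apply(pts) + pose[3:]

def project(K, cam_pose, pts_world):
    """Pinhole projection; cam_pose maps world coordinates into the camera frame."""
    x = transform(cam_pose, pts_world) @ K.T
    return x[:, :2] / x[:, 2:3]

def residuals(params, K, obs, pattern_pts, n_cams, n_frames):
    """Parameter blocks (6 values each): camera poses, rig pose per frame, then one
    pattern-to-rig pose per pattern, shared by every frame in which it is seen."""
    p = params.reshape(-1, 6)
    cam, rig, pat = p[:n_cams], p[n_cams:n_cams + n_frames], p[n_cams + n_frames:]
    errs = []
    for c, f, k, uv in obs:                      # (camera, frame, pattern, 2-D detections)
        pts_world = transform(rig[f], transform(pat[k], pattern_pts[k]))
        errs.append((project(K, cam[c], pts_world) - uv).ravel())
    return np.concatenate(errs)

# Tiny synthetic check: one camera, two frames, two patterns rigidly mounted on a rig.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
pattern_pts = [0.05 * np.array([[i, j, 0.0] for i in range(3) for j in range(3)])] * 2
gt = np.zeros((1 + 2 + 2, 6))         # camera, rig@frame0, rig@frame1, pattern0, pattern1
gt[1, 3:] = [0.0, 0.0, 1.0]           # rig pose at frame 0
gt[2, 3:] = [0.1, 0.0, 1.2]           # rig pose at frame 1
gt[4, 3] = 0.3                        # pattern 1 offset on the rig
obs = [(0, f, k, project(K, gt[0], transform(gt[f + 1], transform(gt[3 + k], pattern_pts[k]))))
       for f in range(2) for k in range(2)]
x0 = gt.ravel() + 0.01 * np.random.default_rng(0).standard_normal(gt.size)
sol = least_squares(residuals, x0, args=(K, obs, pattern_pts, 1, 2))
print("final reprojection RMSE (px):", np.sqrt(np.mean(sol.fun ** 2)))
```

In practice, the initialization for such a refinement would come from the algebraic minimization step mentioned in the abstract, and gauge freedoms would be fixed (for example, by anchoring one camera pose); the sketch glosses over both.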
Related papers
- eWand: A calibration framework for wide baseline frame-based and event-based camera systems [11.735290341808064]
We propose eWand, a new method that uses blinking LEDs inside opaque spheres instead of a printed or displayed pattern.
Our method provides a faster, easier-to-use extrinsic calibration approach that maintains high accuracy for both event- and frame-based cameras.
arXiv Detail & Related papers (2023-09-22T07:51:17Z)
- SceneCalib: Automatic Targetless Calibration of Cameras and Lidars in Autonomous Driving [10.517099201352414]
SceneCalib is a novel method for simultaneous self-calibration of extrinsic and intrinsic parameters in a system containing multiple cameras and a lidar sensor.
We resolve issues with a fully automatic method that requires no explicit correspondences between camera images and lidar point clouds.
arXiv Detail & Related papers (2023-04-11T23:02:16Z)
- Towards Nonlinear-Motion-Aware and Occlusion-Robust Rolling Shutter Correction [54.00007868515432]
Existing methods face challenges in accurately estimating the correction field due to the uniform velocity assumption.
We propose a geometry-based Quadratic Rolling Shutter (QRS) motion solver, which precisely estimates the high-order correction field of individual pixels.
Our method surpasses the state-of-the-art by +4.98, +0.77, and +4.33 of PSNR on Carla-RS, Fastec-RS, and BS-RSC datasets, respectively.
arXiv Detail & Related papers (2023-03-31T15:09:18Z)
- Deep Learning for Camera Calibration and Beyond: A Survey [100.75060862015945]
Camera calibration involves estimating camera parameters to infer geometric features from captured sequences.
Recent efforts show that learning-based solutions have the potential to replace the repetitive work of manual calibration.
arXiv Detail & Related papers (2023-03-19T04:00:05Z)
- Cross-View Cross-Scene Multi-View Crowd Counting [56.83882084112913]
Multi-view crowd counting has been previously proposed to utilize multi-cameras to extend the field-of-view of a single camera.
We propose a cross-view cross-scene (CVCS) multi-view crowd counting paradigm, where the training and testing occur on different scenes with arbitrary camera layouts.
arXiv Detail & Related papers (2022-05-03T15:03:44Z)
- Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, a fourth-order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions (a minimal sketch of the radial-distortion term appears after this list).
arXiv Detail & Related papers (2021-08-31T13:34:28Z)
- Wide-Baseline Multi-Camera Calibration using Person Re-Identification [27.965850489928457]
We address the problem of estimating the 3D pose of a network of cameras for large-environment wide-baseline scenarios.
Treating people in the scene as "keypoints" and associating them across different camera views can be an alternative method for obtaining correspondences.
Our method first employs a re-ID method to associate human bounding boxes across cameras, then converts bounding box correspondences to point correspondences (a relative-pose sketch based on such point correspondences appears after this list).
arXiv Detail & Related papers (2021-04-17T15:09:18Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
- 6D Camera Relocalization in Ambiguous Scenes via Continuous Multimodal Inference [67.70859730448473]
We present a multimodal camera relocalization framework that captures ambiguities and uncertainties.
We predict multiple camera pose hypotheses as well as the respective uncertainty for each prediction.
We introduce a new dataset specifically designed to foster camera localization research in ambiguous environments.
arXiv Detail & Related papers (2020-04-09T20:55:06Z)
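For the Self-Calibrating Neural Radiance Fields entry above, the following minimal numpy sketch shows the standard pinhole-plus-fourth-order radial distortion model named in its summary; the coefficient and intrinsic values are illustrative assumptions, not the paper's learned parameters.

```python
import numpy as np

def distort_radial(xy_norm, k1, k2):
    """Fourth-order radial distortion (terms in r^2 and r^4) applied to
    normalized pinhole coordinates; k1, k2 are the distortion coefficients."""
    r2 = np.sum(xy_norm ** 2, axis=-1, keepdims=True)
    return xy_norm * (1.0 + k1 * r2 + k2 * r2 ** 2)

# Map a normalized point to pixels with illustrative intrinsics and coefficients.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0          # assumed values
xd = distort_radial(np.array([[0.1, -0.2]]), k1=-0.05, k2=0.01)
uv = np.stack([fx * xd[:, 0] + cx, fy * xd[:, 1] + cy], axis=-1)
print(uv)
```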
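For the Wide-Baseline Multi-Camera Calibration using Person Re-Identification entry above, the following hedged OpenCV sketch shows how point correspondences (here synthetic, standing in for correspondences derived from associated person detections) can yield a relative camera pose via a standard essential-matrix estimate; this illustrates the general step, not the authors' pipeline.

```python
# Hypothetical illustration, not the paper's pipeline: relative pose from point
# correspondences via a standard essential-matrix estimate (OpenCV).
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
pts3d = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 8.0], size=(60, 3))   # points seen by both views

# Ground-truth pose of camera 2 (rotation about y, translation along x); world frame = camera 1.
a = 0.2
R_gt = np.array([[np.cos(a), 0.0, np.sin(a)],
                 [0.0, 1.0, 0.0],
                 [-np.sin(a), 0.0, np.cos(a)]])
t_gt = np.array([1.0, 0.0, 0.0])

def to_pixels(P):
    x = P @ K.T
    return x[:, :2] / x[:, 2:3]

pts1 = to_pixels(pts3d)                      # view 1 (camera at the origin)
pts2 = to_pixels(pts3d @ R_gt.T + t_gt)      # view 2

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
# R_est, t_est: rotation and unit-norm translation of camera 2 relative to camera 1.
print(np.round(R_est, 3), t_est.ravel())
```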
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.