Continuous Target-free Extrinsic Calibration of a Multi-Sensor System
from a Sequence of Static Viewpoints
- URL: http://arxiv.org/abs/2207.03785v1
- Date: Fri, 8 Jul 2022 09:36:17 GMT
- Title: Continuous Target-free Extrinsic Calibration of a Multi-Sensor System
from a Sequence of Static Viewpoints
- Authors: Philipp Glira, Christoph Weidinger, Johann Weichselbaum
- Abstract summary: Mobile robotic applications need precise information about the geometric position of the individual sensors on the platform.
Erroneous calibration parameters have a negative impact on typical robotic estimation tasks.
We propose a new method for a continuous estimation of the calibration parameters during operation of the robot.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mobile robotic applications need precise information about the geometric
position of the individual sensors on the platform. This information is given
by the extrinsic calibration parameters which define how the sensor is rotated
and translated with respect to a fixed reference coordinate system. Erroneous
calibration parameters have a negative impact on typical robotic estimation
tasks, e.g. SLAM. In this work we propose a new method for a continuous
estimation of the calibration parameters during operation of the robot. The
parameter estimation is based on the matching of point clouds which are
acquired by the sensors from multiple static viewpoints. Consequently, our
method does not need any special calibration targets and is applicable to any
sensor whose measurements can be converted to point clouds. We demonstrate the
suitability of our method by calibrating a multi-sensor system composed of 2
lidar sensors, 3 cameras, and an imaging radar sensor.
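The estimation principle behind point-cloud-based extrinsic calibration can be illustrated with the classical Kabsch/Procrustes solution, which recovers the rigid rotation and translation between two point clouds once point correspondences are known. The sketch below is a generic illustration, not the authors' pipeline (which must additionally establish correspondences between viewpoints, e.g. via iterative-closest-point matching); all names are illustrative.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid alignment: find R, t such that R @ p + t ~ q for corresponding
    rows of P and Q (both shape (N, 3)). Classical Kabsch/Procrustes solution."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)       # centroids
    H = (P - cP).T @ (Q - cQ)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Demo on synthetic, noise-free correspondences.
rng = np.random.default_rng(0)
P = rng.standard_normal((200, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # 0.3 rad about z
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R_est, t_est = kabsch(P, Q)
```

With exact correspondences the recovered `R_est`, `t_est` match the true transform to numerical precision; in practice the matching step and measurement noise dominate the attainable calibration accuracy.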
Related papers
- Kalib: Markerless Hand-Eye Calibration with Keypoint Tracking [52.4190876409222]
Hand-eye calibration involves estimating the transformation between the camera and the robot.
Recent advancements in deep learning offer markerless techniques, but they present challenges.
We propose Kalib, an automatic and universal markerless hand-eye calibration pipeline.
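For context, the hand-eye problem mentioned above is classically written as AX = XB, where A and B are measured motions in two frames and X is the unknown camera-to-robot transform. Below is a minimal numpy sketch of the standard two-stage (rotation, then translation) solution under the assumption of noise-free motions; it is not Kalib's learning-based pipeline, and the function names are illustrative.

```python
import numpy as np

def rotation_axis(R):
    """Unit rotation axis of R, via its skew-symmetric part
    (rotation angle assumed strictly in (0, pi))."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

def hand_eye(As, Bs):
    """Solve A_i @ X = X @ B_i for the unknown 4x4 transform X.
    Requires >= 2 motion pairs with non-parallel rotation axes."""
    a = np.stack([rotation_axis(A[:3, :3]) for A in As])
    b = np.stack([rotation_axis(B[:3, :3]) for B in Bs])
    # Rotation part: conjugation implies a_i = R_x @ b_i; solve by Kabsch/SVD.
    H = b.T @ a
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    Rx = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Translation part: (R_Ai - I) @ t_x = R_x @ t_Bi - t_Ai, stacked least squares.
    M = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    rhs = np.concatenate([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

This closed-form scheme recovers X exactly from noise-free motions; with real data, the deep-learning approaches discussed above aim to remove the need for markers when generating the motion pairs.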
arXiv Detail & Related papers (2024-08-20T06:03:40Z)
- Joint Spatial-Temporal Calibration for Camera and Global Pose Sensor [0.4143603294943439]
In robotics, motion capture systems have been widely used to measure the accuracy of localization algorithms.
These functionalities require accurate and reliable spatial-temporal calibration parameters between the camera and the global pose sensor.
In this study, we provide two novel solutions to estimate these calibration parameters.
arXiv Detail & Related papers (2024-03-01T20:56:14Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automates the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- SST-Calib: Simultaneous Spatial-Temporal Parameter Calibration between LIDAR and Camera [26.59231069298659]
A segmentation-based framework is proposed to jointly estimate the geometrical and temporal parameters in the calibration of a camera-LIDAR suite.
The proposed algorithm is tested on the KITTI dataset, and the result shows an accurate real-time calibration of both geometric and temporal parameters.
arXiv Detail & Related papers (2022-07-08T06:21:52Z)
- Robot Self-Calibration Using Actuated 3D Sensors [0.0]
This paper treats robot calibration as an offline SLAM problem, where scanning poses are linked to a fixed point in space by a moving kinematic chain.
As such, the presented framework allows robot calibration using nothing but an arbitrary eye-in-hand depth sensor.
A detailed evaluation of the system is shown on a real robot with various attached 3D sensors.
arXiv Detail & Related papers (2022-06-07T16:35:08Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
- Robust calibration of multiparameter sensors via machine learning at the single-photon level [0.0]
We demonstrate the application of a Neural Network based algorithm for the calibration of integrated photonic devices.
We show that a reliable characterization is achievable by carefully selecting an appropriate network training strategy.
arXiv Detail & Related papers (2020-09-15T14:22:47Z)
- Automatic LiDAR Extrinsic Calibration System using Photodetector and Planar Board for Large-scale Applications [110.32028864986918]
This study proposes a new concept of a target board with embedded photodetector arrays, named the PD-target system, to find the precise position of the corresponding laser beams on the target surface.
The experimental evaluation of the proposed system on low-resolution LiDAR showed that the LiDAR offset pose can be estimated within 0.1 degree and 3 mm levels of precision.
arXiv Detail & Related papers (2020-08-24T16:28:40Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
- Deep Soft Procrustes for Markerless Volumetric Sensor Alignment [81.13055566952221]
In this work, we improve markerless data-driven correspondence estimation to achieve more robust multi-sensor spatial alignment.
We incorporate geometric constraints in an end-to-end manner into a typical segmentation based model and bridge the intermediate dense classification task with the targeted pose estimation one.
Our model is experimentally shown to achieve similar results with marker-based methods and outperform the markerless ones, while also being robust to the pose variations of the calibration structure.
arXiv Detail & Related papers (2020-03-23T10:51:32Z)
- Spatiotemporal Camera-LiDAR Calibration: A Targetless and Structureless Approach [32.15405927679048]
We propose a targetless and structureless camera-LiDAR calibration method.
Our method combines a closed-form solution with a structureless bundle adjustment, in which the coarse-to-fine approach does not require an initial guess on the temporal parameters.
We demonstrate the accuracy and robustness of the proposed method through both simulation and real data experiments.
arXiv Detail & Related papers (2020-01-17T07:25:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.