Joint Spatial-Temporal Calibration for Camera and Global Pose Sensor
- URL: http://arxiv.org/abs/2403.00976v1
- Date: Fri, 1 Mar 2024 20:56:14 GMT
- Title: Joint Spatial-Temporal Calibration for Camera and Global Pose Sensor
- Authors: Junlin Song, Antoine Richard, Miguel Olivares-Mendez
- Abstract summary: In robotics, motion capture systems have been widely used to measure the accuracy of localization algorithms.
These functionalities require having accurate and reliable spatial-temporal calibration parameters between the camera and the global pose sensor.
In this study, we provide two novel solutions to estimate these calibration parameters.
- Score: 0.4143603294943439
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In robotics, motion capture systems have been widely used to measure the
accuracy of localization algorithms. Moreover, this infrastructure can also be
used for other computer vision tasks, such as the evaluation of Visual
(-Inertial) SLAM dynamic initialization, multi-object tracking, or automatic
annotation. Yet, to work optimally, these functionalities require having
accurate and reliable spatial-temporal calibration parameters between the
camera and the global pose sensor. In this study, we provide two novel
solutions to estimate these calibration parameters. Firstly, we design an
offline target-based method with high accuracy and consistency.
Spatial-temporal parameters, camera intrinsic, and trajectory are optimized
simultaneously. Then, we propose an online target-less method, eliminating the
need for a calibration target and enabling the estimation of time-varying
spatial-temporal parameters. Additionally, we perform detailed observability
analysis for the target-less method. Our theoretical findings regarding
observability are validated by simulation experiments and provide explainable
guidelines for calibration. Finally, the accuracy and consistency of the two
proposed methods are evaluated on hand-held real-world datasets where
traditional hand-eye calibration methods do not work.
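As a rough illustration of the temporal half of this problem, the sketch below initializes a constant camera-to-mocap time offset by cross-correlating the angular-speed profiles of the two pose streams. This is a common initialization heuristic, not the paper's joint spatial-temporal estimator, and all names in it are hypothetical.

```python
import numpy as np

def angular_speed(quats, times):
    """Approximate angular speed from a unit-quaternion stream quats (N, 4)."""
    # Relative rotation angle between consecutive samples:
    # theta_k = 2 * arccos(|<q_k, q_{k+1}>|)
    dots = np.abs(np.sum(quats[:-1] * quats[1:], axis=1)).clip(0.0, 1.0)
    return 2.0 * np.arccos(dots) / np.diff(times)

def estimate_time_offset(q_cam, t_cam, q_mocap, t_mocap, dt=0.005, max_shift=0.5):
    """Grid-search the shift (seconds) that best aligns the two speed signals."""
    # Resample both angular-speed signals onto a common uniform clock.
    t0, t1 = max(t_cam[0], t_mocap[0]), min(t_cam[-1], t_mocap[-1])
    grid = np.arange(t0, t1, dt)
    w_cam = np.interp(grid, t_cam[1:], angular_speed(q_cam, t_cam))
    w_moc = np.interp(grid, t_mocap[1:], angular_speed(q_mocap, t_mocap))
    w_cam, w_moc = w_cam - w_cam.mean(), w_moc - w_moc.mean()
    # Score every candidate shift by the correlation of the aligned signals.
    shifts = np.arange(-max_shift, max_shift, dt)
    scores = [np.dot(w_cam, np.interp(grid + s, grid, w_moc)) for s in shifts]
    return shifts[int(np.argmax(scores))]
```

The returned shift can then seed a joint refinement of the spatial and temporal parameters.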
Related papers
- Kalib: Markerless Hand-Eye Calibration with Keypoint Tracking [52.4190876409222]
Hand-eye calibration involves estimating the transformation between the camera and the robot.
Recent advancements in deep learning offer markerless techniques, but they present challenges.
We propose Kalib, an automatic and universal markerless hand-eye calibration pipeline.
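As a generic illustration of the transform-estimation step in keypoint-based hand-eye calibration (not Kalib's actual pipeline), the sketch below recovers a rigid camera-to-robot transform from 3D keypoint correspondences with the Kabsch/Umeyama algorithm:

```python
import numpy as np

def rigid_transform(P_cam, P_robot):
    """Least-squares R, t with P_robot ~= R @ P_cam + t; both inputs (N, 3)."""
    mu_c, mu_r = P_cam.mean(axis=0), P_robot.mean(axis=0)
    H = (P_cam - mu_c).T @ (P_robot - mu_r)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so the result is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_r - R @ mu_c
    return R, t
```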
arXiv Detail & Related papers (2024-08-20T06:03:40Z)
- P2O-Calib: Camera-LiDAR Calibration Using Point-Pair Spatial Occlusion Relationship [1.6921147361216515]
We propose a novel target-less calibration approach based on 2D-3D edge point extraction using the occlusion relationship in 3D space.
Our method achieves low error and high robustness, which can benefit practical applications that rely on high-quality camera-LiDAR calibration.
arXiv Detail & Related papers (2023-11-04T14:32:55Z)
- View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z)
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance on synthetic and real-world datasets.
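EasyHeC optimizes the pose through a differentiable renderer; the self-contained sketch below substitutes a reprojection loss on known 3D points for the rendering loss, to show the same gradient-based pose-refinement pattern. All tensors and names are hypothetical.

```python
import torch

def rodrigues(w):
    """Axis-angle vector (3,) -> rotation matrix (3, 3), differentiable."""
    theta = w.norm() + 1e-12
    k = w / theta
    zero = torch.zeros((), dtype=w.dtype)
    K = torch.stack([
        torch.stack([zero, -k[2], k[1]]),
        torch.stack([k[2], zero, -k[0]]),
        torch.stack([-k[1], k[0], zero]),
    ])
    return torch.eye(3, dtype=w.dtype) + torch.sin(theta) * K \
        + (1.0 - torch.cos(theta)) * (K @ K)

def refine_pose(X, uv, K_intr, iters=200):
    """Gradient-descend a pose to minimize reprojection error.

    X: (N, 3) points, uv: (N, 2) pixel observations, K_intr: (3, 3) intrinsics.
    """
    # Small nonzero init keeps the gradient of w.norm() well-defined.
    w = torch.full((3,), 1e-3).requires_grad_()   # rotation (axis-angle)
    t = torch.zeros(3, requires_grad=True)        # translation
    opt = torch.optim.Adam([w, t], lr=1e-2)
    for _ in range(iters):
        Xc = X @ rodrigues(w).T + t               # points in camera frame
        proj = Xc @ K_intr.T
        loss = ((proj[:, :2] / proj[:, 2:3] - uv) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach(), t.detach()
```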
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- Continuous Target-free Extrinsic Calibration of a Multi-Sensor System from a Sequence of Static Viewpoints [0.0]
Mobile robotic applications need precise information about the geometric position of the individual sensors on the platform.
Erroneous calibration parameters have a negative impact on typical robotic estimation tasks.
We propose a new method for a continuous estimation of the calibration parameters during operation of the robot.
arXiv Detail & Related papers (2022-07-08T09:36:17Z)
- Unified Data Collection for Visual-Inertial Calibration via Deep Reinforcement Learning [24.999540933593273]
This work presents a novel formulation to learn a motion policy to be executed on a robot arm for automatic data collection.
Our approach models the calibration process compactly using model-free deep reinforcement learning.
In simulation we are able to perform calibrations 10 times faster than hand-crafted policies, which transfers to a real-world speed up of 3 times over a human expert.
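A one-step (bandit-style) simplification of this idea might look like the REINFORCE sketch below, where a toy environment and reward stand in for the real calibration information gain; it is schematic, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N_MOTIONS, DIM, LR = 8, 4, 0.1
theta = np.zeros((DIM, N_MOTIONS))            # linear softmax policy weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def toy_reward(state, action):
    # Stand-in for the information gain of executing motion `action`.
    return float(state[action % DIM]) - 0.1 * action / N_MOTIONS

for episode in range(2000):
    state = rng.random(DIM)                   # toy "calibration state"
    probs = softmax(theta.T @ state)
    action = rng.choice(N_MOTIONS, p=probs)
    reward = toy_reward(state, action)
    # d log pi(a|s) / d theta = outer(s, onehot(a) - pi)
    grad_logp = -np.outer(state, probs)
    grad_logp[:, action] += state
    theta += LR * reward * grad_logp          # REINFORCE update
```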
arXiv Detail & Related papers (2021-09-30T10:03:56Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes it more efficient.
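A minimal sketch of the separability idea, assuming an input tensor with two image axes (H, W) and two observation-map axes (h, w): the 4D convolution is factored into a pair of 2D convolutions applied in sequence. Module names and shapes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class Separable4dConv(nn.Module):
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.spatial = nn.Conv2d(c_in, c_out, k, padding=k // 2)       # over (H, W)
        self.photometric = nn.Conv2d(c_out, c_out, k, padding=k // 2)  # over (h, w)

    def forward(self, x):                     # x: (B, C, H, W, h, w)
        B, C, H, W, h, w = x.shape
        # Fold the photometric axes into the batch, convolve over (H, W).
        y = x.permute(0, 4, 5, 1, 2, 3).reshape(B * h * w, C, H, W)
        y = self.spatial(y)
        Co = y.shape[1]
        # Unfold, fold the spatial axes into the batch, convolve over (h, w).
        y = y.reshape(B, h, w, Co, H, W).permute(0, 4, 5, 3, 1, 2)
        y = self.photometric(y.reshape(B * H * W, Co, h, w))
        return y.reshape(B, H, W, Co, h, w).permute(0, 3, 1, 2, 4, 5)
```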
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
- Pushing the Envelope of Rotation Averaging for Visual SLAM [69.7375052440794]
We propose a novel optimization backbone for visual SLAM systems.
We leverage rotation averaging to improve the accuracy, efficiency and robustness of conventional monocular SLAM systems.
Our approach is up to 10x faster, with comparable accuracy, than the state of the art on public benchmarks.
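The basic building block of such a backbone is single rotation averaging; a minimal sketch under the chordal L2 metric projects the Frobenius mean of the input rotations back onto SO(3) with an SVD:

```python
import numpy as np

def chordal_mean(rotations):
    """rotations: (N, 3, 3) rotation matrices -> their chordal L2 mean."""
    M = rotations.mean(axis=0)                # arithmetic mean, not in SO(3)
    U, _, Vt = np.linalg.svd(M)
    # Nearest proper rotation to M in Frobenius norm (reflection-corrected).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt
```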
arXiv Detail & Related papers (2020-11-02T18:02:26Z)
- Spatiotemporal Camera-LiDAR Calibration: A Targetless and Structureless Approach [32.15405927679048]
We propose a targetless and structureless camera-LiDAR calibration method.
Our method combines a closed-form solution with a structureless bundle adjustment, where the coarse-to-fine approach does not require an initial guess for the temporal parameters.
We demonstrate the accuracy and robustness of the proposed method through both simulation and real data experiments.
arXiv Detail & Related papers (2020-01-17T07:25:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.