Learning Trajectories for Visual-Inertial System Calibration via
Model-based Heuristic Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2011.02574v1
- Date: Wed, 4 Nov 2020 23:20:15 GMT
- Title: Learning Trajectories for Visual-Inertial System Calibration via
Model-based Heuristic Deep Reinforcement Learning
- Authors: Le Chen, Yunke Ao, Florian Tschopp, Andrei Cramariuc, Michel Breyer,
Jen Jen Chung, Roland Siegwart, Cesar Cadena
- Abstract summary: We present a novel approach to obtain favorable trajectories for visual-inertial system calibration using model-based deep reinforcement learning.
Our key contribution is to model the calibration process as a Markov decision process and then use model-based deep reinforcement learning with particle swarm optimization to establish a sequence of calibration trajectories to be performed by a robot arm.
- Score: 34.58853427240756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual-inertial systems rely on precise calibrations of both camera
intrinsics and inter-sensor extrinsics, which typically require manually
performing complex motions in front of a calibration target. In this work we
present a novel approach to obtain favorable trajectories for visual-inertial
system calibration, using model-based deep reinforcement learning. Our key
contribution is to model the calibration process as a Markov decision process
and then use model-based deep reinforcement learning with particle swarm
optimization to establish a sequence of calibration trajectories to be
performed by a robot arm. Our experiments show that while maintaining similar
or shorter path lengths, the trajectories generated by our learned policy
result in lower calibration errors compared to random or handcrafted
trajectories.
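As a rough illustration of the abstract's idea (the reward model, state encoding, and all names below are invented placeholders, not the authors' implementation), selecting the next calibration trajectory by particle swarm optimization against a learned reward model might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def predicted_reward(state, action):
    # Stand-in for a learned model; a real system would train a network to
    # predict the calibration-information gain of executing this trajectory.
    return -np.sum((action - np.tanh(state[:action.size])) ** 2)

def pso_select_action(state, dim=6, n_particles=32, iters=50,
                      w=0.7, c1=1.5, c2=1.5):
    """Choose the next trajectory segment's parameters by particle swarm
    optimization over the model's predicted reward."""
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    best_x = x.copy()                                # per-particle bests
    best_f = np.array([predicted_reward(state, p) for p in x])
    g = best_x[np.argmax(best_f)]                    # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (best_x - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, -1.0, 1.0)
        f = np.array([predicted_reward(state, p) for p in x])
        better = f > best_f
        best_x[better], best_f[better] = x[better], f[better]
        g = best_x[np.argmax(best_f)]
    return g

state = rng.normal(size=16)        # hypothetical calibration-progress state
action = pso_select_action(state)  # parameters of the next arm trajectory
```

In the paper's MDP framing, each such action is one trajectory segment executed by the robot arm, after which the calibration state is updated and the next segment is selected.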
Related papers
- Reinforcement Learning Approach to Optimizing Profilometric Sensor Trajectories for Surface Inspection [0.0]
High-precision surface defect detection in manufacturing is essential for ensuring quality control.
Laser triangulation profilometric sensors are key to this process.
This paper presents a novel approach to optimize inspection trajectories for profilometric sensors.
arXiv Detail & Related papers (2024-09-05T11:20:12Z)
- Kalib: Markerless Hand-Eye Calibration with Keypoint Tracking [52.4190876409222]
Hand-eye calibration involves estimating the transformation between the camera and the robot.
Recent advancements in deep learning offer markerless techniques, but they present challenges.
We propose Kalib, an automatic and universal markerless hand-eye calibration pipeline.
arXiv Detail & Related papers (2024-08-20T06:03:40Z)
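Hand-eye calibration, as in the Kalib entry above, is classically posed as solving AX = XB over paired relative motions. A minimal sanity check of that constraint with made-up 4x4 homogeneous transforms (illustrative only, not Kalib's pipeline):

```python
import numpy as np

def check_hand_eye(A, X, B, tol=1e-9):
    """The classic hand-eye constraint: for a gripper motion A and the camera
    motion B observed over the same interval, the fixed camera-to-gripper
    transform X must satisfy A @ X == X @ B."""
    return np.allclose(A @ X, X @ B, atol=tol)

# Toy example: given X, a consistent motion pair is A = X @ B @ inv(X).
X = np.eye(4); X[:3, 3] = [0.10, 0.00, 0.05]  # made-up camera-to-gripper offset
B = np.eye(4); B[:3, 3] = [0.00, 0.20, 0.00]  # made-up camera motion
A = X @ B @ np.linalg.inv(X)                  # implied gripper motion
assert check_hand_eye(A, X, B)
```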
- Joint Spatial-Temporal Calibration for Camera and Global Pose Sensor [0.4143603294943439]
In robotics, motion capture systems have been widely used to measure the accuracy of localization algorithms.
This requires accurate and reliable spatial-temporal calibration parameters between the camera and the global pose sensor.
In this study, we provide two novel solutions to estimate these calibration parameters.
arXiv Detail & Related papers (2024-03-01T20:56:14Z)
- RobustCalib: Robust Lidar-Camera Extrinsic Calibration with Consistency Learning [42.90987864456673]
Current methods for LiDAR-camera extrinsics estimation depend on offline targets and human efforts.
We propose a novel approach to address the extrinsic calibration problem in a robust, automatic, and single-shot manner.
We conduct comprehensive experiments on different datasets, and the results demonstrate that our method achieves accurate and robust performance.
arXiv Detail & Related papers (2023-12-02T09:29:50Z)
- On the Calibration of Large Language Models and Alignment [63.605099174744865]
Confidence calibration serves as a crucial tool for gauging the reliability of deep models.
We conduct a systematic examination of the calibration of aligned language models throughout the entire construction process.
Our work sheds light on whether popular LLMs are well-calibrated and how the training process influences model calibration.
arXiv Detail & Related papers (2023-11-22T08:57:55Z)
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- Unified Data Collection for Visual-Inertial Calibration via Deep Reinforcement Learning [24.999540933593273]
This work presents a novel formulation to learn a motion policy to be executed on a robot arm for automatic data collection.
Our approach models the calibration process compactly using model-free deep reinforcement learning.
In simulation we are able to perform calibrations 10 times faster than hand-crafted policies, which transfers to a real-world speed-up of 3 times over a human expert.
arXiv Detail & Related papers (2021-09-30T10:03:56Z)
- Meta-Calibration: Learning of Model Calibration Using Differentiable Expected Calibration Error [46.12703434199988]
We introduce a new differentiable surrogate for expected calibration error (DECE) that allows calibration quality to be directly optimised.
We also propose a meta-learning framework that uses DECE to optimise for validation set calibration.
arXiv Detail & Related papers (2021-06-17T15:47:50Z)
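For context on the Meta-Calibration entry above: the standard binned (non-differentiable) expected calibration error that DECE relaxes can be sketched as follows; variable names and the toy data are mine, not the paper's:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard ECE: bin predictions by confidence, then average the gap
    |accuracy - confidence| over bins, weighted by bin occupancy."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

conf = np.array([0.9, 0.8, 0.75, 0.6, 0.95])  # model confidences (made up)
hit = np.array([1, 1, 0, 1, 1], dtype=float)  # whether each prediction was right
print(expected_calibration_error(conf, hit))
```

The hard binning above is what blocks gradients; DECE replaces it with a soft, differentiable surrogate so calibration quality can be optimized directly.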
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
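Whatever the estimation method, the output of LiDAR-camera extrinsic calibration is a rigid transform that is typically consumed by projecting LiDAR points into the image. A minimal sketch with placeholder intrinsics and extrinsics (invented here, not taken from the paper):

```python
import numpy as np

K = np.array([[700.0, 0.0, 640.0],   # assumed pinhole intrinsics
              [0.0, 700.0, 360.0],
              [0.0, 0.0, 1.0]])
T_cam_lidar = np.eye(4)              # placeholder extrinsics from calibration
T_cam_lidar[:3, 3] = [0.05, -0.10, 0.20]

def lidar_to_pixels(points_lidar):
    """Map Nx3 LiDAR points into pixel coordinates via the calibrated
    extrinsics, dropping points behind the camera."""
    n = points_lidar.shape[0]
    homo = np.hstack([points_lidar, np.ones((n, 1))])
    p_cam = (T_cam_lidar @ homo.T).T[:, :3]
    p_cam = p_cam[p_cam[:, 2] > 0]          # keep points in front of the camera
    uv = (K @ p_cam.T).T
    return uv[:, :2] / uv[:, 2:3]

pts = np.array([[2.0, 0.5, 0.1], [5.0, -1.0, 0.3]])  # made-up LiDAR returns
print(lidar_to_pixels(pts))
```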
This list is automatically generated from the titles and abstracts of the papers in this site.