Unified Data Collection for Visual-Inertial Calibration via Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2109.14974v1
- Date: Thu, 30 Sep 2021 10:03:56 GMT
- Authors: Yunke Ao, Le Chen, Florian Tschopp, Michel Breyer, Andrei Cramariuc,
Roland Siegwart
- Abstract summary: This work presents a novel formulation to learn a motion policy to be executed on a robot arm for automatic data collection.
Our approach models the calibration process compactly using model-free deep reinforcement learning.
In simulation we are able to perform calibrations 10 times faster than hand-crafted policies, which transfers to a real-world speed-up of 3 times over a human expert.
- Score: 24.999540933593273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual-inertial sensors have a wide range of applications in robotics.
However, good performance often requires different sophisticated motion
routines to accurately calibrate camera intrinsics and inter-sensor extrinsics.
This work presents a novel formulation to learn a motion policy to be executed
on a robot arm for automatic data collection for calibrating intrinsics and
extrinsics jointly. Our approach models the calibration process compactly using
model-free deep reinforcement learning to derive a policy that guides the
motions of a robotic arm holding the sensor to efficiently collect measurements
that can be used for both camera intrinsic calibration and camera-IMU extrinsic
calibration. Given the current pose and collected measurements, the learned
policy generates the subsequent transformation that optimizes sensor
calibration accuracy. Evaluations in simulation and on a real robotic system
show that our learned policy generates favorable motion trajectories and
efficiently collects sufficient measurements to yield the desired intrinsics
and extrinsics with short path lengths. In simulation we are able to perform
calibrations 10 times faster than hand-crafted policies, which transfers to a
real-world speed-up of 3 times over a human expert.
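The core loop the abstract describes can be sketched as follows: a policy observes the current sensor pose plus a summary of the measurements collected so far, and outputs the next relative transformation for the arm, trading off measurement coverage against path length. This is a minimal illustrative sketch, not the authors' implementation; the state encoding (pose + coverage histogram), the linear stand-in for the deep policy, and the mock reward are all assumptions.

```python
# Hypothetical sketch of the learned data-collection policy loop.
# All names, dimensions, and the reward shape are assumptions for illustration;
# the paper uses a deep model-free RL policy, not a linear one.
import numpy as np

STATE_DIM = 6 + 10   # 6-DoF pose + 10-bin measurement-coverage histogram (assumed)
ACTION_DIM = 6       # next relative transform: 3 translation + 3 rotation params

class LinearPolicy:
    """Stand-in for the deep policy: one linear layer with tanh-bounded actions."""
    def __init__(self, rng):
        self.W = rng.standard_normal((ACTION_DIM, STATE_DIM)) * 0.1

    def act(self, state):
        return np.tanh(self.W @ state)  # bounded relative motion per step

def rollout(policy, steps=20, rng=None):
    """One data-collection episode: execute the policy's transforms and
    accumulate a (mock) measurement-coverage histogram."""
    pose = np.zeros(6)        # current end-effector pose (rotation + translation params)
    coverage = np.zeros(10)   # which motion directions have been excited so far
    path_length = 0.0
    for _ in range(steps):
        state = np.concatenate([pose, coverage])
        action = policy.act(state)
        pose = pose + action
        path_length += float(np.linalg.norm(action[:3]))
        # mock "measurement": mark the dominant motion direction as covered
        coverage[np.argmax(np.abs(action)) % 10] += 1.0
    # reward favors exciting many directions with a short path, mirroring the
    # paper's goal of accurate calibration from few, efficient motions
    reward = float(np.count_nonzero(coverage)) - 0.1 * path_length
    return reward, path_length

rng = np.random.default_rng(42)
reward, path = rollout(LinearPolicy(rng), rng=rng)
```

In the actual system, the coverage summary would be replaced by features of the real calibration measurements (e.g., observability of intrinsics and camera-IMU extrinsics), and the policy would be trained with a model-free deep RL algorithm rather than hand-set weights.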
Related papers
- Joint Spatial-Temporal Calibration for Camera and Global Pose Sensor [0.4143603294943439]
  In robotics, motion capture systems have been widely used to measure the accuracy of localization algorithms.
  These functionalities require accurate and reliable spatial-temporal calibration parameters between the camera and the global pose sensor.
  In this study, we provide two novel solutions to estimate these calibration parameters.
  arXiv Detail & Related papers (2024-03-01T20:56:14Z)
- From Chaos to Calibration: A Geometric Mutual Information Approach to Target-Free Camera LiDAR Extrinsic Calibration [4.378156825150505]
  We propose a target-free extrinsic calibration algorithm that requires no ground-truth training data.
  We demonstrate our proposed improvement using the KITTI and KITTI-360 fisheye datasets.
  arXiv Detail & Related papers (2023-11-03T13:30:31Z)
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
  We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
  We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint-space exploration.
  Our evaluation demonstrates superior performance on synthetic and real-world datasets.
  arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- MOISST: Multimodal Optimization of Implicit Scene for SpatioTemporal calibration [4.405687114738899]
  We take advantage of recent advances in computer graphics and implicit volumetric scene representation to tackle the problem of multi-sensor spatial and temporal calibration.
  Our method enables accurate and robust calibration from data captured in uncontrolled and unstructured urban environments.
  We demonstrate the accuracy and robustness of our method in urban scenes typically encountered in autonomous driving scenarios.
  arXiv Detail & Related papers (2023-03-06T11:59:13Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
  We present an extrinsic camera calibration approach that automates parameter estimation by utilizing semantic segmentation information.
  Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
  We evaluate our method on simulated and real-world data to demonstrate low error in the calibration results.
  arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- How to Calibrate Your Event Camera [58.80418612800161]
  We propose a generic event camera calibration framework using image reconstruction.
  We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
  arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
  The work was implemented in a simulation environment, using the 7-DoF arm of the iCub robot simulator.
  A cost-sensitive active learning approach is used to select optimal joint configurations.
  The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
  arXiv Detail & Related papers (2021-01-26T16:01:02Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
  We present a method to calibrate the parameters of any pair of sensors involving LiDARs and monocular or stereo cameras.
  The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
  arXiv Detail & Related papers (2021-01-12T12:02:26Z)
- Learning Trajectories for Visual-Inertial System Calibration via Model-based Heuristic Deep Reinforcement Learning [34.58853427240756]
  We present a novel approach to obtain favorable trajectories for visual-inertial system calibration using model-based deep reinforcement learning.
  Our key contribution is to model the calibration process as a Markov decision process and then use model-based deep reinforcement learning with particle swarm optimization to establish a sequence of calibration trajectories to be performed by a robot arm.
  arXiv Detail & Related papers (2020-11-04T23:20:15Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
  This paper focuses on a data-driven approach to learning to detect miscalibration in vision sensors, specifically RGB cameras.
  Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
  By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline in identifying whether a recalibration of the camera's intrinsic parameters is required.
  arXiv Detail & Related papers (2020-05-24T10:32:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.