Unified Data Collection for Visual-Inertial Calibration via Deep
Reinforcement Learning
- URL: http://arxiv.org/abs/2109.14974v1
- Date: Thu, 30 Sep 2021 10:03:56 GMT
- Title: Unified Data Collection for Visual-Inertial Calibration via Deep
Reinforcement Learning
- Authors: Yunke Ao, Le Chen, Florian Tschopp, Michel Breyer, Andrei Cramariuc,
Roland Siegwart
- Abstract summary: This work presents a novel formulation to learn a motion policy to be executed on a robot arm for automatic data collection.
Our approach models the calibration process compactly using model-free deep reinforcement learning.
In simulation we are able to perform calibrations 10 times faster than hand-crafted policies, which transfers to a real-world speed up of 3 times over a human expert.
- Score: 24.999540933593273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual-inertial sensors have a wide range of applications in robotics.
However, good performance often requires different sophisticated motion
routines to accurately calibrate camera intrinsics and inter-sensor extrinsics.
This work presents a novel formulation to learn a motion policy to be executed
on a robot arm for automatic data collection for calibrating intrinsics and
extrinsics jointly. Our approach models the calibration process compactly using
model-free deep reinforcement learning to derive a policy that guides the
motions of a robotic arm holding the sensor to efficiently collect measurements
that can be used for both camera intrinsic calibration and camera-IMU extrinsic
calibration. Given the current pose and collected measurements, the learned
policy generates the subsequent transformation that optimizes sensor
calibration accuracy. The evaluations in simulation and on a real robotic
system show that our learned policy generates favorable motion trajectories and
efficiently collects enough measurements to yield the desired intrinsics and
extrinsics with short path lengths. In simulation we are able to perform
calibrations 10 times faster than hand-crafted policies, which transfers to a
real-world speed up of 3 times over a human expert.
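The data-collection loop the abstract describes — a learned policy that, given the current pose and the measurements gathered so far, outputs the next relative transformation for the arm — can be sketched as follows. This is a minimal illustrative mock, not the authors' implementation: `policy`, `collect_measurement`, and the information-gain threshold are all hypothetical stand-ins for the trained network and the real calibration objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy(pose, measurement_summary):
    """Stand-in for the learned model-free RL policy: returns a small
    relative motion (translation + rotation delta) for the robot arm.
    A trained network would be queried here; we sample a bounded step."""
    delta_xyz = rng.uniform(-0.05, 0.05, size=3)   # metres
    delta_rpy = rng.uniform(-0.1, 0.1, size=3)     # radians
    return np.concatenate([delta_xyz, delta_rpy])

def collect_measurement(pose):
    """Stand-in for observing the calibration target from this pose:
    returns a toy information-gain score (higher is better)."""
    return float(np.exp(-np.linalg.norm(pose[:3])))

pose = np.zeros(6)                  # [x, y, z, roll, pitch, yaw]
measurements, path_length = [], 0.0
for step in range(20):
    summary = (len(measurements), sum(measurements))
    action = policy(pose, summary)
    path_length += np.linalg.norm(action[:3])  # motion cost the policy keeps short
    pose = pose + action                       # apply the relative transformation
    measurements.append(collect_measurement(pose))
    if sum(measurements) > 5.0:                # enough information collected
        break

print(len(measurements), round(path_length, 3))
```

The episode terminates once the accumulated information exceeds a threshold, mirroring the paper's goal of reaching calibration-grade measurements with short path lengths.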
Related papers
- Neural Real-Time Recalibration for Infrared Multi-Camera Systems [2.249916681499244]
No existing learning-free or neural techniques support real-time recalibration of infrared multi-camera systems.
We propose a neural network-based method capable of dynamic real-time calibration.
arXiv Detail & Related papers (2024-10-18T14:37:37Z)
- Reinforcement Learning Approach to Optimizing Profilometric Sensor Trajectories for Surface Inspection [0.0]
High-precision surface defect detection in manufacturing is essential for ensuring quality control.
Laser triangulation profilometric sensors are key to this process.
This paper presents a novel approach to optimize inspection trajectories for profilometric sensors.
arXiv Detail & Related papers (2024-09-05T11:20:12Z)
- Kalib: Markerless Hand-Eye Calibration with Keypoint Tracking [52.4190876409222]
Hand-eye calibration involves estimating the transformation between the camera and the robot.
Recent advancements in deep learning offer markerless techniques, but they present challenges.
We propose Kalib, an automatic and universal markerless hand-eye calibration pipeline.
arXiv Detail & Related papers (2024-08-20T06:03:40Z)
- Joint Spatial-Temporal Calibration for Camera and Global Pose Sensor [0.4143603294943439]
In robotics, motion capture systems have been widely used to measure the accuracy of localization algorithms.
Such evaluations require accurate and reliable spatial-temporal calibration parameters between the camera and the global pose sensor.
In this study, we provide two novel solutions to estimate these calibration parameters.
arXiv Detail & Related papers (2024-03-01T20:56:14Z)
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance in synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automates the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
- Learning Trajectories for Visual-Inertial System Calibration via Model-based Heuristic Deep Reinforcement Learning [34.58853427240756]
We present a novel approach to obtain favorable trajectories for visual-inertial system calibration using model-based deep reinforcement learning.
Our key contribution is to model the calibration process as a Markov decision process and then use model-based deep reinforcement learning with particle swarm optimization to establish a sequence of calibration trajectories to be performed by a robot arm.
arXiv Detail & Related papers (2020-11-04T23:20:15Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.