Robot Self-Calibration Using Actuated 3D Sensors
- URL: http://arxiv.org/abs/2206.03430v1
- Date: Tue, 7 Jun 2022 16:35:08 GMT
- Title: Robot Self-Calibration Using Actuated 3D Sensors
- Authors: Arne Peters
- Abstract summary: This paper treats robot calibration as an offline SLAM problem, where scanning poses are linked to a fixed point in space by a moving kinematic chain.
As such, the presented framework allows robot calibration using nothing but an arbitrary eye-in-hand depth sensor.
A detailed evaluation of the system is shown on a real robot with various attached 3D sensors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Both robot and hand-eye calibration have been objects of research for
decades. While current approaches manage to precisely and robustly identify the
parameters of a robot's kinematic model, they still rely on external devices,
such as calibration objects, markers and/or external sensors. Instead of trying
to fit the recorded measurements to a model of a known object, this paper
treats robot calibration as an offline SLAM problem, where scanning poses are
linked to a fixed point in space by a moving kinematic chain. As such, the
presented framework allows robot calibration using nothing but an arbitrary
eye-in-hand depth sensor, thus enabling fully autonomous self-calibration
without any external tools. My new approach utilizes a modified version of
the Iterative Closest Point algorithm to run bundle adjustment on multiple 3D
recordings, estimating the optimal parameters of the kinematic model. A detailed
evaluation of the system is shown on a real robot with various attached 3D
sensors. The presented results show that the system reaches precision
comparable to a dedicated external tracking system at a fraction of its cost.
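The core idea, treating calibration as alignment of repeated scans of a fixed point through the kinematic chain, can be illustrated with a toy sketch. This is not the paper's implementation: it uses a planar 2-DoF arm and a single fixed target point instead of a modified ICP over full 3D point clouds, and all names (`fk`, `observe`, `reproject`, the link-length parameters) are illustrative assumptions. It only shows the principle that kinematic parameters can be recovered by demanding that measurements taken from many scanning poses reproject to one consistent point in space.

```python
import numpy as np
from scipy.optimize import least_squares

# True (unknown) and nominal link lengths of a planar 2R arm (toy model).
TRUE_L = np.array([1.02, 0.97])
NOMINAL_L = np.array([1.0, 1.0])

def fk(lengths, q):
    """End-effector pose (x, y, heading) of a planar 2R arm."""
    x = lengths[0] * np.cos(q[0]) + lengths[1] * np.cos(q[0] + q[1])
    y = lengths[0] * np.sin(q[0]) + lengths[1] * np.sin(q[0] + q[1])
    return x, y, q[0] + q[1]

def observe(q, target):
    """The fixed target expressed in the sensor (end-effector) frame,
    generated with the *true* arm geometry."""
    x, y, th = fk(TRUE_L, q)
    c, s = np.cos(th), np.sin(th)
    d = target - np.array([x, y])
    return np.array([c * d[0] + s * d[1], -s * d[0] + c * d[1]])

def reproject(lengths, q, meas):
    """A sensor measurement mapped back to world coordinates using a
    candidate kinematic model."""
    x, y, th = fk(lengths, q)
    c, s = np.cos(th), np.sin(th)
    return np.array([x, y]) + np.array([c * meas[0] - s * meas[1],
                                        s * meas[0] + c * meas[1]])

rng = np.random.default_rng(0)
target = np.array([0.8, 1.1])            # fixed point scanned from all poses
poses = rng.uniform(-1.5, 1.5, (20, 2))  # 20 scanning joint configurations
meas = [observe(q, target) for q in poses]

def residuals(params):
    # With the correct model, every reprojection lands on the same point,
    # so the spread around the mean vanishes (the "offline SLAM" constraint).
    pts = np.array([reproject(params, q, m) for q, m in zip(poses, meas)])
    return (pts - pts.mean(axis=0)).ravel()

sol = least_squares(residuals, NOMINAL_L)
print(sol.x)  # recovers the true link lengths from the nominal guess
```

The optimization never needs the target's ground-truth location: only the mutual consistency of the reprojections is penalized, which is the same reason the paper's framework needs no calibration object or external sensor.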
Related papers
- Kalib: Markerless Hand-Eye Calibration with Keypoint Tracking [52.4190876409222]
Hand-eye calibration involves estimating the transformation between the camera and the robot.
Recent advancements in deep learning offer markerless techniques, but they present challenges.
We propose Kalib, an automatic and universal markerless hand-eye calibration pipeline.
arXiv Detail & Related papers (2024-08-20T06:03:40Z) - External Camera-based Mobile Robot Pose Estimation for Collaborative Perception with Smart Edge Sensors [22.5939915003931]
We present an approach for estimating a mobile robot's pose w.r.t. the allocentric coordinates of a network of static cameras using multi-view RGB images.
The images are processed online, locally on smart edge sensors by deep neural networks to detect the robot.
With the robot's pose precisely estimated, its observations can be fused into the allocentric scene model.
arXiv Detail & Related papers (2023-03-07T11:03:33Z) - Continuous Target-free Extrinsic Calibration of a Multi-Sensor System from a Sequence of Static Viewpoints [0.0]
Mobile robotic applications need precise information about the geometric position of the individual sensors on the platform.
Erroneous calibration parameters have a negative impact on typical robotic estimation tasks.
We propose a new method for a continuous estimation of the calibration parameters during operation of the robot.
arXiv Detail & Related papers (2022-07-08T09:36:17Z) - Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems [92.26462290867963]
Kimera-Multi is the first multi-robot system that is robust and capable of identifying and rejecting incorrect inter and intra-robot loop closures.
We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking datasets, and challenging outdoor datasets collected using ground robots.
arXiv Detail & Related papers (2021-06-28T03:56:40Z) - Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning reaches accuracy similar to the standard active learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z) - Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z) - Nothing But Geometric Constraints: A Model-Free Method for Articulated Object Pose Estimation [89.82169646672872]
We propose an unsupervised vision-based system to estimate the joint configurations of the robot arm from a sequence of RGB or RGB-D images without knowing the model a priori.
We combine a classical geometric formulation with deep learning and extend the use of epipolar multi-rigid-body constraints to solve this task.
arXiv Detail & Related papers (2020-11-30T20:46:48Z) - Pose Estimation for Robot Manipulators via Keypoint Optimization and Sim-to-Real Transfer [10.369766652751169]
Keypoint detection is an essential building block for many robotic applications.
Deep learning methods have the ability to detect user-defined keypoints in a marker-less manner.
We propose a new and autonomous way to define the keypoint locations that overcomes these challenges.
arXiv Detail & Related papers (2020-10-15T22:38:37Z) - Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z) - Spatiotemporal Camera-LiDAR Calibration: A Targetless and Structureless Approach [32.15405927679048]
We propose a targetless and structureless camera-LiDAR calibration method.
Our method combines a closed-form solution with structureless bundle adjustment, in which the coarse-to-fine approach does not require an initial guess for the temporal parameters.
We demonstrate the accuracy and robustness of the proposed method through both simulation and real data experiments.
arXiv Detail & Related papers (2020-01-17T07:25:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.