Learning Markerless Robot-Depth Camera Calibration and End-Effector Pose
Estimation
- URL: http://arxiv.org/abs/2212.07567v1
- Date: Thu, 15 Dec 2022 00:53:42 GMT
- Title: Learning Markerless Robot-Depth Camera Calibration and End-Effector Pose
Estimation
- Authors: Bugra C. Sefercik, Baris Akgun
- Abstract summary: We present a learning-based markerless extrinsic calibration system that uses a depth camera and does not rely on simulation data.
We learn models for end-effector (EE) segmentation, single-frame rotation prediction and keypoint detection, from automatically generated real-world data.
Our evaluations with training data from multiple camera poses and test data from previously unseen poses give sub-centimeter and sub-deciradian average calibration and pose estimation errors.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional approaches to extrinsic calibration use fiducial markers and
learning-based approaches rely heavily on simulation data. In this work, we
present a learning-based markerless extrinsic calibration system that uses a
depth camera and does not rely on simulation data. We learn models for
end-effector (EE) segmentation, single-frame rotation prediction and keypoint
detection, from automatically generated real-world data. We use a
transformation trick to get EE pose estimates from rotation predictions and a
matching algorithm to get EE pose estimates from keypoint predictions. We
further utilize the iterative closest point algorithm, multiple-frames,
filtering and outlier detection to increase calibration robustness. Our
evaluations with training data from multiple camera poses and test data from
previously unseen poses give sub-centimeter and sub-deciradian average
calibration and pose estimation errors. We also show that a carefully selected
single training pose gives comparable results.
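The abstract's "matching algorithm to get EE pose estimates from keypoint predictions" boils down to recovering a rigid transform from matched 3D point pairs. A minimal sketch of that alignment step (the standard SVD-based Kabsch solution, not necessarily the authors' exact algorithm) could look like:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    over matched 3D point sets (N x 3 each), via the SVD-based Kabsch method."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # 3x3 cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applied to detected end-effector keypoints in the camera frame against their known positions in the EE frame, this yields a camera-frame EE pose that ICP, multi-frame aggregation, and outlier filtering could then refine, as the abstract describes.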
Related papers
- RGB-based Category-level Object Pose Estimation via Decoupled Metric Scale Recovery [72.13154206106259]
We propose a novel pipeline that decouples the 6D pose and size estimation to mitigate the influence of imperfect scales on rigid transformations.
Specifically, we leverage a pre-trained monocular estimator to extract local geometric information.
A separate branch is designed to directly recover the metric scale of the object based on category-level statistics.
arXiv Detail & Related papers (2023-09-19T02:20:26Z)
- Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method's accuracy (named MG) is among the top on the KITTI depth-prediction benchmark leaderboard.
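"Continuous modeling of per-pixel depth" is commonly trained by having the network predict a mean and a log-variance per pixel and minimizing a Gaussian negative log-likelihood. A generic sketch of that loss (an independent-Gaussian simplification, not the paper's exact multivariate formulation) is:

```python
import numpy as np

def gaussian_nll(depth_true, depth_mean, depth_log_var):
    """Per-pixel negative log-likelihood of the observed depth under a Gaussian
    whose mean and log-variance are predicted by the network.
    Predicting log-variance keeps the variance positive without constraints."""
    var = np.exp(depth_log_var)
    return 0.5 * (np.log(2.0 * np.pi) + depth_log_var
                  + (depth_true - depth_mean) ** 2 / var)
```

The log-variance term penalizes the network for claiming high confidence (low variance) on pixels it gets wrong, so the model learns calibrated per-pixel uncertainty alongside depth.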
arXiv Detail & Related papers (2023-03-31T16:01:03Z)
- Deep Learning for Camera Calibration and Beyond: A Survey [100.75060862015945]
Camera calibration involves estimating camera parameters to infer geometric features from captured sequences.
Recent efforts show that learning-based solutions have the potential to replace the repetitive work of manual calibration.
arXiv Detail & Related papers (2023-03-19T04:00:05Z)
- Single image calibration using knowledge distillation approaches [1.7205106391379026]
We build upon a CNN architecture to automatically estimate camera parameters.
We adapt four common incremental learning strategies to preserve knowledge when updating the network for new data distributions.
Experimental results indicate which strategy is most effective for camera calibration estimation.
arXiv Detail & Related papers (2022-12-05T15:59:35Z)
- Self-Supervised Camera Self-Calibration from Video [34.35533943247917]
We propose a learning algorithm to regress per-sequence calibration parameters using an efficient family of general camera models.
Our procedure achieves self-calibration results with sub-pixel reprojection error, outperforming other learning-based methods.
arXiv Detail & Related papers (2021-12-06T19:42:05Z)
- Learning Eye-in-Hand Camera Calibration from a Single Image [7.262048441360133]
Eye-in-hand camera calibration is a fundamental and long-studied problem in robotics.
We present a study on using learning-based methods for solving this problem online from a single RGB image.
arXiv Detail & Related papers (2021-11-01T20:17:31Z)
- Uncertainty-Aware Camera Pose Estimation from Points and Lines [101.03675842534415]
Perspective-n-Point-and-Line (PnPL) aims at fast, accurate, and robust camera localization with respect to a 3D model from 2D-3D feature coordinates.
arXiv Detail & Related papers (2021-07-08T15:19:36Z)
- Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for Unsupervised Person Re-Identification [60.36551512902312]
Unsupervised person re-identification (re-ID) aims to learn discriminative models from unlabeled data.
One popular approach is to obtain pseudo-labels by clustering and use them to optimize the model.
In this paper, we propose a unified framework to solve both problems.
arXiv Detail & Related papers (2021-03-08T09:13:06Z)
- IMU-Assisted Learning of Single-View Rolling Shutter Correction [16.242924916178282]
Rolling shutter distortion is highly undesirable for photography and computer vision algorithms.
We propose a deep neural network to predict depth and row-wise pose from a single image for rolling shutter correction.
arXiv Detail & Related papers (2020-11-05T21:33:25Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
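Prediction-time batch normalization replaces the running statistics accumulated during training with statistics computed from the test batch itself. A minimal numpy sketch of the idea (a simplification of the full method, which does this inside every BN layer of a network) is:

```python
import numpy as np

def batchnorm(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard batch-norm transform using the supplied statistics."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# Running statistics accumulated on the training distribution (standard normal).
train_mean, train_var = 0.0, 1.0

# A shifted test batch: covariate shift moves the mean to 3 and the std to 2.
rng = np.random.default_rng(0)
x_test = rng.normal(loc=3.0, scale=2.0, size=512)

# Conventional inference: normalize with the stale training statistics,
# so the activations stay far from zero mean / unit variance.
y_train_stats = batchnorm(x_test, train_mean, train_var)

# Prediction-time BN: recompute the statistics from the test batch,
# restoring the normalized distribution downstream layers were trained on.
y_batch_stats = batchnorm(x_test, x_test.mean(), x_test.var())

print(y_train_stats.mean(), y_batch_stats.mean())
```

In a framework like PyTorch, the same effect comes from letting BN layers use batch statistics at inference instead of their stored running averages.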
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
- PrimA6D: Rotational Primitive Reconstruction for Enhanced and Robust 6D Pose Estimation [11.873744190924599]
We introduce a rotational primitive prediction based 6D object pose estimation using a single image as an input.
We leverage a Variational AutoEncoder (VAE) to learn this underlying primitive and its associated keypoints.
When evaluated over public datasets, our method yields a notable improvement over LINEMOD, Occlusion LINEMOD, and the Y-induced dataset.
arXiv Detail & Related papers (2020-06-14T03:55:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.