EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable
Rendering and Space Exploration
- URL: http://arxiv.org/abs/2305.01191v2
- Date: Tue, 7 Nov 2023 04:40:13 GMT
- Title: EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable
Rendering and Space Exploration
- Authors: Linghao Chen, Yuzhe Qin, Xiaowei Zhou, Hao Su
- Abstract summary: We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration.
Our evaluation demonstrates superior performance on synthetic and real-world datasets.
- Score: 49.90228618894857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hand-eye calibration is a critical task in robotics, as it directly affects
the efficacy of critical operations such as manipulation and grasping.
Traditional methods for achieving this objective necessitate the careful design
of joint poses and the use of specialized calibration markers, while most
recent learning-based approaches that rely solely on pose regression are limited
in their ability to diagnose inaccuracies. In this work, we introduce a new
approach to hand-eye calibration called EasyHeC, which is markerless,
white-box, and delivers superior accuracy and robustness. We propose to use two
key technologies: differentiable rendering-based camera pose optimization and
consistency-based joint space exploration, which together enable accurate
end-to-end optimization of the calibration process and eliminate the need for
the laborious manual design of robot joint poses. Our evaluation demonstrates
superior performance on synthetic and real-world datasets, enhancing downstream
manipulation tasks by providing precise camera poses for locating and
interacting with objects. The code is available at the project page:
https://ootts.github.io/easyhec.
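As an illustration of the first key technology, the toy sketch below optimizes a 6-DoF camera pose by gradient descent through a differentiable rendering step. It is a minimal stand-in, not the authors' implementation (PyTorch is assumed): EasyHeC renders the full robot mesh and compares it against a predicted segmentation mask, whereas here the "robot" is a handful of 3D points splatted into a Gaussian soft silhouette.
```python
# Toy differentiable-rendering pose optimization (PyTorch assumed).
# EasyHeC renders the full robot mesh; here a few "robot" points are
# splatted as Gaussians so the example stays self-contained.
import torch

H, W, F = 64, 64, 60.0                        # image size, focal length
pts = torch.tensor([[0.0, 0.0, 2.0], [0.3, 0.1, 2.2],
                    [-0.2, 0.3, 1.8], [0.1, -0.3, 2.5]])  # points in robot base frame

def axis_angle_to_matrix(r):
    """Rodrigues' formula, differentiable in the axis-angle vector r."""
    theta = torch.sqrt((r * r).sum() + 1e-12)
    k = r / theta
    K = torch.zeros(3, 3)
    K[0, 1], K[0, 2] = -k[2], k[1]
    K[1, 0], K[1, 2] = k[2], -k[0]
    K[2, 0], K[2, 1] = -k[1], k[0]
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def render(pose):
    """Project the points with the camera pose and splat a soft silhouette."""
    R, t = axis_angle_to_matrix(pose[:3]), pose[3:]
    cam = pts @ R.T + t                        # base frame -> camera frame
    u = F * cam[:, 0] / cam[:, 2] + W / 2      # pinhole projection
    v = F * cam[:, 1] / cam[:, 2] + H / 2
    ys, xs = torch.meshgrid(torch.arange(H).float(), torch.arange(W).float(),
                            indexing="ij")
    d2 = (xs - u[:, None, None]) ** 2 + (ys - v[:, None, None]) ** 2
    return torch.exp(-d2 / (2 * 3.0 ** 2)).amax(dim=0)

target = render(torch.tensor([0.2, -0.1, 0.05, 0.1, -0.05, 0.1])).detach()
pose = torch.zeros(6, requires_grad=True)      # initial camera pose guess
opt = torch.optim.Adam([pose], lr=0.02)
for step in range(500):                        # gradient descent through the renderer
    opt.zero_grad()
    loss = ((render(pose) - target) ** 2).mean()
    loss.backward()
    opt.step()
print("recovered pose:", pose.detach().numpy(), "final loss:", float(loss))
```
The same gradient-through-the-renderer pattern carries over when the point splatting is replaced by a differentiable mesh renderer and the synthetic target by a real robot mask.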
Related papers
- Kalib: Markerless Hand-Eye Calibration with Keypoint Tracking [52.4190876409222]
Hand-eye calibration involves estimating the transformation between the camera and the robot.
Recent advancements in deep learning offer markerless techniques, but they present challenges.
We propose Kalib, an automatic and universal markerless hand-eye calibration pipeline.
arXiv Detail & Related papers (2024-08-20T06:03:40Z)
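As background on the problem Kalib addresses: once camera-frame observations of a reference (a marker, or tracked keypoints as in Kalib) are paired with robot forward-kinematics poses, eye-in-hand calibration reduces to the classical AX = XB problem, exposed in OpenCV as cv2.calibrateHandEye. A minimal sketch on synthetic poses (all transforms below are randomly generated, not from a real robot):
```python
# Classical eye-in-hand AX = XB solve on synthetic poses (OpenCV assumed).
import numpy as np
import cv2

rng = np.random.default_rng(0)

def rand_pose():
    """Random rigid transform as a 4x4 homogeneous matrix."""
    R, _ = cv2.Rodrigues(rng.uniform(-1.0, 1.0, 3))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, rng.uniform(-0.5, 0.5, 3)
    return T

X = rand_pose()   # unknown camera-to-gripper transform (ground truth to recover)
Z = rand_pose()   # fixed reference (marker/keypoint frame) in the robot base frame

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):
    T_g2b = rand_pose()                                   # from forward kinematics
    T_t2c = np.linalg.inv(X) @ np.linalg.inv(T_g2b) @ Z   # what the camera observes
    R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3].reshape(3, 1))
    R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3].reshape(3, 1))

R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c)
print("rotation error:", np.linalg.norm(R_est - X[:3, :3]))
print("translation error:", np.linalg.norm(t_est.ravel() - X[:3, 3]))
```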
- Multi-Camera Hand-Eye Calibration for Human-Robot Collaboration in Industrial Robotic Workcells [3.76054468268713]
In industrial scenarios, effective human-robot collaboration relies on multi-camera systems to robustly monitor human operators.
We introduce an innovative and robust multi-camera hand-eye calibration method, designed to optimize each camera's pose relative both to the robot's base and to each other camera.
We demonstrate the superior performance of our method through comprehensive experiments employing the METRIC dataset and real-world data collected in industrial scenarios.
arXiv Detail & Related papers (2024-06-17T10:23:30Z)
- VICAN: Very Efficient Calibration Algorithm for Large Camera Networks [49.17165360280794]
We introduce a novel methodology that extends Pose Graph Optimization techniques.
We consider the bipartite graph encompassing cameras, object poses evolving dynamically, and camera-object relative transformations at each time step.
Our framework retains compatibility with traditional PGO solvers, but its efficacy benefits from a custom-tailored optimization scheme.
arXiv Detail & Related papers (2024-03-25T17:47:03Z)
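The bipartite graph can be sketched as a data structure; the snippet below is an illustrative reconstruction from the summary above, with placeholder measurements, not the paper's code:
```python
# Sketch of a bipartite pose graph (illustrative only): static camera nodes
# on one side, per-timestep object-pose nodes on the other, and an edge per
# observed camera-object relative transform.
import numpy as np

n_cams, n_steps = 4, 3
rng = np.random.default_rng(0)
edges = {}                                    # (camera, object@t) -> measured SE(3)
for t in range(n_steps):
    for i in range(n_cams):
        if rng.random() < 0.7:                # camera i observed the object at time t
            edges[(f"cam{i}", f"obj_t{t}")] = np.eye(4)   # placeholder measurement

# A PGO solver then seeks camera poses T_cam and object poses T_obj_t that
# minimize the sum over edges of || log(T_meas^-1 @ inv(T_cam) @ T_obj_t) ||^2.
print(f"{len(edges)} camera-object measurements over {n_steps} time steps")
```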
- Joint Spatial-Temporal Calibration for Camera and Global Pose Sensor [0.4143603294943439]
In robotics, motion capture systems have been widely used to measure the accuracy of localization algorithms.
This requires accurate and reliable spatial-temporal calibration parameters between the camera and the global pose sensor.
In this study, we provide two novel solutions to estimate these calibration parameters.
arXiv Detail & Related papers (2024-03-01T20:56:14Z)
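A common baseline for the temporal half of such a calibration (not necessarily the method proposed in this paper) recovers the camera-to-mocap time offset by cross-correlating a scalar motion signal that both sensors observe. A synthetic sketch, assuming both streams have been resampled to a shared rate:
```python
import numpy as np

rate = 100.0                               # shared sample rate after resampling, Hz
t = np.arange(0.0, 10.0, 1.0 / rate)
true_offset = 0.37                         # camera stream lags the mocap by 370 ms

# Angular-rate magnitude (or any scalar motion signal) seen by both sensors.
motion = np.abs(np.sin(2 * np.pi * 0.7 * t) + 0.5 * np.sin(2 * np.pi * 1.3 * t))
rng = np.random.default_rng(0)
mocap = motion + 0.02 * rng.normal(size=t.size)
camera = np.interp(t - true_offset, t, motion) + 0.02 * rng.normal(size=t.size)

# Cross-correlate the zero-mean signals; the best lag is the time offset.
corr = np.correlate(camera - camera.mean(), mocap - mocap.mean(), mode="full")
lag = int(np.argmax(corr)) - (t.size - 1)
print(f"estimated offset: {lag / rate:+.3f} s (true: {true_offset} s)")
```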
- P2O-Calib: Camera-LiDAR Calibration Using Point-Pair Spatial Occlusion Relationship [1.6921147361216515]
We propose a novel target-less calibration approach based on the 2D-3D edge point extraction using the occlusion relationship in 3D space.
Our method achieves low error and high robustness, which can benefit practical applications that rely on high-quality Camera-LiDAR calibration.
arXiv Detail & Related papers (2023-11-04T14:32:55Z)
- Lasers to Events: Automatic Extrinsic Calibration of Lidars and Event Cameras [67.84498757689776]
This paper presents the first direct calibration method between event cameras and lidars.
It removes dependencies on frame-based camera intermediaries and highly accurate hand measurements.
arXiv Detail & Related papers (2022-07-03T11:05:45Z)
- Unified Data Collection for Visual-Inertial Calibration via Deep Reinforcement Learning [24.999540933593273]
This work presents a novel formulation to learn a motion policy to be executed on a robot arm for automatic data collection.
Our approach models the calibration process compactly using model-free deep reinforcement learning.
In simulation, we are able to perform calibrations 10 times faster than hand-crafted policies, which transfers to a real-world speed-up of 3 times over a human expert.
arXiv Detail & Related papers (2021-09-30T10:03:56Z)
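The formulation can be made concrete as a small Markov decision process: states are arm configurations, actions move the arm to the next configuration, and the reward trades calibration information against motion time. The toy tabular Q-learning sketch below illustrates only that structure; the paper itself applies model-free deep RL on a real arm, and every quantity here is invented:
```python
# Toy MDP for calibration data collection, solved with tabular Q-learning.
import numpy as np

rng = np.random.default_rng(0)
n_views = 10                                  # discretized arm configurations
info = rng.uniform(0.2, 1.0, n_views)         # hidden calibration value of each view
gamma, alpha, eps = 0.9, 0.1, 0.2

def step(state, action):
    """Reward = information gained minus time spent moving the arm."""
    move_cost = 0.05 * abs(action - state)    # crude proxy for joint-space travel
    return action, info[action] - move_cost

# State = current view; a real formulation would also encode what data has
# already been collected, which we omit to keep the table small.
Q = np.zeros((n_views, n_views))
for episode in range(2000):
    s = int(rng.integers(n_views))
    for _ in range(8):                        # fixed data-collection budget
        a = int(rng.integers(n_views)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print("greedy next view from each state:", [int(np.argmax(Q[s])) for s in range(n_views)])
```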
- Online Body Schema Adaptation through Cost-Sensitive Active Learning [63.84207660737483]
The work was implemented in a simulation environment, using the 7-DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active-learning approach while roughly halving the executed movement.
arXiv Detail & Related papers (2021-01-26T16:01:02Z)
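A minimal sketch of such a cost-sensitive selection rule: score each candidate joint configuration by predictive uncertainty minus a weighted movement cost, then execute the best. The scoring rule and the weight `lam` are illustrative assumptions, not the paper's exact estimator:
```python
# Greedy cost-sensitive selection of the next joint configuration
# (illustrative criterion, not the paper's exact estimator).
import numpy as np

rng = np.random.default_rng(1)
n_joints, n_candidates = 7, 50
current_q = np.zeros(n_joints)                # current arm configuration

# Candidate configurations and the model's predictive uncertainty at each,
# e.g. from an ensemble or GP over the body-schema parameters (here: random).
candidates = rng.uniform(-1.5, 1.5, (n_candidates, n_joints))
uncertainty = rng.uniform(0.0, 1.0, n_candidates)

lam = 0.3                                     # trade-off: information vs. movement
movement_cost = np.linalg.norm(candidates - current_q, axis=1)
score = uncertainty - lam * movement_cost     # informativeness minus motion cost
best = int(np.argmax(score))
print(f"execute candidate {best}: uncertainty={uncertainty[best]:.2f}, "
      f"movement cost={movement_cost[best]:.2f}")
```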
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.