Online Body Schema Adaptation through Cost-Sensitive Active Learning
- URL: http://arxiv.org/abs/2101.10892v1
- Date: Tue, 26 Jan 2021 16:01:02 GMT
- Title: Online Body Schema Adaptation through Cost-Sensitive Active Learning
- Authors: Gonçalo Cunha, Pedro Vicente, Alexandre Bernardino, Ricardo
Ribeiro, Plínio Moreno
- Abstract summary: The work was implemented in a simulation environment, using the 7DoF arm of the iCub robot simulator.
A cost-sensitive active learning approach is used to select optimal joint configurations.
The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while reducing the executed movement by about half.
- Score: 63.84207660737483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humanoid robots have complex bodies and kinematic chains with several
Degrees-of-Freedom (DoF) which are difficult to model. Learning the parameters
of a kinematic model can be achieved by observing the position of the robot
links during prospective motions and minimising the prediction errors. This
work proposes a movement-efficient approach for estimating, online, the
body schema of a humanoid robot arm in the form of Denavit-Hartenberg (DH)
parameters. A cost-sensitive active learning approach based on the A-Optimality
criterion is used to select optimal joint configurations. The chosen joint
configurations simultaneously minimise the error in the estimation of the body
schema and minimise the movement between samples. This reduces energy
consumption, along with mechanical fatigue and wear, while not compromising the
learning accuracy. The work was implemented in a simulation environment, using
the 7DoF arm of the iCub robot simulator. The hand pose is measured with a
single camera via markers placed on the palm and the back of the robot's hand.
A non-parametric occlusion model is proposed to avoid choosing joint
configurations in which the markers are not visible, thus preventing wasted
attempts. The results show that cost-sensitive active learning achieves
accuracy similar to the standard active learning approach while reducing the
executed movement by about half.
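The selection criterion described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes a linearised measurement model in which each candidate joint configuration contributes a Jacobian to the Fisher information of the DH parameters, scores candidates by the A-Optimality criterion (trace of the posterior parameter covariance), and adds a weighted joint-space travel penalty. The function names, the L1 movement norm, and the `movement_weight` trade-off parameter are all illustrative assumptions.

```python
import numpy as np

def a_optimality_score(info, jac):
    """A-Optimality: trace of the posterior parameter covariance
    after adding one measurement with Jacobian `jac` to the current
    Fisher information matrix `info`."""
    post_info = info + jac.T @ jac  # rank-update of the information matrix
    return np.trace(np.linalg.inv(post_info))

def select_next_config(candidates, jacobian_fn, info, q_current,
                       movement_weight=0.5):
    """Cost-sensitive selection: minimise estimation uncertainty
    (A-Optimality) plus a penalty on joint-space movement from the
    current configuration."""
    best, best_cost = None, np.inf
    for q in candidates:
        uncertainty = a_optimality_score(info, jacobian_fn(q))
        movement = np.linalg.norm(q - q_current, ord=1)  # joint-space travel
        cost = uncertainty + movement_weight * movement
        if cost < best_cost:
            best, best_cost = q, cost
    return best
```

With `movement_weight = 0` this reduces to standard A-Optimality active learning; increasing the weight trades a small loss in expected information gain for much shorter movements between samples, which is the trade-off the abstract reports.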
Related papers
- OptiState: State Estimation of Legged Robots using Gated Networks with Transformer-based Vision and Kalman Filtering [42.817893456964]
State estimation for legged robots is challenging due to their highly dynamic motion and limitations imposed by sensor accuracy.
We propose a hybrid solution that combines proprioception and exteroceptive information for estimating the state of the robot's trunk.
This framework not only furnishes accurate robot state estimates, but can minimize the nonlinear errors that arise from sensor measurements and model simplifications through learning.
arXiv Detail & Related papers (2024-01-30T03:34:25Z) - Enhanced Human-Robot Collaboration using Constrained Probabilistic
Human-Motion Prediction [5.501477817904299]
We propose a novel human motion prediction framework that incorporates human joint constraints and scene constraints.
It is tested on a human arm kinematic model and implemented on a human-robot collaborative setup with a UR5 robot arm.
arXiv Detail & Related papers (2023-10-05T05:12:14Z) - Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse
Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z) - Model Predictive Control for Fluid Human-to-Robot Handovers [50.72520769938633]
Existing human-robot handover pipelines do not plan motions that take human comfort into account.
We propose to generate smooth motions via an efficient model-predictive control framework.
We conduct human-to-robot handover experiments on a diverse set of objects with several users.
arXiv Detail & Related papers (2022-03-31T23:08:20Z) - Nonprehensile Riemannian Motion Predictive Control [57.295751294224765]
We introduce a novel Real-to-Sim reward analysis technique to reliably imagine and predict the outcome of taking possible actions for a real robotic platform.
We produce a closed-loop controller to reactively push objects in a continuous action space.
We observe that RMPC is robust in cluttered as well as occluded environments and outperforms the baselines.
arXiv Detail & Related papers (2021-11-15T18:50:04Z) - OSCAR: Data-Driven Operational Space Control for Adaptive and Robust
Robot Manipulation [50.59541802645156]
Operational Space Control (OSC) has been used as an effective task-space controller for manipulation.
We propose OSC for Adaptation and Robustness (OSCAR), a data-driven variant of OSC that compensates for modeling errors.
We evaluate our method on a variety of simulated manipulation problems, and find substantial improvements over an array of controller baselines.
arXiv Detail & Related papers (2021-10-02T01:21:38Z) - Relative Localization of Mobile Robots with Multiple Ultra-WideBand
Ranging Measurements [15.209043435869189]
We propose an approach to estimate the relative pose between a group of robots by equipping each robot with multiple UWB ranging nodes.
To improve the localization accuracy, we propose to utilize the odometry constraints through a sliding window-based optimization.
arXiv Detail & Related papers (2021-07-19T12:57:02Z) - Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with
Deep Reinforcement Learning [42.525696463089794]
Model Predictive Actor-Critic (MoPAC) is a hybrid model-based/model-free method that combines model predictive rollouts with policy optimization to mitigate model bias.
MoPAC guarantees optimal skill learning up to an approximation error and reduces necessary physical interaction with the environment.
arXiv Detail & Related papers (2021-03-25T13:50:24Z) - Domain Adaptive Robotic Gesture Recognition with Unsupervised
Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in the multi-modal data for gesture recognition.
Results show that our approach recovers performance with large gains, up to 12.91% in ACC and 20.16% in F1 score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.