Continual Learning from Demonstration of Robotics Skills
- URL: http://arxiv.org/abs/2202.06843v4
- Date: Wed, 12 Apr 2023 19:05:34 GMT
- Title: Continual Learning from Demonstration of Robotics Skills
- Authors: Sayantan Auddy, Jakob Hollenstein, Matteo Saveriano, Antonio Rodríguez-Sánchez and Justus Piater
- Abstract summary: Methods for teaching motion skills to robots focus on training for a single skill at a time.
We propose an approach for continual learning from demonstration using hypernetworks and neural ordinary differential equation solvers.
- Score: 5.573543601558405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Methods for teaching motion skills to robots focus on training for a single
skill at a time. Robots capable of learning from demonstration can considerably
benefit from the added ability to learn new movement skills without forgetting
what was learned in the past. To this end, we propose an approach for continual
learning from demonstration using hypernetworks and neural ordinary
differential equation solvers. We empirically demonstrate the effectiveness of
this approach in remembering long sequences of trajectory learning tasks
without the need to store any data from past demonstrations. Our results show
that hypernetworks outperform other state-of-the-art continual learning
approaches for learning from demonstration. In our experiments, we use the
popular LASA benchmark, and two new datasets of kinesthetic demonstrations
collected with a real robot, which we introduce in this paper: the HelloWorld
and RoboTasks datasets. We evaluate our approach on a physical robot
and demonstrate its effectiveness in learning real-world robotic tasks
involving changing positions as well as orientations. We report both trajectory
error metrics and continual learning metrics, and we propose two new continual
learning metrics. Our code, along with the newly collected datasets, is
available at https://github.com/sayantanauddy/clfd.
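The core idea described in the abstract is a hypernetwork that generates the parameters of a neural ODE, which in turn models each demonstrated motion as a dynamical system; learning a new skill then amounts to learning a new task embedding rather than storing past demonstrations. The sketch below illustrates that architecture in PyTorch. It is a minimal sketch only: the class names, layer sizes, task-embedding dimension, and the explicit-Euler rollout are illustrative assumptions and do not reproduce the authors' implementation (see the linked repository for that).

```python
# Minimal sketch (not the authors' code): a hypernetwork produces the flattened
# parameters of a small neural-ODE vector field, with one task embedding per
# demonstrated skill. All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn


class HyperNetwork(nn.Module):
    """Maps a learnable task embedding to the flattened weights of the target network."""

    def __init__(self, task_dim: int, target_param_count: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(task_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, target_param_count),
        )

    def forward(self, task_emb: torch.Tensor) -> torch.Tensor:
        return self.net(task_emb)  # flattened parameters for one task


class TrajectoryODE(nn.Module):
    """Target network: dx/dt = f(x; theta), with theta supplied by the hypernetwork."""

    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        # Parameter shapes of a two-layer MLP vector field.
        self.shapes = [(hidden, state_dim), (hidden,), (state_dim, hidden), (state_dim,)]
        self.param_count = sum(torch.Size(s).numel() for s in self.shapes)

    def _unflatten(self, theta):
        params, i = [], 0
        for s in self.shapes:
            n = torch.Size(s).numel()
            params.append(theta[i:i + n].view(s))
            i += n
        return params

    def vector_field(self, x, theta):
        w1, b1, w2, b2 = self._unflatten(theta)
        return torch.tanh(x @ w1.T + b1) @ w2.T + b2

    def rollout(self, x0, theta, steps: int = 100, dt: float = 0.01):
        """Explicit-Euler rollout, standing in for a full ODE solver."""
        xs, x = [x0], x0
        for _ in range(steps):
            x = x + dt * self.vector_field(x, theta)
            xs.append(x)
        return torch.stack(xs, dim=1)  # (batch, steps + 1, state_dim)


# Usage: one trainable embedding per skill; regenerating theta from a stored
# embedding replays an old skill without storing its demonstrations.
ode = TrajectoryODE(state_dim=2)
hnet = HyperNetwork(task_dim=8, target_param_count=ode.param_count)
task_emb = nn.Parameter(torch.randn(8))
theta = hnet(task_emb)
trajectory = ode.rollout(torch.zeros(1, 2), theta)
```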
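The abstract also reports continual learning metrics alongside trajectory error metrics. For reference, the sketch below computes two standard continual-learning quantities (average forgetting and backward transfer) from a task-by-task performance matrix; it assumes at least two tasks and that higher values mean better performance (for trajectory errors the signs would be flipped). The two new metrics proposed in the paper are not specified in this summary and are not reproduced here.

```python
# Standard continual-learning bookkeeping, shown only for reference; these are not
# the paper's newly proposed metrics.
import numpy as np


def forgetting_and_bwt(R: np.ndarray):
    """R[i, j] = performance on task j after training on task i (higher = better).

    Assumes R is a T x T matrix with T >= 2.
    """
    T = R.shape[0]
    final = R[T - 1]                      # performance on every task after the last one
    best_before = R[:T - 1].max(axis=0)   # best earlier performance per task
    forgetting = float(np.mean(best_before[:T - 1] - final[:T - 1]))
    bwt = float(np.mean(final[:T - 1] - np.diag(R)[:T - 1]))
    return forgetting, bwt
```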
Related papers
- VITAL: Visual Teleoperation to Enhance Robot Learning through Human-in-the-Loop Corrections [10.49712834719005]
We propose a low-cost visual teleoperation system for bimanual manipulation tasks, called VITAL.
Our approach leverages affordable hardware and visual processing techniques to collect demonstrations.
We enhance the generalizability and robustness of the learned policies by utilizing both real and simulated environments.
arXiv Detail & Related papers (2024-07-30T23:29:47Z)
- Any-point Trajectory Modeling for Policy Learning [64.23861308947852]
We introduce Any-point Trajectory Modeling (ATM) to predict future trajectories of arbitrary points within a video frame.
ATM outperforms strong video pre-training baselines by 80% on average.
We show effective transfer learning of manipulation skills from human videos and videos from a different robot morphology.
arXiv Detail & Related papers (2023-12-28T23:34:43Z)
- How Can Everyday Users Efficiently Teach Robots by Demonstrations? [3.6145826787059643]
We propose to use a measure of uncertainty, namely task-related information entropy, as a criterion for suggesting informative demonstration examples to human teachers.
The results indicated a substantial improvement in robot learning efficiency from the teacher's demonstrations.
arXiv Detail & Related papers (2023-10-19T18:21:39Z)
- Exploring Visual Pre-training for Robot Manipulation: Datasets, Models and Methods [14.780597545674157]
We investigate the effects of visual pre-training strategies on robot manipulation tasks from three fundamental perspectives.
We propose a visual pre-training scheme for robot manipulation termed Vi-PRoM, which combines self-supervised learning and supervised learning.
arXiv Detail & Related papers (2023-08-07T14:24:52Z)
- Scaling Robot Learning with Semantically Imagined Experience [21.361979238427722]
Recent advances in robot learning have shown promise in enabling robots to perform manipulation tasks.
One of the key contributing factors to this progress is the scale of robot data used to train the models.
We propose an alternative route and leverage text-to-image foundation models widely used in computer vision and natural language processing.
arXiv Detail & Related papers (2023-02-22T18:47:51Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- What Matters in Learning from Offline Human Demonstrations for Robot Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z)
- A Framework for Efficient Robotic Manipulation [79.10407063260473]
We show that, given only 10 demonstrations, a single robotic arm can learn sparse-reward manipulation policies from pixels.
arXiv Detail & Related papers (2020-12-14T22:18:39Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)