How Can Everyday Users Efficiently Teach Robots by Demonstrations?
- URL: http://arxiv.org/abs/2310.13083v1
- Date: Thu, 19 Oct 2023 18:21:39 GMT
- Title: How Can Everyday Users Efficiently Teach Robots by Demonstrations?
- Authors: Maram Sakr, Zhikai Zhang, Benjamin Li, Haomiao Zhang, H.F. Machiel Van
der Loos, Dana Kulic and Elizabeth Croft
- Abstract summary: We propose to use a measure of uncertainty, namely task-related information entropy, as a criterion for suggesting informative demonstration examples to human teachers.
The results indicated a substantial improvement in robot learning efficiency from the teacher's demonstrations.
- Score: 3.6145826787059643
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning from Demonstration (LfD) is a framework that allows lay users to
easily program robots. However, the efficiency of robot learning and the
robot's ability to generalize to task variations hinge upon the quality and
quantity of the provided demonstrations. Our objective is to guide human
teachers to furnish more effective demonstrations, thus facilitating efficient
robot learning. To achieve this, we propose to use a measure of uncertainty,
namely task-related information entropy, as a criterion for suggesting
informative demonstration examples to human teachers to improve their teaching
skills. In a conducted experiment (N=24), an augmented reality (AR)-based
guidance system was employed to train novice users to produce additional
demonstrations from areas with the highest entropy within the workspace. These
novice users were trained for a few trials to teach the robot a generalizable
task using a limited number of demonstrations. Subsequently, the users'
performance after training was assessed first on the same task (retention) and
then on a novel task (transfer) without guidance. The results indicated a
substantial improvement in robot learning efficiency from the teacher's
demonstrations, with an improvement of up to 198% observed on the novel task.
Furthermore, compared to a state-of-the-art heuristic rule, the proposed
approach improved robot learning efficiency by 210%.
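To make the entropy criterion concrete, below is a minimal Python sketch of how such guidance could be computed over a discretized workspace: candidate start regions are ranked by the task model's predictive uncertainty, and the teacher is asked to demonstrate from the most uncertain one. The grid, the Gaussian variance model, and the function names (`gaussian_entropy`, `suggest_demo_region`, `toy_variance`) are illustrative assumptions, not the authors' implementation; the paper's AR system derives its entropy map from the actual learned task model.

```python
import numpy as np

# A minimal sketch (not the authors' code) of entropy-guided demonstration
# suggestion: rank candidate workspace regions by the task model's predictive
# uncertainty and ask the teacher to demonstrate from the most uncertain one.

def gaussian_entropy(variance):
    """Differential entropy of a Gaussian with the given variance."""
    return 0.5 * np.log(2.0 * np.pi * np.e * variance)

def suggest_demo_region(candidates, predictive_variance):
    """Return the candidate start position with the highest task-related entropy.

    candidates:          (N, 2) array of candidate positions in the workspace.
    predictive_variance: callable mapping the (N, 2) array to per-point
                         variance, e.g. from a GP or an ensemble of policies.
    """
    entropies = gaussian_entropy(predictive_variance(candidates))
    return candidates[np.argmax(entropies)], entropies

if __name__ == "__main__":
    # Toy stand-in for a learned task model: uncertainty grows with distance
    # from the two demonstrations already collected.
    demos = np.array([[0.2, 0.2], [0.8, 0.6]])

    def toy_variance(points):
        d = np.linalg.norm(points[:, None, :] - demos[None], axis=-1).min(axis=1)
        return 0.05 + d ** 2

    xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 20), np.linspace(0.0, 1.0, 20))
    grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
    best, _ = suggest_demo_region(grid, toy_variance)
    print("Suggest the next demonstration near:", best)
```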
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z)
- Affordance-Guided Reinforcement Learning via Visual Prompting [51.361977466993345]
Keypoint-based Affordance Guidance for Improvements (KAGI) is a method leveraging rewards shaped by vision-language models (VLMs) for autonomous RL.
On real-world manipulation tasks specified by natural language descriptions, KAGI improves the sample efficiency of autonomous RL and enables successful task completion in 20K online fine-tuning steps.
arXiv Detail & Related papers (2024-07-14T21:41:29Z)
- Augmented Reality Demonstrations for Scalable Robot Imitation Learning [25.026589453708347]
This paper presents an innovative solution: an Augmented Reality (AR)-assisted framework for demonstration collection.
We empower non-roboticist users to produce demonstrations for robot IL using devices like the HoloLens 2.
We validate our approach with experiments on three classical robotics tasks: reach, push, and pick-and-place.
arXiv Detail & Related papers (2024-03-20T18:30:12Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Continual Learning from Demonstration of Robotics Skills [5.573543601558405]
Methods for teaching motion skills to robots focus on training for a single skill at a time.
We propose an approach for continual learning from demonstration using hypernetworks and neural ordinary differential equation solvers.
arXiv Detail & Related papers (2022-02-14T16:26:52Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- Training Humans to Train Robots Dynamic Motor Skills [5.5586788751870175]
This paper investigates the use of machine teaching to derive an index for determining the quality of demonstrations.
Experiments with a simple learner robot suggest that guidance and training of teachers through the proposed approach can reduce the error in the learnt skill by up to 66.5%.
arXiv Detail & Related papers (2021-04-17T19:39:07Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate this interface on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- Interactive Robot Training for Non-Markov Tasks [6.252236971703546]
We propose a Bayesian interactive robot training framework that allows the robot to learn from both demonstrations provided by a teacher and the teacher's assessments of the robot's task executions.
We also present an active learning approach to identify the task execution with the most uncertain degree of acceptability.
We demonstrate the efficacy of our approach in a real-world setting through a user-study based on teaching a robot to set a dinner table.
arXiv Detail & Related papers (2020-03-04T18:19:05Z)
- Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)
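The entry above hinges on reusing a trial collected while attempting one task as a demonstration for another. The following is a minimal sketch of that relabeling step under assumed data structures; `Trial`, `MultiTaskDataset`, and the `classify_outcome` hook are hypothetical names standing in for the paper's actual pipeline and sparse supervision signal.

```python
from dataclasses import dataclass, field

# A minimal sketch (assumed structures, not the paper's pipeline) of reusing a
# robot's own trial as a demonstration for a task other than the attempted one.

@dataclass
class Trial:
    attempted_task: str      # the task the robot was asked to do
    states: list             # observed states along the trial
    actions: list            # actions taken along the trial

@dataclass
class MultiTaskDataset:
    demos: dict = field(default_factory=dict)   # task id -> list of trajectories

    def relabel_and_add(self, trial, classify_outcome):
        """File the trial under whichever task its final state actually solved.

        classify_outcome maps a final state to a task id (or None); it plays
        the role of the sparse supervision mentioned in the abstract.
        """
        achieved = classify_outcome(trial.states[-1])
        if achieved is not None:
            self.demos.setdefault(achieved, []).append((trial.states, trial.actions))
        return achieved

# Usage: a failed "pick_red" attempt that displaced the blue block still
# becomes a demonstration for a hypothetical "push_blue" task.
dataset = MultiTaskDataset()
trial = Trial("pick_red",
              states=[{"blue_moved": False}, {"blue_moved": True}],
              actions=["reach", "push"])
print(dataset.relabel_and_add(trial, lambda s: "push_blue" if s["blue_moved"] else None))
```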
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.