Interactive Robot Training for Non-Markov Tasks
- URL: http://arxiv.org/abs/2003.02232v2
- Date: Sat, 28 Nov 2020 17:03:57 GMT
- Title: Interactive Robot Training for Non-Markov Tasks
- Authors: Ankit Shah, Samir Wadhwania, Julie Shah
- Abstract summary: We propose a Bayesian interactive robot training framework that allows the robot to learn from both demonstrations provided by a teacher and that teacher's assessments of the robot's task executions.
We also present an active learning approach to identify the task execution with the most uncertain degree of acceptability.
We demonstrate the efficacy of our approach in a real-world setting through a user-study based on teaching a robot to set a dinner table.
- Score: 6.252236971703546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Defining sound and complete specifications for robots using formal languages
is challenging, while learning formal specifications directly from
demonstrations can lead to over-constrained task policies. In this paper, we
propose a Bayesian interactive robot training framework that allows the robot
to learn from both demonstrations provided by a teacher, and that teacher's
assessments of the robot's task executions. We also present an active learning
approach -- inspired by uncertainty sampling -- to identify the task execution
with the most uncertain degree of acceptability. Through a simulated
experiment, we demonstrate that our active learning approach identifies a
teacher's intended task specification with an equivalent or greater similarity
when compared to an approach that learns purely from demonstrations. Finally,
we demonstrate the efficacy of our approach in a real-world setting through a
user-study based on teaching a robot to set a dinner table.
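To make the query-selection rule concrete, below is a minimal Python sketch of uncertainty sampling over candidate executions. It assumes a posterior represented as weighted candidate specifications (simple boolean acceptance functions here) rather than the paper's Bayesian temporal-logic formulation; all names and the toy dinner-table specifications are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: uncertainty sampling over candidate executions.
# The specification representation and all names are assumptions, not the
# paper's Bayesian temporal-logic model.

def acceptability_probability(execution, posterior):
    """Probability that the teacher accepts `execution`, marginalised over a
    posterior given as (specification, weight) pairs, where a specification
    maps an execution to True/False."""
    return sum(weight for spec, weight in posterior if spec(execution))

def most_uncertain_execution(candidates, posterior):
    """Return the execution whose acceptability probability is closest to 0.5,
    i.e. the one the current posterior is least certain about."""
    return min(candidates,
               key=lambda e: abs(acceptability_probability(e, posterior) - 0.5))

# Toy example: executions are sequences of placed objects; the two candidate
# specifications disagree on whether the fork must be placed first.
posterior = [
    (lambda e: "fork" in e, 0.5),           # "a fork must appear somewhere"
    (lambda e: e and e[0] == "fork", 0.5),  # "the fork must be placed first"
]
candidates = [("fork", "plate"), ("plate", "fork"), ("plate",)]
print("Query the teacher about:", most_uncertain_execution(candidates, posterior))
```

The sketch only conveys the "closest to 0.5" selection rule described in the abstract; in the paper the posterior over specifications is inferred from both demonstrations and the teacher's assessments.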
Related papers
- Self-Explainable Affordance Learning with Embodied Caption [63.88435741872204]
We introduce Self-Explainable Affordance learning (SEA) with embodied caption.
SEA enables robots to articulate their intentions and bridge the gap between explainable vision-language caption and visual affordance learning.
We propose a novel model to effectively combine affordance grounding with self-explanation in a simple but efficient manner.
arXiv Detail & Related papers (2024-04-08T15:22:38Z)
- How Can Everyday Users Efficiently Teach Robots by Demonstrations? [3.6145826787059643]
We propose to use a measure of uncertainty, namely task-related information entropy, as a criterion for suggesting informative demonstration examples to human teachers.
The results indicated a substantial improvement in robot learning efficiency from the teacher's demonstrations. (An illustrative sketch of such an entropy-based selection rule appears after this list.)
arXiv Detail & Related papers (2023-10-19T18:21:39Z)
- Proactive Human-Robot Interaction using Visuo-Lingual Transformers [0.0]
Humans possess the innate ability to extract latent visuo-lingual cues to infer context through human interaction.
We propose a learning-based method that uses visual cues from the scene, lingual commands from a user and knowledge of prior object-object interaction to identify and proactively predict the underlying goal the user intends to achieve.
arXiv Detail & Related papers (2023-10-04T00:50:21Z)
- Continual Robot Learning using Self-Supervised Task Inference [19.635428830237842]
We propose a self-supervised task inference approach to continually learn new tasks.
We use a behavior-matching self-supervised learning objective to train a novel Task Inference Network (TINet).
A multi-task policy is built on top of the TINet and trained with reinforcement learning to optimize performance over tasks.
arXiv Detail & Related papers (2023-09-10T09:32:35Z)
- Learning Video-Conditioned Policies for Unseen Manipulation Tasks [83.2240629060453]
Video-conditioned Policy learning maps human demonstrations of previously unseen tasks to robot manipulation skills.
We learn our policy to generate appropriate actions given current scene observations and a video of the target task.
We validate our approach on a set of challenging multi-task robot manipulation environments and outperform the state of the art.
arXiv Detail & Related papers (2023-05-10T16:25:42Z)
- Dexterous Manipulation from Images: Autonomous Real-World RL via Substep Guidance [71.36749876465618]
We describe a system for vision-based dexterous manipulation that provides a "programming-free" approach for users to define new tasks.
Our system includes a framework for users to define a final task and intermediate sub-tasks with image examples.
We present experimental results with a four-finger robotic hand learning multi-stage object manipulation tasks directly in the real world.
arXiv Detail & Related papers (2022-12-19T22:50:40Z)
- Autonomous Assessment of Demonstration Sufficiency via Bayesian Inverse Reinforcement Learning [22.287031690633174]
We propose a novel self-assessment approach based on inverse reinforcement learning and value-at-risk.
We show that our approach successfully enables robots to perform at users' desired performance levels.
arXiv Detail & Related papers (2022-11-28T16:48:24Z)
- Summarizing a virtual robot's past actions in natural language [0.3553493344868413]
We show how a popular dataset that matches robot actions with natural language descriptions, originally designed for an instruction-following task, can be repurposed as a training ground for robot action summarization work.
We propose and test several methods of learning to generate such summaries, starting from either egocentric video frames of the robot taking actions or intermediate text representations of the actions used by an automatic planner.
arXiv Detail & Related papers (2022-03-13T15:00:46Z)
- BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning [108.41464483878683]
We study the problem of enabling a vision-based robotic manipulation system to generalize to novel tasks.
We develop an interactive and flexible imitation learning system that can learn from both demonstrations and interventions.
When scaling data collection on a real robot to more than 100 distinct tasks, we find that this system can perform 24 unseen manipulation tasks with an average success rate of 44%.
arXiv Detail & Related papers (2022-02-04T07:30:48Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- Scalable Multi-Task Imitation Learning with Autonomous Improvement [159.9406205002599]
We build an imitation learning system that can continuously improve through autonomous data collection.
We leverage the robot's own trials as demonstrations for tasks other than the one that the robot actually attempted.
In contrast to prior imitation learning approaches, our method can autonomously collect data with sparse supervision for continuous improvement.
arXiv Detail & Related papers (2020-02-25T18:56:42Z)
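As a companion to the task-related information entropy criterion mentioned in the "How Can Everyday Users Efficiently Teach Robots by Demonstrations?" entry above, here is a minimal, hypothetical Python sketch of picking the candidate situation whose predicted action distribution has the highest entropy. The state names, toy predictor, and function names are assumptions for illustration and may not match that paper's actual measure.

```python
# Illustrative sketch: suggest a demonstration where the learner is most
# unsure what to do, i.e. where predictive entropy is highest.
# All names and structures are assumptions, not the cited paper's code.
import math

def entropy(probabilities):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def suggest_demonstration(candidate_states, predict_action_distribution):
    """Return the state whose predicted action distribution has maximum
    entropy; a teacher demonstration there is most informative under this
    simplified criterion."""
    return max(candidate_states,
               key=lambda s: entropy(predict_action_distribution(s)))

# Toy usage with a hard-coded predictor over three possible actions.
def toy_predictor(state):
    table = {
        "table_empty":  [0.34, 0.33, 0.33],  # learner unsure -> high entropy
        "plate_placed": [0.80, 0.10, 0.10],
        "fully_set":    [0.98, 0.01, 0.01],
    }
    return table[state]

print(suggest_demonstration(["table_empty", "plate_placed", "fully_set"],
                            toy_predictor))
```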
This list is automatically generated from the titles and abstracts of the papers on this site.