Curriculum-Based Imitation of Versatile Skills
- URL: http://arxiv.org/abs/2304.05171v1
- Date: Tue, 11 Apr 2023 12:10:41 GMT
- Title: Curriculum-Based Imitation of Versatile Skills
- Authors: Maximilian Xiling Li, Onur Celik, Philipp Becker, Denis Blessing,
Rudolf Lioutikov, Gerhard Neumann
- Abstract summary: Learning skills by imitation is a promising concept for the intuitive teaching of robots.
A common way to learn such skills is to learn a parametric model by maximizing the likelihood given the demonstrations.
Yet, human demonstrations are often multi-modal, i.e., the same task is solved in multiple ways.
- Score: 15.97723808124603
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Learning skills by imitation is a promising concept for the intuitive
teaching of robots. A common way to learn such skills is to learn a parametric
model by maximizing the likelihood given the demonstrations. Yet, human
demonstrations are often multi-modal, i.e., the same task is solved in multiple
ways, which is a major challenge for most imitation learning methods that are
based on such a maximum likelihood (ML) objective. Because the ML objective forces the
model to cover all data, it prevents specialization in the context space and
can cause mode-averaging in the behavior space, leading to suboptimal or
potentially catastrophic behavior. Here, we alleviate those issues by
introducing a curriculum using a weight for each data point, allowing the model
to specialize on data it can represent while incentivizing it to cover as much
data as possible by an entropy bonus. We extend our algorithm to a Mixture of
(linear) Experts (MoE) such that the single components can specialize on local
context regions, while the MoE covers all data points. We evaluate our approach
in complex simulated and real robot control tasks and show it learns from
versatile human demonstrations and significantly outperforms current SOTA
methods. A reference implementation can be found at
https://github.com/intuitive-robots/ml-cur
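The core idea from the abstract, a per-data-point curriculum weight traded off against an entropy bonus, can be illustrated with a minimal sketch. This is not the authors' ML-Cur implementation: the EM-style alternation, the closed-form softmax weight update, the temperature `eta`, and the single-Gaussian model are all simplifying assumptions made here for illustration.

```python
import numpy as np

def curriculum_fit(x, eta=0.2, iters=50):
    # Illustrative sketch (not the authors' exact algorithm): alternate
    # between (1) closed-form curriculum weights that trade off per-point
    # likelihood against an entropy bonus, and (2) a weighted maximum-
    # likelihood update of a single Gaussian model.
    mu, var = x.mean(), x.var()
    n = len(x)
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        # per-point log-likelihood under the current model
        ll = -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        # entropy-regularized weights: softmax(ll / eta); a larger eta
        # pushes the weights toward uniform, i.e. more data coverage
        z = ll / eta
        w = np.exp(z - z.max())
        w /= w.sum()
        # weighted maximum-likelihood update of the model parameters
        mu = float(np.sum(w * x))
        var = float(np.sum(w * (x - mu) ** 2)) + 1e-8
    return mu, var, w

# bimodal data: a unimodal model fit by plain ML would mode-average near 0
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(2, 0.3, 200)])
mu, var, w = curriculum_fit(x)
```

With a small `eta` the weights concentrate on a subset of the data the model can represent well, so the single Gaussian specializes instead of covering both modes; in the full method a mixture of experts with an entropy objective would keep all modes covered.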
Related papers
- EquiBot: SIM(3)-Equivariant Diffusion Policy for Generalizable and Data Efficient Learning [36.0274770291531]
We propose EquiBot, a robust, data-efficient, and generalizable approach for robot manipulation task learning.
Our approach combines SIM(3)-equivariant neural network architectures with diffusion models.
We show that our method can easily generalize to novel objects and scenes after learning from just 5 minutes of human demonstrations in each task.
arXiv Detail & Related papers (2024-07-01T17:09:43Z)
- Video-Language Critic: Transferable Reward Functions for Language-Conditioned Robotics [25.2461925479135]
Video-Language Critic is a reward model that can be trained on readily available cross-embodiment data.
Our model enables 2x more sample-efficient policy training on Meta-World tasks than a sparse reward alone.
arXiv Detail & Related papers (2024-05-30T12:18:06Z)
- Any-point Trajectory Modeling for Policy Learning [64.23861308947852]
We introduce Any-point Trajectory Modeling (ATM) to predict future trajectories of arbitrary points within a video frame.
ATM outperforms strong video pre-training baselines by 80% on average.
We show effective transfer learning of manipulation skills from human videos and videos from a different robot morphology.
arXiv Detail & Related papers (2023-12-28T23:34:43Z)
- Exploring intra-task relations to improve meta-learning algorithms [1.223779595809275]
We aim to exploit external knowledge of task relations to improve training stability via effective mini-batching of tasks.
We hypothesize that selecting a diverse set of tasks in a mini-batch will lead to a better estimate of the full gradient and hence will lead to a reduction of noise in training.
arXiv Detail & Related papers (2023-12-27T15:33:52Z)
- Learn to Unlearn for Deep Neural Networks: Minimizing Unlearning Interference with Gradient Projection [56.292071534857946]
Recent data-privacy laws have sparked interest in machine unlearning.
The challenge is to discard information about the "forget" data without altering knowledge about the remaining dataset.
We adopt a projected-gradient-based learning method, named Projected-Gradient Unlearning (PGU).
We provide empirical evidence that our unlearning method produces models that behave similarly to models retrained from scratch across various metrics, even when the training dataset is no longer accessible.
arXiv Detail & Related papers (2023-12-07T07:17:24Z)
- Transferring Foundation Models for Generalizable Robotic Manipulation [82.12754319808197]
We propose a novel paradigm that effectively leverages language-reasoning segmentation masks generated by internet-scale foundation models.
Our approach can effectively and robustly perceive object pose and enable sample-efficient generalization learning.
Demos can be found in our submitted video, and more comprehensive ones can be found in link1 or link2.
arXiv Detail & Related papers (2023-06-09T07:22:12Z)
- Information Maximizing Curriculum: A Curriculum-Based Approach for Imitating Diverse Skills [14.685043874797742]
We propose a curriculum-based approach that assigns a weight to each data point and encourages the model to specialize in the data it can represent.
To cover all modes and thus enable diverse behavior, we extend our approach to a mixture of experts (MoE) policy, where each mixture component selects its own subset of the training data for learning.
A novel, maximum entropy-based objective is proposed to achieve full coverage of the dataset, thereby enabling the policy to encompass all modes within the data distribution.
arXiv Detail & Related papers (2023-03-27T16:02:50Z)
- Masked Imitation Learning: Discovering Environment-Invariant Modalities in Multimodal Demonstrations [37.33625951008865]
Extraneous data modalities can lead to state over-specification.
State over-specification leads to issues such as the learned policy not generalizing outside of the training data distribution.
We develop a bi-level optimization algorithm that learns this mask to accurately filter over-specified modalities.
arXiv Detail & Related papers (2022-09-16T02:45:13Z)
- What Matters in Learning from Offline Human Demonstrations for Robot Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z)
- Model-Based Visual Planning with Self-Supervised Functional Distances [104.83979811803466]
We present a self-supervised method for model-based visual goal reaching.
Our approach learns entirely using offline, unlabeled data.
We find that this approach substantially outperforms both model-free and model-based prior methods.
arXiv Detail & Related papers (2020-12-30T23:59:09Z)
- COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning [78.13740204156858]
We show that we can reuse prior data to extend new skills simply through dynamic programming.
arXiv Detail & Related papers (2020-10-27T17:57:29Z)
We demonstrate the effectiveness of our approach by chaining together several behaviors seen in prior datasets for solving a new task.
We train our policies in an end-to-end fashion, mapping high-dimensional image observations to low-level robot control commands.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.