Out-of-Dynamics Imitation Learning from Multimodal Demonstrations
- URL: http://arxiv.org/abs/2211.06839v1
- Date: Sun, 13 Nov 2022 07:45:06 GMT
- Title: Out-of-Dynamics Imitation Learning from Multimodal Demonstrations
- Authors: Yiwen Qiu, Jialong Wu, Zhangjie Cao, Mingsheng Long
- Abstract summary: We study out-of-dynamics imitation learning (OOD-IL), which relaxes this assumption: the demonstrator and the imitator need only share the same state space and may have different action spaces and dynamics.
OOD-IL enables imitation learning to utilize demonstrations from a wide range of demonstrators but introduces a new challenge.
We develop a better transferability measurement to tackle this newly emerged challenge.
- Score: 68.46458026983409
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing imitation learning works mainly assume that the demonstrator who
collects demonstrations shares the same dynamics as the imitator. However, the
assumption limits the usage of imitation learning, especially when collecting
demonstrations for the imitator is difficult. In this paper, we study
out-of-dynamics imitation learning (OOD-IL), which relaxes this assumption:
the demonstrator and the imitator need only share the same state space and may
have different action spaces and dynamics. OOD-IL enables imitation learning to
utilize demonstrations from a wide range of demonstrators but introduces a new
challenge: some demonstrations cannot be achieved by the imitator due to the
different dynamics. Prior works try to filter out such demonstrations by
feasibility measurements, but ignore the fact that the demonstrations exhibit a
multimodal distribution, since different demonstrators may follow different
policies under different dynamics. We develop a better transferability measurement
to tackle this newly emerged challenge. We first design a novel
sequence-based contrastive clustering algorithm to cluster demonstrations from
the same mode to avoid the mutual interference of demonstrations from different
modes, and then learn the transferability of each demonstration with an
adversarial-learning based algorithm in each cluster. Experiment results on
several MuJoCo environments, a driving environment, and a simulated robot
environment show that the proposed transferability measurement more accurately
finds and down-weights non-transferable demonstrations and outperforms prior
works on the final imitation learning performance. We show the videos of our
experiment results on our website.
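The abstract's two-stage pipeline (cluster demonstrations by mode, then score transferability within each cluster) can be illustrated with a highly simplified, hypothetical sketch. Here mean-pooled state embeddings and a distance-based score stand in for the paper's trained contrastive sequence encoder and adversarial discriminator; all function names are invented for illustration and this is not the authors' implementation:

```python
import numpy as np

def embed(seq):
    # Toy stand-in for the paper's sequence encoder: mean-pool the
    # state sequence into a fixed-size vector. (The actual method
    # trains this encoder with a sequence-based contrastive objective.)
    return seq.mean(axis=0)

def kmeans(X, k, iters=50):
    # Minimal k-means to group demonstration embeddings by mode,
    # using deterministic farthest-point initialization.
    centers = [X[0]]
    for _ in range(1, k):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.stack(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def transferability_weights(demo_embs, imitator_embs, labels):
    # Per-cluster score: demonstrations whose cluster lies close to the
    # imitator's own experience get higher weight. The paper instead
    # trains an adversarial discriminator inside each cluster; this
    # distance-based proxy only illustrates the down-weighting idea.
    ref = imitator_embs.mean(axis=0)
    weights = np.empty(len(demo_embs))
    for j in np.unique(labels):
        d = np.linalg.norm(demo_embs[labels == j].mean(axis=0) - ref)
        weights[labels == j] = np.exp(-d)
    return weights / weights.max()
```

Clustering first, as in the paper, keeps demonstrations from different modes from interfering with each other's transferability estimates; the proxy above preserves only the overall shape of that pipeline.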
Related papers
- Skill Disentanglement for Imitation Learning from Suboptimal Demonstrations [60.241144377865716]
We consider the imitation of sub-optimal demonstrations, with both a small clean demonstration set and a large noisy set.
We propose a method that evaluates and imitates at the sub-demonstration level, encoding action primitives of varying quality into different skills.
arXiv Detail & Related papers (2023-06-13T17:24:37Z)
- Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation [66.86987509942607]
We evaluate how such a paradigm should be done in imitation learning.
We consider a setting where the pretraining corpus consists of multitask demonstrations.
We argue that inverse dynamics modeling is well-suited to this setting.
arXiv Detail & Related papers (2023-05-26T14:40:46Z)
- Leveraging Demonstrations with Latent Space Priors [90.56502305574665]
We propose to leverage demonstration datasets by combining skill learning and sequence modeling.
We show how to acquire such priors from state-only motion capture demonstrations and explore several methods for integrating them into policy learning.
Our experimental results confirm that latent space priors provide significant gains in learning speed and final performance in a set of challenging sparse-reward environments.
arXiv Detail & Related papers (2022-10-26T13:08:46Z)
- Robustness of Demonstration-based Learning Under Limited Data Scenario [54.912936555876826]
Demonstration-based learning has shown great potential in stimulating pretrained language models' ability under limited-data scenarios.
Why such demonstrations are beneficial for the learning process remains unclear since there is no explicit alignment between the demonstrations and the predictions.
In this paper, we design pathological demonstrations by gradually removing intuitively useful information from the standard ones to take a deep dive into the robustness of demonstration-based sequence labeling.
arXiv Detail & Related papers (2022-10-19T16:15:04Z)
- Eliciting Compatible Demonstrations for Multi-Human Imitation Learning [16.11830547863391]
Imitation learning from human-provided demonstrations is a strong approach for learning policies for robot manipulation.
Natural human behavior has a great deal of heterogeneity, with several optimal ways to demonstrate a task.
This mismatch presents a problem for interactive imitation learning, where sequences of users improve on a policy by iteratively collecting new, possibly conflicting demonstrations.
We show that we can both identify incompatible demonstrations via post-hoc filtering, and apply our compatibility measure to actively elicit compatible demonstrations from new users.
arXiv Detail & Related papers (2022-10-14T19:37:55Z)
- Extraneousness-Aware Imitation Learning [25.60384350984274]
Extraneousness-Aware Learning (EIL) learns visuomotor policies from third-person demonstrations with extraneous subsequences.
EIL learns action-conditioned observation embeddings in a self-supervised manner and retrieves task-relevant observations across visual demonstrations.
Experimental results show that EIL outperforms strong baselines and achieves policies comparable to those trained with perfect demonstrations.
arXiv Detail & Related papers (2022-10-04T04:42:26Z)
- Learning Feasibility to Imitate Demonstrators with Different Dynamics [23.239058855103067]
The goal of learning from demonstrations is to learn a policy for an agent (imitator) by mimicking the behavior in the demonstrations.
We learn a feasibility metric that captures the likelihood of a demonstration being feasible for the imitator.
Our experiments on four simulated environments and on a real robot show that the policy learned with our approach achieves a higher expected return than prior works.
arXiv Detail & Related papers (2021-10-28T14:15:47Z)
- Learning from Imperfect Demonstrations from Agents with Varying Dynamics [29.94164262533282]
We develop a metric composed of a feasibility score and an optimality score to measure how useful a demonstration is for imitation learning.
Our experiments on four environments in simulation and on a real robot show improved learned policies with higher expected return.
arXiv Detail & Related papers (2021-03-10T07:39:38Z)
- Reinforcement Learning with Supervision from Noisy Demonstrations [38.00968774243178]
We propose a novel framework to adaptively learn the policy by jointly interacting with the environment and exploiting the expert demonstrations.
Experimental results in various environments with multiple popular reinforcement learning algorithms show that the proposed approach can learn robustly with noisy demonstrations.
arXiv Detail & Related papers (2020-06-14T06:03:06Z)
- State-Only Imitation Learning for Dexterous Manipulation [63.03621861920732]
In this paper, we explore state-only imitation learning.
We train an inverse dynamics model and use it to predict actions for state-only demonstrations.
Our method performs on par with state-action approaches and considerably outperforms RL alone.
arXiv Detail & Related papers (2020-04-07T17:57:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.