State-Only Imitation Learning for Dexterous Manipulation
- URL: http://arxiv.org/abs/2004.04650v2
- Date: Wed, 29 Dec 2021 18:31:18 GMT
- Title: State-Only Imitation Learning for Dexterous Manipulation
- Authors: Ilija Radosavovic, Xiaolong Wang, Lerrel Pinto, Jitendra Malik
- Abstract summary: In this paper, we explore state-only imitation learning.
We train an inverse dynamics model and use it to predict actions for state-only demonstrations.
Our method performs on par with state-action approaches and considerably outperforms RL alone.
- Score: 63.03621861920732
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern model-free reinforcement learning methods have recently demonstrated
impressive results on a number of problems. However, complex domains like
dexterous manipulation remain a challenge due to the high sample complexity. To
address this, current approaches employ expert demonstrations in the form of
state-action pairs, which are difficult to obtain for real-world settings such
as learning from videos. In this paper, we move toward a more realistic setting
and explore state-only imitation learning. To tackle this setting, we train an
inverse dynamics model and use it to predict actions for state-only
demonstrations. The inverse dynamics model and the policy are trained jointly.
Our method performs on par with state-action approaches and considerably
outperforms RL alone. By not relying on expert actions, we are able to learn
from demonstrations with different dynamics, morphologies, and objects. Videos
available at https://people.eecs.berkeley.edu/~ilija/soil .
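The mechanism lends itself to a short sketch. Below is a minimal PyTorch illustration of the idea, not the authors' released code: an inverse dynamics model is fit on transitions the policy itself generates, then used to annotate the state-only demonstrations with pseudo-actions that a behavior-cloning or demo-augmented RL objective can consume. The class and function names, network sizes, and the MSE objective are illustrative assumptions.

```python
import torch
import torch.nn as nn

class InverseDynamicsModel(nn.Module):
    """Predicts the action that takes the agent from s_t to s_{t+1}."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s, s_next):
        return self.net(torch.cat([s, s_next], dim=-1))

def idm_update(idm, optimizer, s, a, s_next):
    """Fit the inverse model on (s, a, s') transitions the policy collected."""
    loss = nn.functional.mse_loss(idm(s, s_next), a)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def label_state_only_demo(idm, demo_states):
    """Annotate a state-only demonstration (T, state_dim) with pseudo-actions."""
    with torch.no_grad():
        return idm(demo_states[:-1], demo_states[1:])  # (T-1, action_dim)
```

Since the paper trains the inverse model and the policy jointly, the relabeling step would alternate with policy updates, so the pseudo-actions improve as the policy visits states closer to the demonstrations.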
Related papers
- Any-point Trajectory Modeling for Policy Learning [64.23861308947852]
We introduce Any-point Trajectory Modeling (ATM) to predict future trajectories of arbitrary points within a video frame.
ATM outperforms strong video pre-training baselines by 80% on average.
We show effective transfer learning of manipulation skills from human videos and videos from a different robot morphology.
arXiv Detail & Related papers (2023-12-28T23:34:43Z)
- Skill Disentanglement for Imitation Learning from Suboptimal Demonstrations [60.241144377865716]
We consider the imitation of sub-optimal demonstrations, with both a small clean demonstration set and a large noisy set.
We propose a method that evaluates and imitates at the sub-demonstration level, encoding action primitives of varying quality into different skills.
arXiv Detail & Related papers (2023-06-13T17:24:37Z)
- Inverse Dynamics Pretraining Learns Good Representations for Multitask Imitation [66.86987509942607]
We evaluate how the large-scale pretraining paradigm should be carried out in imitation learning.
We consider a setting where the pretraining corpus consists of multitask demonstrations.
We argue that inverse dynamics modeling is well-suited to this setting.
arXiv Detail & Related papers (2023-05-26T14:40:46Z)
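One plausible reading of the recipe in the entry above, sketched below under stated assumptions (the encoder/head architecture and the MSE objective are ours, not the paper's): pretrain a shared observation encoder with an inverse dynamics objective on the multitask corpus, so the representation must retain the action-relevant state information, then reuse the encoder for downstream imitation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps raw observations to a compact representation."""
    def __init__(self, obs_dim, rep_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                 nn.Linear(256, rep_dim))

    def forward(self, obs):
        return self.net(obs)

class InverseHead(nn.Module):
    """Predicts the action connecting two encoded observations."""
    def __init__(self, rep_dim, action_dim):
        super().__init__()
        self.net = nn.Linear(2 * rep_dim, action_dim)

    def forward(self, z, z_next):
        return self.net(torch.cat([z, z_next], dim=-1))

def pretrain_step(encoder, head, optimizer, obs, act, obs_next):
    """Inverse dynamics pretraining: the encoder is pushed to keep exactly
    the action-relevant state information, which later transfers to imitation."""
    loss = nn.functional.mse_loss(head(encoder(obs), encoder(obs_next)), act)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```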
- Out-of-Dynamics Imitation Learning from Multimodal Demonstrations [68.46458026983409]
We study out-of-dynamics imitation learning (OOD-IL), which relaxes the usual assumption that the demonstrator and the imitator share the same dynamics, requiring only that they share the same state space.
OOD-IL enables imitation learning to utilize demonstrations from a wide range of demonstrators but introduces a new challenge.
We develop a better transferability measurement to tackle this challenge.
arXiv Detail & Related papers (2022-11-13T07:45:06Z)
- Robust Imitation of a Few Demonstrations with a Backwards Model [3.8530020696501794]
Behavior cloning of expert demonstrations can learn optimal policies with far fewer samples than reinforcement learning, but the cloned policy can drift into states outside the demonstration data where it has no guidance.
We tackle this issue by extending the region of attraction around the demonstrations, so that the agent learns how to get back onto the demonstrated trajectories if it veers off-course.
With optimal or near-optimal demonstrations, the learned policy will be both optimal and robust to deviations, with a wider region of attraction.
arXiv Detail & Related papers (2022-10-17T18:02:19Z)
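A hedged sketch of the mechanism in the entry above: a learned backwards model proposes predecessor states and connecting actions, so rolling it backward from demonstration states synthesizes trajectories that funnel into the demo, widening the region of attraction. The one-step deterministic reverse model below is an assumption for illustration; the paper's exact generative formulation may differ.

```python
import torch
import torch.nn as nn

class BackwardsModel(nn.Module):
    """Given a state, proposes a predecessor state and the connecting action."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.state_dim = state_dim
        self.net = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, state_dim + action_dim))

    def forward(self, s_next):
        out = self.net(s_next)
        return out[..., :self.state_dim], out[..., self.state_dim:]

def augment_demo(back_model, demo_states, branch_len=5):
    """Roll backward from each demonstration state to synthesize (state, action)
    pairs that funnel into the demo; cloning on them widens the region of
    attraction, so an off-course agent is steered back to the trajectory."""
    pairs = []
    with torch.no_grad():
        for s in demo_states:
            cur = s
            for _ in range(branch_len):
                prev, act = back_model(cur)
                pairs.append((prev, act))  # target: take `act` when in `prev`
                cur = prev
    return pairs
```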
- Extraneousness-Aware Imitation Learning [25.60384350984274]
Extraneousness-Aware Imitation Learning (EIL) learns visuomotor policies from third-person demonstrations that contain extraneous, task-irrelevant subsequences.
EIL learns action-conditioned observation embeddings in a self-supervised manner and retrieves task-relevant observations across visual demonstrations.
Experimental results show that EIL outperforms strong baselines and achieves policies comparable to those trained with perfect demonstrations.
arXiv Detail & Related papers (2022-10-04T04:42:26Z)
- Visual Adversarial Imitation Learning using Variational Models [60.69745540036375]
Reward function specification remains a major impediment for learning behaviors through deep reinforcement learning.
Visual demonstrations of desired behaviors often present an easier and more natural way to teach agents.
We develop a variational model-based adversarial imitation learning algorithm.
arXiv Detail & Related papers (2021-07-16T00:15:18Z)
- Learning from Imperfect Demonstrations from Agents with Varying Dynamics [29.94164262533282]
We develop a metric composed of a feasibility score and an optimality score to measure how useful a demonstration is for imitation learning.
Our experiments on four environments in simulation and on a real robot show improved learned policies with higher expected return.
arXiv Detail & Related papers (2021-03-10T07:39:38Z)
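A sketch of how such per-demonstration scores could enter training, with the caveat that the product combination and the MSE cloning loss below are illustrative stand-ins for the paper's learned feasibility and optimality measures.

```python
import torch

def weighted_bc_loss(policy, demos, feasibility, optimality):
    """Behavior cloning in which each demonstration is weighted by how feasible
    it is under the imitator's dynamics and how optimal it is for the task.

    demos:       list of (states, actions) tensor pairs, one per demonstration
    feasibility: per-demo scores in [0, 1]
    optimality:  per-demo scores in [0, 1]
    """
    weights = feasibility * optimality      # illustrative combination
    weights = weights / weights.sum()       # normalize over the demo set
    loss = 0.0
    for w, (states, actions) in zip(weights, demos):
        loss = loss + w * torch.nn.functional.mse_loss(policy(states), actions)
    return loss
```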
- Shaping Rewards for Reinforcement Learning with Imperfect Demonstrations using Generative Models [18.195406135434503]
We propose a method that combines reinforcement and imitation learning by shaping the reward function with a state-and-action-dependent potential.
We show that this accelerates policy learning by specifying high-value areas of the state and action space that are worth exploring first.
In particular, we examine both normalizing flows and Generative Adversarial Networks to represent these potentials.
arXiv Detail & Related papers (2020-11-02T20:32:05Z)
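The shaping rule in the entry above is compact enough to state directly. A minimal sketch, assuming a callable `potential` standing in for the state-and-action-dependent potential (in the paper, represented by a normalizing flow or GAN trained on demonstrations); the signature is illustrative.

```python
def shaped_reward(r, s, a, s_next, a_next, potential, gamma=0.99):
    """Potential-based shaping with a state-and-action-dependent potential:

        r'(s, a) = r(s, a) + gamma * phi(s', a') - phi(s, a)

    Here `potential` stands in for phi, e.g. the demonstration log-density
    under a normalizing flow or a GAN-derived score, so transitions moving
    toward demonstration-like state-action regions receive extra reward.
    """
    return r + gamma * potential(s_next, a_next) - potential(s, a)
```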