Imitation Learning: Progress, Taxonomies and Opportunities
- URL: http://arxiv.org/abs/2106.12177v1
- Date: Wed, 23 Jun 2021 05:55:33 GMT
- Title: Imitation Learning: Progress, Taxonomies and Opportunities
- Authors: Boyuan Zheng, Sunny Verma, Jianlong Zhou, Ivor Tsang, Fang Chen
- Abstract summary: Imitation learning aims to extract knowledge from human experts' demonstrations or artificially created agents in order to replicate their behaviors.
Its success has been demonstrated in areas such as video games, autonomous driving, robotic simulations and object manipulation.
Most trained agents are limited to performing well in task-specific environments.
- Score: 8.362917578701563
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Imitation learning aims to extract knowledge from human experts'
demonstrations or artificially created agents in order to replicate their
behaviors. Its success has been demonstrated in areas such as video games,
autonomous driving, robotic simulations, and object manipulation. However, this
replication process can be problematic: performance is highly dependent on
demonstration quality, and most trained agents are limited to performing well
only in task-specific environments. In this survey, we provide a systematic
review of imitation learning. We first introduce background knowledge, covering
the field's development history and preliminaries, then present the different
taxonomies within imitation learning and key milestones of the field. We then
detail challenges in learning strategies and present research opportunities in
learning policies from suboptimal demonstrations, voice instructions, and other
associated optimization schemes.
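To make the core idea concrete, here is a minimal behavioral-cloning sketch (a generic illustration, not a method from this survey): a policy is fit to expert state-action pairs by supervised regression. The synthetic expert data and the linear policy are illustrative assumptions.

```python
# Minimal behavioral cloning: imitate an expert by regressing
# actions on states. Data and policy class are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical expert demonstrations: states -> continuous actions.
states = rng.normal(size=(1000, 4))                 # 1000 states, 4 features
expert_actions = states @ rng.normal(size=(4, 2))   # unknown expert mapping

# Fit a linear policy by least squares: min_W ||states @ W - actions||^2.
W, *_ = np.linalg.lstsq(states, expert_actions, rcond=None)

def policy(state):
    """Imitate the expert: predict its action for a new state."""
    return state @ W

new_state = rng.normal(size=4)
print("imitated action:", policy(new_state))
```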
Related papers
- Exploring CausalWorld: Enhancing robotic manipulation via knowledge transfer and curriculum learning [6.683222869973898]
This study explores a learning-based tri-finger robotic arm manipulation task that requires complex movements and coordination among the fingers.
By employing reinforcement learning, we train an agent to acquire the skills necessary for proficient manipulation.
Two knowledge transfer strategies, fine-tuning and curriculum learning, are utilized within the soft actor-critic architecture.
arXiv Detail & Related papers (2024-03-25T23:19:19Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar to, but potentially even more practical than, those of interactive imitation learning.
Our proposed method uses reinforcement learning with the user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning must be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
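A minimal sketch of this relabeling idea, assuming each logged step records whether the human intervened; the data structures and field names here are hypothetical, not the paper's API.

```python
# Sketch of the RLIF idea: treat "the human intervened" as a reward
# signal for off-policy RL, instead of assuming the expert is near-optimal.
from dataclasses import dataclass

@dataclass
class Transition:
    state: tuple
    action: int
    next_state: tuple
    intervened: bool  # did the human take over at this step?

def to_rl_transitions(log):
    """Relabel interactive-imitation logs with intervention-based rewards:
    -1 where the expert intervened, 0 otherwise. The resulting
    (s, a, r, s') tuples can feed any off-policy RL algorithm."""
    return [
        (t.state, t.action, -1.0 if t.intervened else 0.0, t.next_state)
        for t in log
    ]

log = [
    Transition((0, 0), 1, (0, 1), intervened=False),
    Transition((0, 1), 0, (1, 1), intervened=True),  # human took over here
]
print(to_rl_transitions(log))
```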
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- Learning to Discern: Imitating Heterogeneous Human Demonstrations with Preference and Representation Learning [12.4468604987226]
This paper introduces Learning to Discern (L2D), an offline imitation learning framework for learning from demonstrations with diverse quality and style.
We show that L2D can effectively assess and learn from demonstrations of varying quality, thereby improving policy performance across a range of tasks, both in simulation and on a physical robot.
arXiv Detail & Related papers (2023-10-22T06:08:55Z)
- A Survey of Imitation Learning: Algorithms, Recent Developments, and Challenges [9.288673880680033]
Imitation learning (IL) is a process where desired behavior is learned by imitating an expert's behavior.
This paper aims to provide an introduction to IL and an overview of its underlying assumptions and approaches.
It also offers a detailed description of recent advances and emerging areas of research in the field.
arXiv Detail & Related papers (2023-09-05T11:56:07Z)
- Skill Disentanglement for Imitation Learning from Suboptimal Demonstrations [60.241144377865716]
We consider imitation from suboptimal demonstrations, given both a small clean demonstration set and a large noisy set.
We propose a method that evaluates and imitates at the sub-demonstration level, encoding action primitives of varying quality into different skills.
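A rough sketch of what sub-demonstration-level filtering could look like, assuming fixed-length primitives and a nearest-neighbour quality score against the clean set; both choices are illustrative stand-ins, not the paper's method.

```python
# Split noisy demos into fixed-length primitives, score each against a
# small clean set, and keep only high-quality primitives for imitation.
import numpy as np

rng = np.random.default_rng(1)
SEG = 5  # primitive length (assumed)

def primitives(demo, seg=SEG):
    return [demo[i:i + seg] for i in range(0, len(demo) - seg + 1, seg)]

clean = [rng.normal(size=(20, 3)) for _ in range(3)]       # small clean set
noisy = [rng.normal(size=(40, 3)) * 2 for _ in range(10)]  # large noisy set

clean_prims = [p for d in clean for p in primitives(d)]

def quality(prim):
    """Higher when the primitive resembles some clean primitive."""
    return -min(np.linalg.norm(prim - c) for c in clean_prims)

noisy_prims = [p for d in noisy for p in primitives(d)]
ranked = sorted(noisy_prims, key=quality, reverse=True)
kept = ranked[: len(ranked) // 4]  # imitate only the top quarter
print(f"kept {len(kept)} of {len(noisy_prims)} primitives")
```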
arXiv Detail & Related papers (2023-06-13T17:24:37Z)
- A Survey of Demonstration Learning [0.0]
Demonstration Learning is a paradigm in which an agent learns to perform a task by imitating the behavior of an expert shown in demonstrations.
It is gaining significant traction due to its tremendous potential for learning complex behaviors from demonstrations.
Because the agent learns without interacting with the environment, demonstration learning could enable the automation of a wide range of real-world applications such as robotics and healthcare.
arXiv Detail & Related papers (2023-03-20T15:22:10Z)
- Procedure Planning in Instructional Videos via Contextual Modeling and Model-based Policy Learning [114.1830997893756]
This work focuses on learning a model to plan goal-directed actions in real-life videos.
We propose novel algorithms to model human behaviors through Bayesian inference and model-based imitation learning.
arXiv Detail & Related papers (2021-10-05T01:06:53Z)
- Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation [55.31301153979621]
We tackle real-world long-horizon robot manipulation tasks through skill discovery.
We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations.
Our method has shown superior performance over state-of-the-art imitation learning methods in multi-stage manipulation tasks.
arXiv Detail & Related papers (2021-09-28T16:18:54Z)
- What Matters in Learning from Offline Human Demonstrations for Robot Manipulation [64.43440450794495]
We conduct an extensive study of six offline learning algorithms for robot manipulation.
Our study analyzes the most critical challenges when learning from offline human data.
We highlight opportunities for learning from human datasets.
arXiv Detail & Related papers (2021-08-06T20:48:30Z)
- Active Hierarchical Imitation and Reinforcement Learning [0.0]
In this project, we explored different imitation learning algorithms and designed active learning algorithms on top of the hierarchical imitation and reinforcement learning framework we developed.
Our experimental results showed that DAgger combined with a reward-based active learning method achieves better performance while reducing the physical and mental effort required from humans during training.
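For reference, a minimal DAgger-style loop as commonly described: roll out the current policy, have the expert relabel the visited states, aggregate, and retrain. The expert labeller, the stand-in rollout states, and the nearest-neighbour policy are illustrative assumptions.

```python
# Minimal DAgger loop: aggregate expert labels on states the learner visits.
import numpy as np

rng = np.random.default_rng(2)

def expert(state):                 # hypothetical expert labeller
    return 1 if state.sum() > 0 else 0

def train(dataset):                # 1-nearest-neighbour "policy" for brevity
    states, actions = map(np.array, zip(*dataset))
    def policy(s):
        return actions[np.argmin(np.linalg.norm(states - s, axis=1))]
    return policy

dataset = [(s, expert(s)) for s in rng.normal(size=(20, 3))]  # initial demos
policy = train(dataset)

for _ in range(5):                                  # DAgger iterations
    visited = rng.normal(size=(20, 3))              # stand-in for states from rolling out `policy`
    dataset += [(s, expert(s)) for s in visited]    # expert relabels them
    policy = train(dataset)                         # retrain on the aggregate

print("action at origin:", policy(np.zeros(3)))
```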
arXiv Detail & Related papers (2020-12-14T08:27:27Z)
- State-Only Imitation Learning for Dexterous Manipulation [63.03621861920732]
In this paper, we explore state-only imitation learning.
We train an inverse dynamics model and use it to predict actions for state-only demonstrations.
Our method performs on par with state-action approaches and considerably outperforms RL alone.
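A minimal sketch of this recipe, assuming linear dynamics and a least-squares inverse dynamics model as illustrative stand-ins for the paper's learned components.

```python
# Train an inverse dynamics model a = f(s, s') on self-collected data,
# then use it to label state-only demonstrations with inferred actions.
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 2))

# 1) Self-collected interaction data: (s, a, s') with s' = A s + B a.
s = rng.normal(size=(500, 3))
a = rng.normal(size=(500, 2))
s_next = s @ A.T + a @ B.T

# 2) Fit inverse dynamics: predict a from (s, s') by least squares.
X = np.hstack([s, s_next])
W, *_ = np.linalg.lstsq(X, a, rcond=None)

# 3) Label a state-only demonstration with predicted actions.
demo_states = rng.normal(size=(10, 3))
pairs = np.hstack([demo_states[:-1], demo_states[1:]])
predicted_actions = pairs @ W
print(predicted_actions.shape)  # (9, 2): one action per state transition
```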
arXiv Detail & Related papers (2020-04-07T17:57:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.