Imitation Learning in the Deep Learning Era: A Novel Taxonomy and Recent Advances
- URL: http://arxiv.org/abs/2511.03565v1
- Date: Wed, 05 Nov 2025 15:47:29 GMT
- Title: Imitation Learning in the Deep Learning Era: A Novel Taxonomy and Recent Advances
- Authors: Iason Chrysomallis, Georgios Chalkiadakis
- Abstract summary: Imitation learning (IL) enables agents to acquire skills by observing and replicating the behavior of one or multiple experts. We review the latest advances in imitation learning research, highlighting recent trends, methodological innovations, and practical applications.
- Score: 3.691573844585973
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Imitation learning (IL) enables agents to acquire skills by observing and replicating the behavior of one or multiple experts. In recent years, advances in deep learning have significantly expanded the capabilities and scalability of imitation learning across a range of domains, where expert data can range from full state-action trajectories to partial observations or unlabeled sequences. Alongside this growth, novel approaches have emerged, with new methodologies being developed to address longstanding challenges such as generalization, covariate shift, and demonstration quality. In this survey, we review the latest advances in imitation learning research, highlighting recent trends, methodological innovations, and practical applications. We propose a novel taxonomy that is distinct from existing categorizations to better reflect the current state of the IL research landscape and its trends. Throughout the survey, we critically examine the strengths, limitations, and evaluation practices of representative works, and we outline key challenges and open directions for future research.
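The abstract's core setting, learning from expert state-action trajectories, can be illustrated by behavioral cloning, the simplest form of imitation learning: treat the demonstrations as a supervised dataset and fit a policy by regression. The toy linear policy, synthetic expert, and hyperparameters below are illustrative assumptions, not anything prescribed by the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "expert": its actions are a fixed linear function of the state.
W_expert = np.array([[1.0, -0.5],
                     [0.3, 2.0]])
states = rng.normal(size=(256, 2))   # observed expert states
actions = states @ W_expert.T        # corresponding expert actions

# Behavioral cloning: fit a linear policy by gradient descent on the
# mean-squared imitation loss  L(W) = (1/2N) * ||S W^T - A||^2.
W = np.zeros((2, 2))
lr = 0.1
for _ in range(500):
    pred = states @ W.T                              # policy's predicted actions
    grad = (pred - actions).T @ states / len(states)  # dL/dW
    W -= lr * grad

# With enough data and steps, the cloned policy recovers the expert mapping.
print(np.allclose(W, W_expert, atol=1e-3))
```

Note that this supervised view is exactly where the covariate shift problem mentioned in the abstract arises: the policy is only trained on states the expert visited, so small errors can drive it into states outside the demonstration distribution.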
Related papers
- A Survey on Generative Model Unlearning: Fundamentals, Taxonomy, Evaluation, and Future Direction [21.966560704390716]
We review current research on Generative Model Unlearning (GenMU). We propose a unified analytical framework for categorizing unlearning objectives, methodological strategies, and evaluation metrics. We highlight the potential practical value of unlearning techniques in real-world applications.
arXiv Detail & Related papers (2025-07-26T09:49:57Z)
- On the Resurgence of Recurrent Models for Long Sequences -- Survey and Research Opportunities in the Transformer Era [59.279784235147254]
This survey is aimed at providing an overview of these trends framed under the unifying umbrella of Recurrence.
It emphasizes novel research opportunities that become prominent when abandoning the idea of processing long sequences.
arXiv Detail & Related papers (2024-02-12T23:55:55Z)
- A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning [58.107474025048866]
Forgetting refers to the loss or deterioration of previously acquired knowledge.
Forgetting is a prevalent phenomenon observed in various other research domains within deep learning.
arXiv Detail & Related papers (2023-07-16T16:27:58Z)
- Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery [76.63807209414789]
We challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised manner.
We propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios.
arXiv Detail & Related papers (2023-03-28T13:47:16Z)
- Label-efficient Time Series Representation Learning: A Review [19.218833228063392]
Label-efficient time series representation learning is crucial for deploying deep learning models in real-world applications.
To address the scarcity of labeled time series data, various strategies, e.g., transfer learning, self-supervised learning, and semi-supervised learning, have been developed.
We introduce a novel taxonomy for the first time, categorizing existing approaches as in-domain or cross-domain, based on their reliance on external data sources.
arXiv Detail & Related papers (2023-02-13T15:12:15Z)
- Knowledge-enhanced Neural Machine Reasoning: A Review [67.51157900655207]
We introduce a novel taxonomy that categorizes existing knowledge-enhanced methods into two primary categories and four subcategories.
We elucidate the current application domains and provide insight into promising prospects for future research.
arXiv Detail & Related papers (2023-02-04T04:54:30Z)
- A Comprehensive Survey of Continual Learning: Theory, Method and Application [64.23253420555989]
We present a comprehensive survey of continual learning, seeking to bridge the basic settings, theoretical foundations, representative methods, and practical applications.
We summarize the general objectives of continual learning as ensuring a proper stability-plasticity trade-off and an adequate intra/inter-task generalizability in the context of resource efficiency.
arXiv Detail & Related papers (2023-01-31T11:34:56Z)
- An information-theoretic perspective on intrinsic motivation in reinforcement learning: a survey [0.0]
We propose to survey these research works through a new taxonomy based on information theory.
We computationally revisit the notions of surprise, novelty and skill learning.
Our analysis suggests that novelty and surprise can assist the building of a hierarchy of transferable skills.
arXiv Detail & Related papers (2022-09-19T09:47:43Z)
- Causal Reasoning Meets Visual Representation Learning: A Prospective Study [117.08431221482638]
Lack of interpretability, robustness, and out-of-distribution generalization are becoming the challenges of the existing visual models.
Inspired by the strong inference ability of human-level agents, researchers have devoted great effort in recent years to developing causal reasoning paradigms.
This paper aims to provide a comprehensive overview of this emerging field, attract attention, encourage discussion, and bring to the forefront the urgency of developing novel causal reasoning methods.
arXiv Detail & Related papers (2022-04-26T02:22:28Z)
- Recent Few-Shot Object Detection Algorithms: A Survey with Performance Comparison [54.357707168883024]
Few-Shot Object Detection (FSOD) mimics humans' ability to learn to learn.
FSOD intelligently transfers the learned generic object knowledge from the common heavy-tailed to the novel long-tailed object classes.
We give an overview of FSOD, including the problem definition, common datasets, and evaluation protocols.
arXiv Detail & Related papers (2022-03-27T04:11:28Z)
- Deep Long-Tailed Learning: A Survey [163.16874896812885]
Deep long-tailed learning aims to train well-performing deep models from a large number of images that follow a long-tailed class distribution.
Long-tailed class imbalance is a common problem in practical visual recognition tasks.
This paper provides a comprehensive survey on recent advances in deep long-tailed learning.
arXiv Detail & Related papers (2021-10-09T15:25:22Z)
- Neuro-evolutionary Frameworks for Generalized Learning Agents [1.2691047660244335]
Recent successes of deep learning and deep reinforcement learning have firmly established their statuses as state-of-the-art artificial learning techniques.
Longstanding drawbacks of these approaches point to a need for re-thinking the way such systems are designed and deployed.
We discuss the anticipated improvements from such neuro-evolutionary frameworks, along with the associated challenges.
arXiv Detail & Related papers (2020-02-04T02:11:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.