Curriculum in Gradient-Based Meta-Reinforcement Learning
- URL: http://arxiv.org/abs/2002.07956v1
- Date: Wed, 19 Feb 2020 01:40:45 GMT
- Title: Curriculum in Gradient-Based Meta-Reinforcement Learning
- Authors: Bhairav Mehta, Tristan Deleu, Sharath Chandra Raparthy, Chris J. Pal,
Liam Paull
- Abstract summary: We show that gradient-based meta-learners are sensitive to task distributions.
With the wrong curriculum, agents suffer the effects of meta-overfitting, shallow adaptation, and adaptation instability.
- Score: 10.447238563837173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gradient-based meta-learners such as Model-Agnostic Meta-Learning (MAML) have
shown strong few-shot performance in supervised and reinforcement learning
settings. However, specifically in the case of meta-reinforcement learning
(meta-RL), we can show that gradient-based meta-learners are sensitive to task
distributions. With the wrong curriculum, agents suffer the effects of
meta-overfitting, shallow adaptation, and adaptation instability. In this work,
we begin by highlighting intriguing failure cases of gradient-based meta-RL and
show that task distributions can wildly affect algorithmic outputs, stability,
and performance. To address this problem, we leverage insights from recent
literature on domain randomization and propose meta Active Domain Randomization
(meta-ADR), which learns a curriculum of tasks for gradient-based meta-RL in a
similar manner to how ADR does for sim2real transfer. We show that this approach induces
more stable policies on a variety of simulated locomotion and navigation tasks.
We assess in- and out-of-distribution generalization and find that the learned
task distributions, even in an unstructured task space, greatly improve the
adaptation performance of MAML. Finally, we motivate the need for better
benchmarking in meta-RL that prioritizes generalization over
single-task adaptation performance.
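The abstract's two core objects are (i) a gradient-based meta-learner in the style of MAML, whose inner-loop adaptation is differentiated through by the outer meta-update, and (ii) a curriculum that chooses which tasks the meta-learner trains on. The sketch below illustrates both on a toy regression task family standing in for the RL return; the curriculum is a simple hard-task-preferring sampler used as a stand-in for meta-ADR, not the paper's algorithm, and all names and hyperparameters are illustrative.

```python
# Minimal sketch: MAML-style meta-learning with a pluggable task curriculum.
# A sinusoid-regression task family stands in for the RL objective.
import torch
import torch.nn as nn

def make_task(amplitude, phase):
    """One 'task': regress y = amplitude * sin(x + phase)."""
    def sample_batch(n=10):
        x = torch.empty(n, 1).uniform_(-5.0, 5.0)
        return x, amplitude * torch.sin(x + phase)
    return sample_batch

class Curriculum:
    """Stand-in for a learned task curriculum (e.g. meta-ADR): bias sampling
    toward task parameters that were recently hard to adapt to."""
    def __init__(self, candidates):
        self.candidates = candidates            # list of (amplitude, phase)
        self.scores = [1.0] * len(candidates)   # higher score = sampled more often

    def sample(self):
        probs = torch.tensor(self.scores) / sum(self.scores)
        idx = torch.multinomial(probs, 1).item()
        return idx, make_task(*self.candidates[idx])

    def update(self, idx, post_adaptation_loss):
        # Simple proxy: tasks with a high post-adaptation loss gain weight.
        self.scores[idx] = 0.9 * self.scores[idx] + 0.1 * post_adaptation_loss

model = nn.Sequential(nn.Linear(1, 40), nn.ReLU(), nn.Linear(40, 1))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.01
curriculum = Curriculum([(a, p) for a in (0.5, 1.0, 3.0) for p in (0.0, 1.5)])

for meta_step in range(100):
    idx, task = curriculum.sample()
    x_support, y_support = task()
    x_query, y_query = task()

    # Inner loop: one gradient step on the support set (the adaptation).
    support_loss = nn.functional.mse_loss(model(x_support), y_support)
    grads = torch.autograd.grad(support_loss, list(model.parameters()),
                                create_graph=True)
    w1, b1, w2, b2 = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]

    # Query loss of the adapted parameters (manual forward pass so the
    # meta-gradient flows back through the inner-loop step).
    hidden = torch.relu(x_query @ w1.t() + b1)
    query_loss = nn.functional.mse_loss(hidden @ w2.t() + b2, y_query)

    # Outer loop: meta-update through the adaptation.
    meta_opt.zero_grad()
    query_loss.backward()
    meta_opt.step()

    # Curriculum update: hard tasks are re-sampled more often.
    curriculum.update(idx, query_loss.item())
```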
Related papers
- Meta-RTL: Reinforcement-Based Meta-Transfer Learning for Low-Resource Commonsense Reasoning [61.8360232713375]
We propose a reinforcement-based multi-source meta-transfer learning framework (Meta-RTL) for low-resource commonsense reasoning.
We present a reinforcement-based approach to dynamically estimating source task weights that measure the contribution of the corresponding tasks to the target task during meta-transfer learning.
Experimental results demonstrate that Meta-RTL substantially outperforms strong baselines and previous task selection strategies.
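A minimal sketch of the dynamic source-task weighting idea described above: a lightweight controller adjusts per-source weights using the measured improvement on the target task as a reward. The softmax parameterization, the REINFORCE-style update, and every function name are assumptions for illustration, not the Meta-RTL implementation.

```python
# Illustrative sketch of reinforcement-based source-task weighting.
import torch

num_sources = 4
logits = torch.zeros(num_sources, requires_grad=True)   # source-task weight parameters
opt = torch.optim.Adam([logits], lr=0.05)

def target_gain_after_training_on(source_idx):
    # Placeholder: in the real setting this would meta-train on the chosen
    # source task and return the measured improvement on the target task.
    return torch.randn(()).item()

for step in range(200):
    weights = torch.softmax(logits, dim=0)        # current source-task weights
    idx = torch.multinomial(weights, 1).item()    # pick a source task to train on
    reward = target_gain_after_training_on(idx)   # its contribution to the target

    # REINFORCE-style update: raise the weight of sources whose use improved
    # the target task, lower it otherwise.
    loss = -reward * torch.log(weights[idx])
    opt.zero_grad()
    loss.backward()
    opt.step()

print(torch.softmax(logits, dim=0))               # learned source-task weights
```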
arXiv Detail & Related papers (2024-09-27T18:22:22Z)
- Task-Distributionally Robust Data-Free Meta-Learning [99.56612787882334]
Data-Free Meta-Learning (DFML) aims to efficiently learn new tasks by leveraging multiple pre-trained models without requiring their original training data.
For the first time, we reveal two major challenges hindering their practical deployment: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC).
arXiv Detail & Related papers (2023-11-23T15:46:54Z)
- Meta-Reinforcement Learning Based on Self-Supervised Task Representation Learning [23.45043290237396]
MoSS is a context-based Meta-reinforcement learning algorithm based on Self-Supervised task representation learning.
On MuJoCo and Meta-World benchmarks, MoSS outperforms prior methods in terms of performance, sample efficiency (3-50x faster), adaptation efficiency, and generalization.
arXiv Detail & Related papers (2023-04-29T15:46:19Z)
- Train Hard, Fight Easy: Robust Meta Reinforcement Learning [78.16589993684698]
A major challenge of reinforcement learning (RL) in real-world applications is the variation between environments, tasks or clients.
Standard meta-RL (MRL) methods optimize the average return over tasks, but often perform poorly on tasks of high risk or difficulty.
In this work, we define a robust MRL objective with a controlled robustness level.
The resulting data inefficiency is addressed via the novel Robust Meta RL algorithm (RoML).
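One standard way to make a "controlled robustness level" concrete is the conditional value-at-risk (CVaR) of the return over tasks: optimize the mean return of the worst alpha-fraction of sampled tasks. The sketch below only computes that quantity; treating CVaR as this paper's exact objective, and the function name, are assumptions based on the abstract.

```python
# Sketch: CVaR of the per-task return as a risk-controlled meta-RL objective.
import torch

def cvar_over_tasks(task_returns: torch.Tensor, alpha: float = 0.2) -> torch.Tensor:
    """Mean return of the worst alpha-fraction of tasks (the lower tail)."""
    k = max(1, int(alpha * task_returns.numel()))
    worst, _ = torch.topk(task_returns, k, largest=False)
    return worst.mean()

# Returns achieved by the adapted policy on a batch of sampled tasks (made-up numbers).
returns = torch.tensor([12.0, 3.5, 9.1, 0.7, 11.2, 4.4, 8.8, 1.9])
print(cvar_over_tasks(returns, alpha=0.25))   # mean of the two worst tasks: 1.3
```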
arXiv Detail & Related papers (2023-01-26T14:54:39Z)
- Meta-Learning with Self-Improving Momentum Target [72.98879709228981]
We propose Self-improving Momentum Target (SiMT) to improve the performance of a meta-learner.
SiMT generates the target model by adapting from the temporal ensemble of the meta-learner.
We show that SiMT brings a significant performance gain when combined with a wide range of meta-learning methods.
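A "temporal ensemble of the meta-learner" is commonly realized as an exponential moving average (EMA) of its parameters. The sketch below shows only that EMA update; the momentum value, the helper name, and using the result as the adaptation target are illustrative assumptions, not SiMT's actual procedure.

```python
# Sketch: momentum (EMA) target derived from the meta-learner's parameters.
import copy
import torch
import torch.nn as nn

def update_momentum_target(online: nn.Module, target: nn.Module, momentum: float = 0.995):
    """target <- momentum * target + (1 - momentum) * online, parameter-wise."""
    with torch.no_grad():
        for p_t, p_o in zip(target.parameters(), online.parameters()):
            p_t.mul_(momentum).add_(p_o, alpha=1.0 - momentum)

meta_learner = nn.Linear(8, 2)                 # stands in for the meta-learner
target_model = copy.deepcopy(meta_learner)     # temporal ensemble, initialized from it
# ... after every meta-update of `meta_learner`, refresh the target:
update_momentum_target(meta_learner, target_model)
```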
arXiv Detail & Related papers (2022-10-11T06:45:15Z)
- Learning Action Translator for Meta Reinforcement Learning on Sparse-Reward Tasks [56.63855534940827]
This work introduces a novel objective function to learn an action translator among training tasks.
We theoretically verify that the value of the transferred policy with the action translator can be close to the value of the source policy.
We propose to combine the action translator with context-based meta-RL algorithms for better data collection and more efficient exploration during meta-training.
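A rough sketch of the action-translator idea summarized above: learn a mapping from a source task's action to a target task's action such that both lead to similar outcomes (here, similar next states under toy dynamics). The architecture, the loss, and the placeholder dynamics are assumptions for illustration, not the paper's objective.

```python
# Sketch: train an action translator so translated actions reproduce source outcomes.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
translator = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                           nn.Linear(64, action_dim))
opt = torch.optim.Adam(translator.parameters(), lr=1e-3)

def next_state(task_id, state, action):
    # Toy stand-in for task-specific dynamics; the real setting uses rollouts
    # collected in each training task.
    return state + 0.1 * (task_id + 1) * action.sum(dim=-1, keepdim=True)

states = torch.randn(32, state_dim)
source_actions = torch.randn(32, action_dim)   # actions chosen by the source policy

# Translate the source-task actions, then ask the translated actions to produce
# in the target task the same outcome the originals produce in the source task.
translated = translator(torch.cat([states, source_actions], dim=-1))
loss = nn.functional.mse_loss(next_state(1, states, translated),
                              next_state(0, states, source_actions))
opt.zero_grad()
loss.backward()
opt.step()
```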
arXiv Detail & Related papers (2022-07-19T04:58:06Z)
- Meta-Learning with Neural Tangent Kernels [58.06951624702086]
We propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK).
Within this paradigm, we introduce two meta-learning algorithms, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework.
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.
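To make point 2) above concrete, one way to replace the iterative inner loop with an analytic solve is kernel ridge regression on a task's support set using the network's empirical NTK (inner products of per-example parameter Jacobians). The sketch below illustrates only that mechanism and is not either of the authors' two algorithms.

```python
# Sketch: analytic "adaptation" via ridge regression with the empirical NTK.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def jacobian_wrt_params(x):
    """Rows are d f(x_i) / d theta, flattened across all parameters."""
    rows = []
    for xi in x:
        out = model(xi.unsqueeze(0)).squeeze()
        grads = torch.autograd.grad(out, list(model.parameters()))
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    return torch.stack(rows)

def ntk_adapt_predict(x_support, y_support, x_query, ridge=1e-3):
    J_s = jacobian_wrt_params(x_support)           # (n_support, n_params)
    J_q = jacobian_wrt_params(x_query)             # (n_query, n_params)
    K_ss = J_s @ J_s.t()                           # empirical NTK on the support set
    K_qs = J_q @ J_s.t()                           # query-vs-support kernel
    with torch.no_grad():
        residual = y_support - model(x_support)    # what adaptation must explain
    coef = torch.linalg.solve(K_ss + ridge * torch.eye(len(x_support)), residual)
    with torch.no_grad():
        return model(x_query) + K_qs @ coef        # adapted prediction, no inner loop

x_s, y_s = torch.randn(5, 1), torch.randn(5, 1)
x_q = torch.randn(3, 1)
print(ntk_adapt_predict(x_s, y_s, x_q).shape)      # torch.Size([3, 1])
```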
arXiv Detail & Related papers (2021-02-07T20:53:23Z)
- Transfer Meta-Learning: Information-Theoretic Bounds and Information Meta-Risk Minimization [47.7605527786164]
Meta-learning automatically infers an inductive bias by observing data from a number of related tasks.
We introduce the problem of transfer meta-learning, in which tasks are drawn from a target task environment during meta-testing.
arXiv Detail & Related papers (2020-11-04T12:55:43Z)
- Model-based Adversarial Meta-Reinforcement Learning [38.28304764312512]
We propose Model-based Adversarial Meta-Reinforcement Learning (AdMRL).
AdMRL aims to minimize the worst-case sub-optimality gap across all tasks in a family of tasks.
We evaluate our approach on several continuous control benchmarks and demonstrate its efficacy in terms of worst-case performance over all tasks.
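The minimax idea above alternates between adapting the meta-policy to the current task and updating the task adversarially to maximize the adapted policy's sub-optimality gap. The sketch below shows only the adversarial task step with a placeholder gap function; in AdMRL the gradient with respect to task parameters is estimated through a learned dynamics model, which this toy code does not do.

```python
# Sketch: adversarial task selection by gradient ascent on a sub-optimality gap.
import torch

task_param = torch.zeros(2, requires_grad=True)        # e.g. a 2-D goal position
task_opt = torch.optim.SGD([task_param], lr=0.1)

def suboptimality_gap(task):
    # Placeholder for  max_pi J(pi, task) - J(adapted_policy, task);
    # AdMRL estimates this (and its task gradient) via a learned dynamics model.
    return (task - torch.tensor([1.0, -1.0])).pow(2).sum()

for _ in range(50):
    # (1) adapt the meta-policy to the current task -- omitted placeholder.
    # (2) adversarial step: move the task toward a larger sub-optimality gap.
    gap = suboptimality_gap(task_param)
    task_opt.zero_grad()
    (-gap).backward()                  # ascend the gap by descending its negative
    task_opt.step()
    with torch.no_grad():
        task_param.clamp_(-2.0, 2.0)   # keep the task inside the bounded task family
```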
arXiv Detail & Related papers (2020-06-16T02:21:49Z)