MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning
- URL: http://arxiv.org/abs/2304.04668v2
- Date: Tue, 9 Jan 2024 23:43:12 GMT
- Title: MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning
- Authors: Arundhati Banerjee, Soham Phade, Stefano Ermon, Stephan Zheng
- Abstract summary: We study how a principal can efficiently and effectively intervene on the rewards of a previously unseen learning agent in order to induce desirable outcomes.
This is relevant to many real-world settings like auctions or taxation, where the principal may not know the learning behavior nor the rewards of real people.
We introduce MERMAIDE, a model-based meta-learning framework to train a principal that can quickly adapt to out-of-distribution agents.
- Score: 62.065503126104126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study how a principal can efficiently and effectively intervene on the
rewards of a previously unseen learning agent in order to induce desirable
outcomes. This is relevant to many real-world settings like auctions or
taxation, where the principal may not know the learning behavior nor the
rewards of real people. Moreover, the principal should be few-shot adaptable
and minimize the number of interventions, because interventions are often
costly. We introduce MERMAIDE, a model-based meta-learning framework to train a
principal that can quickly adapt to out-of-distribution agents with different
learning strategies and reward functions. We validate this approach
step-by-step. First, in a Stackelberg setting with a best-response agent, we
show that meta-learning enables quick convergence to the theoretically known
Stackelberg equilibrium at test time, although noisy observations severely
increase the sample complexity. We then show that our model-based meta-learning
approach is cost-effective in intervening on bandit agents with unseen
explore-exploit strategies. Finally, we outperform baselines that use either
meta-learning or agent behavior modeling, in both $0$-shot and $K=1$-shot
settings with partial agent information.
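The abstract's intervention setting can be sketched minimally: a principal adds a costly reward bonus to steer an explore-exploit bandit learner toward a desired arm. This is a hypothetical illustration, not the authors' MERMAIDE implementation; the agent model (epsilon-greedy), the reward values, and the fixed-bonus intervention rule are all assumptions for illustration.

```python
import random

class EpsilonGreedyAgent:
    """A simple explore-exploit bandit learner, standing in for the
    unseen learning agents the principal must adapt to."""
    def __init__(self, n_arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * n_arms
        self.values = [0.0] * n_arms

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def update(self, arm, reward):
        self.counts[arm] += 1
        # Incremental mean estimate of the (possibly intervened) reward.
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

def principal_intervention(arm, target_arm, bonus=1.0):
    """Costly reward shaping: pay a bonus when the agent plays the
    arm the principal wants to induce, nothing otherwise."""
    return bonus if arm == target_arm else 0.0

random.seed(0)
base_rewards = [0.8, 0.5, 0.2]   # arm 0 is naturally best for the agent
target_arm = 2                   # ... but the principal prefers arm 2
agent = EpsilonGreedyAgent(n_arms=3, epsilon=0.1)
total_cost = 0.0
for _ in range(500):
    arm = agent.act()
    bonus = principal_intervention(arm, target_arm)
    total_cost += bonus
    agent.update(arm, base_rewards[arm] + bonus)

best_arm = max(range(3), key=lambda a: agent.values[a])
```

After shaping, the agent's estimated value for the target arm dominates, so its greedy choice shifts to the principal's preferred arm; `total_cost` makes the paper's point that interventions are costly, which is why the principal should minimize them.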
Related papers
- Discovering How Agents Learn Using Few Data [32.38609641970052]
We propose a theoretical and algorithmic framework for real-time identification of agent behavior using a short burst of a single system trajectory.
Our approach accurately recovers the true dynamics across various benchmarks, including equilibrium selection and prediction of chaotic systems up to 10 Lyapunov times.

These findings suggest that our approach has significant potential to support effective policy and decision-making in strategic multi-agent systems.
arXiv Detail & Related papers (2023-07-13T09:14:48Z) - Meta-Learning with Self-Improving Momentum Target [72.98879709228981]
We propose Self-improving Momentum Target (SiMT) to improve the performance of a meta-learner.
SiMT generates the target model by adapting from the temporal ensemble of the meta-learner.
We show that SiMT brings a significant performance gain when combined with a wide range of meta-learning methods.
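The "temporal ensemble" target in SiMT is reminiscent of an exponential moving average (EMA) over the meta-learner's parameters; the paper's exact construction differs in detail, but an EMA target can be sketched as follows (the decay value and flat parameter layout are assumptions for illustration):

```python
def ema_update(target_params, online_params, decay=0.99):
    """Move the target parameters slowly toward the online
    (meta-learner) parameters: target <- decay*target + (1-decay)*online."""
    return [decay * t + (1.0 - decay) * p
            for t, p in zip(target_params, online_params)]

# The target lags behind the online model, giving a smoothed
# "temporal ensemble" to distill from.
target = [0.0, 0.0]
online = [1.0, 2.0]
for _ in range(3):
    target = ema_update(target, online)
```

Because the target averages over the meta-learner's recent history, it changes more smoothly than the online model, which is what makes it usable as a distillation target.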
arXiv Detail & Related papers (2022-10-11T06:45:15Z) - On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning [71.55412580325743]
We show that multi-task pretraining with fine-tuning on new tasks performs as well as, or better than, meta-pretraining with meta test-time adaptation.
This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL.
arXiv Detail & Related papers (2022-06-07T13:24:00Z) - PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning [102.36450942613091]
We propose an inverse reinforcement learning algorithm called inverse temporal difference learning (ITD).
We show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called $\Psi\Phi$-learning.
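Successor features, named in the title above, factor a policy's action values into an environment-dynamics part and a reward part; the following minimal sketch shows the standard decomposition (the feature and weight values are made-up numbers for illustration, not from the paper):

```python
import numpy as np

# Successor features (SFs) write the action value of a policy pi as
#   Q^pi(s, a) = psi^pi(s, a) . w,
# where psi^pi(s, a) is the expected discounted sum of state features
# under pi, and w encodes the reward via r(s, a) = phi(s, a) . w.
psi = np.array([[1.0, 0.5],    # psi(s, a0) for some fixed state s
                [0.2, 2.0]])   # psi(s, a1)  (illustrative values)
w = np.array([1.0, -0.5])      # assumed reward weights

q = psi @ w                    # Q-value for each action
best_action = int(np.argmax(q))
```

Because the reward weights `w` are separated from the dynamics summary `psi`, the same successor features can be re-scored under a new reward by swapping in a new `w`, which is what makes SFs useful for transfer.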
arXiv Detail & Related papers (2021-02-24T21:12:09Z) - On Fast Adversarial Robustness Adaptation in Model-Agnostic Meta-Learning [100.14809391594109]
Model-agnostic meta-learning (MAML) has emerged as one of the most successful meta-learning techniques in few-shot learning.
Despite the generalization power of the meta-model, it remains unclear how adversarial robustness can be maintained by MAML in few-shot learning.
We propose a general but easily-optimized robustness-regularized meta-learning framework, which allows the use of unlabeled data augmentation, fast adversarial attack generation, and computationally-light fine-tuning.
arXiv Detail & Related papers (2021-02-20T22:03:04Z) - Deep Interactive Bayesian Reinforcement Learning via Meta-Learning [63.96201773395921]
The optimal adaptive behaviour under uncertainty over the other agents' strategies can be computed using the Interactive Bayesian Reinforcement Learning framework.
We propose to meta-learn approximate belief inference and Bayes-optimal behaviour for a given prior.
We show empirically that our approach outperforms existing methods that use a model-free approach, sample from the approximate posterior, maintain memory-free models of others, or do not fully utilise the known structure of the environment.
arXiv Detail & Related papers (2021-01-11T13:25:13Z) - A Primal-Dual Subgradient Approach for Fair Meta Learning [23.65344558042896]
Few-shot meta-learning is well known for its fast adaptation and accurate generalization to unseen tasks.
We propose a Primal-Dual Fair Meta-learning framework, namely PDFM, which learns to train fair machine learning models using only a few examples.
arXiv Detail & Related papers (2020-09-26T19:47:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.