Meta-SysId: A Meta-Learning Approach for Simultaneous Identification and Prediction
- URL: http://arxiv.org/abs/2206.00694v1
- Date: Wed, 1 Jun 2022 18:04:22 GMT
- Title: Meta-SysId: A Meta-Learning Approach for Simultaneous Identification and Prediction
- Authors: Junyoung Park, Federico Berto, Arec Jamgochian, Mykel J. Kochenderfer, and Jinkyoo Park
- Abstract summary: We propose Meta-SysId, a meta-learning approach for modeling systems that are governed by common but unknown laws and differentiated by their context.
We test Meta-SysId on regression, time-series prediction, model-based control, and real-world traffic prediction domains, empirically finding it outperforms or is competitive with meta-learning baselines.
- Score: 34.83805457857297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose Meta-SysId, a meta-learning approach to model sets
of systems that have behavior governed by common but unknown laws and that
differentiate themselves by their context. Inspired by classical
modeling-and-identification approaches, Meta-SysId learns to represent the
common law through shared parameters and relies on online optimization to
compute system-specific context. Compared to optimization-based meta-learning
methods, the separation between class parameters and context variables reduces
the computational burden while allowing batch computations and a simple
training scheme. We test Meta-SysId on polynomial regression, time-series
prediction, model-based control, and real-world traffic prediction domains,
empirically finding it outperforms or is competitive with meta-learning
baselines.
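The abstract's split between shared class parameters and a per-system context suggests a simple two-phase implementation: train one network across all systems, then, for a new system, freeze it and optimize only a small context vector on that system's data. Below is a minimal, hypothetical PyTorch sketch of that idea; the names (`SharedModel`, `fit_context`), network sizes, and optimizer settings are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the Meta-SysId split: shared parameters model the
# common law; a low-dimensional context vector identifies each system.
import torch
import torch.nn as nn

class SharedModel(nn.Module):
    """Predicts y from input x and a per-system context vector."""
    def __init__(self, x_dim=1, ctx_dim=4, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + ctx_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, ctx):
        c = ctx.expand(x.shape[0], -1)          # broadcast context to all points
        return self.net(torch.cat([x, c], dim=-1))

def fit_context(model, x, y, ctx_dim=4, steps=200, lr=0.1):
    """Identification phase: shared parameters stay frozen; only the
    context vector is optimized on the new system's observations."""
    ctx = torch.zeros(1, ctx_dim, requires_grad=True)
    opt = torch.optim.Adam([ctx], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x, ctx), y)
        loss.backward()
        opt.step()
    return ctx.detach()
```

Training (not shown) would jointly optimize the shared network and one context vector per training system; because only the low-dimensional context is system-specific, test-time adaptation reduces to a small, batchable optimization, consistent with the reduced computational burden claimed above.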
Related papers
- ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning [49.447777286862994]
ConML is a universal meta-learning framework that can be applied to various meta-learning algorithms.
We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms.
arXiv Detail & Related papers (2024-10-08T12:22:10Z)
- MetaTra: Meta-Learning for Generalized Trajectory Prediction in Unseen Domain [18.8641856367611]
Trajectory prediction has garnered widespread attention in different fields, such as autonomous driving and robotic navigation.
We propose a novel meta-learning-based trajectory prediction method called MetaTra.
We show that MetaTra not only surpasses other state-of-the-art methods but also exhibits plug-and-play capabilities.
arXiv Detail & Related papers (2024-02-13T05:25:37Z)
- Meta-Value Learning: a General Framework for Learning with Learning Awareness [1.4323566945483497]
We propose to judge joint policies by their long-term prospects as measured by the meta-value.
We apply a form of Q-learning to the meta-game of optimization, in a way that avoids the need to explicitly represent the continuous action space of policy updates.
arXiv Detail & Related papers (2023-07-17T21:40:57Z)
- Improving Meta-learning for Low-resource Text Classification and Generation via Memory Imitation [87.98063273826702]
We propose a memory imitation meta-learning (MemIML) method that enhances the model's reliance on support sets for task adaptation.
A theoretical analysis is provided to prove the effectiveness of our method.
arXiv Detail & Related papers (2022-03-22T12:41:55Z)
- Conceptually Diverse Base Model Selection for Meta-Learners in Concept Drifting Data Streams [3.0938904602244355]
We present a novel approach for estimating the conceptual similarity of base models, which is calculated using the Principal Angles (PAs) between their underlying subspaces.
We evaluate these methods against thresholding using common ensemble pruning metrics, namely predictive performance and Mutual Information (MI), in the context of online Transfer Learning (TL).
Our results show that conceptual similarity thresholding has a reduced computational overhead, and yet yields comparable predictive performance to thresholding using predictive performance and MI.
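Principal angles measure how close two linear subspaces are, and SciPy computes them directly. Below is a brief sketch of a similarity score built on them; treating each base model's weight matrix as a basis for its underlying subspace is an assumption made here purely for illustration.

```python
# Hypothetical sketch: conceptual similarity of two base models from the
# principal angles between the column spaces of their weight matrices.
import numpy as np
from scipy.linalg import subspace_angles

def conceptual_similarity(W_a, W_b):
    """Mean cosine of the principal angles; 1.0 means identical subspaces."""
    angles = subspace_angles(W_a, W_b)      # radians, largest angle first
    return float(np.mean(np.cos(angles)))

rng = np.random.default_rng(0)
W_a = rng.standard_normal((20, 5))          # basis for a 5-dim subspace
W_b = W_a + 0.1 * rng.standard_normal((20, 5))
print(conceptual_similarity(W_a, W_b))      # near 1.0 for similar models
```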
arXiv Detail & Related papers (2021-11-29T13:18:53Z)
- Meta-Learning with Neural Tangent Kernels [58.06951624702086]
We propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK).
Within this paradigm, we introduce two meta-learning algorithms, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework.
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.
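The second step, solving the adaptation analytically, amounts to a closed-form kernel solve in place of MAML's inner gradient loop. The sketch below shows that pattern as kernel ridge regression, with an RBF kernel standing in for the meta-model's actual NTK; it is a schematic reading of the summary, not the paper's algorithm.

```python
# Schematic: analytic adaptation as a single kernel ridge regression solve.
# An RBF kernel is used here as a stand-in for the meta-model's NTK.
import numpy as np

def rbf(A, B, gamma=1.0):
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def adapt(X_support, y_support, X_query, lam=1e-3):
    K = rbf(X_support, X_support)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_support)
    return rbf(X_query, X_support) @ alpha   # one linear solve, no inner loop
```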
arXiv Detail & Related papers (2021-02-07T20:53:23Z)
- Learning to Generalize Unseen Domains via Memory-based Multi-Source Meta-Learning for Person Re-Identification [59.326456778057384]
We propose the Memory-based Multi-Source Meta-Learning framework to train a generalizable model for unseen domains.
We also present a meta batch normalization layer (MetaBN) to diversify meta-test features.
Experiments demonstrate that our M$^3$L can effectively enhance the generalization ability of the model for unseen domains.
arXiv Detail & Related papers (2020-12-01T11:38:16Z)
- Modeling and Optimization Trade-off in Meta-learning [23.381986209234164]
We introduce and rigorously define the trade-off between accurate modeling and optimization ease in meta-learning.
Taking MAML as a representative meta-learning algorithm, we theoretically characterize the trade-off for general non-convex risk functions as well as linear regression.
We also empirically solve the trade-off for meta-reinforcement learning benchmarks.
arXiv Detail & Related papers (2020-10-24T15:32:08Z)
- Structured Prediction for Conditional Meta-Learning [44.30857707980074]
We propose a new perspective on conditional meta-learning via structured prediction.
We derive task-adaptive structured meta-learning (TASML), a principled framework that yields task-specific objective functions.
Empirically, we show that TASML improves the performance of existing meta-learning models, and outperforms the state-of-the-art on benchmark datasets.
arXiv Detail & Related papers (2020-02-20T15:24:15Z)
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
- Meta-learning framework with applications to zero-shot time-series forecasting [82.61728230984099]
This work provides positive evidence for zero-shot transfer using a broad meta-learning framework.
Residual connections act as a meta-learning adaptation mechanism.
We show that it is viable to train a neural network on a source TS dataset and deploy it on a different target TS dataset without retraining.
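One way to picture residual connections acting as an adaptation mechanism is a doubly residual stack in the style of N-BEATS-like forecasters (an interpretation for illustration, not a claim about this paper's exact architecture): each block subtracts its backcast from its input, so later blocks see a series-specific residual even though all weights stay frozen at deployment.

```python
# Hedged sketch of a doubly residual forecasting stack: the running
# residual carries series-specific information without any retraining.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, lookback, horizon, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(lookback, hidden), nn.ReLU(),
                                  nn.Linear(hidden, lookback + horizon))
        self.lookback = lookback

    def forward(self, x):
        out = self.body(x)
        # Split the output into a backcast (reconstruction of the input
        # window) and a forecast contribution for the horizon.
        return out[:, :self.lookback], out[:, self.lookback:]

class ResidualStack(nn.Module):
    def __init__(self, lookback, horizon, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(Block(lookback, horizon)
                                    for _ in range(n_blocks))

    def forward(self, x):
        forecast = 0.0
        for block in self.blocks:
            backcast, f = block(x)
            x = x - backcast        # residual update acts as per-series adaptation
            forecast = forecast + f
        return forecast
```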
arXiv Detail & Related papers (2020-02-07T16:39:43Z)