More Flexible PAC-Bayesian Meta-Learning by Learning Learning Algorithms
- URL: http://arxiv.org/abs/2402.04054v2
- Date: Wed, 29 May 2024 16:32:36 GMT
- Title: More Flexible PAC-Bayesian Meta-Learning by Learning Learning Algorithms
- Authors: Hossein Zakerinia, Amin Behjati, Christoph H. Lampert
- Abstract summary: We introduce a new framework for studying meta-learning methods using PAC-Bayesian theory.
Its main advantage is that it allows for more flexibility in how the transfer of knowledge between tasks is realized.
- Score: 15.621144215664769
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a new framework for studying meta-learning methods using PAC-Bayesian theory. Its main advantage over previous work is that it allows for more flexibility in how the transfer of knowledge between tasks is realized. For previous approaches, this could only happen indirectly, by means of learning prior distributions over models. In contrast, the new generalization bounds that we prove express the process of meta-learning much more directly as learning the learning algorithm that should be used for future tasks. The flexibility of our framework makes it suitable to analyze a wide range of meta-learning mechanisms and even design new mechanisms. Other than our theoretical contributions we also show empirically that our framework improves the prediction quality in practical meta-learning mechanisms.
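For orientation, a minimal sketch of the classical single-task PAC-Bayesian bound that this line of work builds on (a standard textbook form for losses bounded in [0,1], not a result quoted from the paper): for a prior P fixed before seeing an i.i.d. sample S of size n, with probability at least 1 - \delta over S, every posterior Q satisfies

    L(Q) \le \hat{L}_S(Q) + \sqrt{ \frac{\mathrm{KL}(Q \,\|\, P) + \ln(2\sqrt{n}/\delta)}{2n} }

Earlier PAC-Bayesian meta-learning approaches transfer knowledge by learning the prior P from previous tasks; the bounds in the paper above instead treat the learning algorithm itself as the object being learned.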
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) are open to novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z)
- A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
Despite these advances, a unified treatment has been missing; this research aims to bridge that gap by introducing a comprehensive and overarching framework that encompasses and reconciles the existing methodologies.
arXiv Detail & Related papers (2024-03-20T02:21:44Z)
- Towards a General Framework for Continual Learning with Pre-training [55.88910947643436]
We present a general framework for continual learning of sequentially arriving tasks with the use of pre-training.
We decompose its objective into three hierarchical components: within-task prediction, task-identity inference, and task-adaptive prediction.
We propose an innovative approach to explicitly optimize these components with parameter-efficient fine-tuning (PEFT) techniques and representation statistics.
arXiv Detail & Related papers (2023-10-21T02:03:38Z)
- Meta-Learned Models of Cognition [11.488249464936422]
Meta-learning is a framework for learning algorithms through repeated interactions with an environment.
This article aims to establish a coherent research program around meta-learned models of cognition.
arXiv Detail & Related papers (2023-04-12T16:30:51Z)
- Scalable PAC-Bayesian Meta-Learning via the PAC-Optimal Hyper-Posterior: From Theory to Practice [54.03076395748459]
A central question in the meta-learning literature is how to regularize to ensure generalization to unseen tasks.
We present a generalization bound for meta-learning, which was first derived by Rothfuss et al.
We provide a theoretical analysis and an empirical case study of the conditions under which, and the extent to which, these meta-learning guarantees improve upon PAC-Bayesian per-task learning bounds.
arXiv Detail & Related papers (2022-11-14T08:51:04Z)
- Learning with Limited Samples -- Meta-Learning and Applications to Communication Systems [46.760568562468606]
Few-shot meta-learning optimizes learning algorithms so that they can adapt quickly and efficiently to new tasks.
This review monograph provides an introduction to meta-learning by covering principles, algorithms, theory, and engineering applications.
arXiv Detail & Related papers (2022-10-03T17:15:36Z)
- On Data Efficiency of Meta-learning [17.739215706060605]
We study an often overlooked aspect of modern meta-learning algorithms: their data efficiency.
We introduce a new simple framework for evaluating meta-learning methods under a limit on the available supervision.
We propose active meta-learning, which incorporates active data selection into learning-to-learn, leading to better performance of all methods in the limited supervision regime.
arXiv Detail & Related papers (2021-01-30T01:44:12Z)
- Improving Few-Shot Learning through Multi-task Representation Learning Theory [14.8429503385929]
We consider the framework of multi-task representation (MTR) learning where the goal is to use source tasks to learn a representation that reduces the sample complexity of solving a target task.
We show that recent advances in MTR theory can provide novel insights for popular meta-learning algorithms when analyzed within this framework.
This is the first contribution that puts the most recent learning bounds of MTR theory into practice for the task of few-shot classification.
arXiv Detail & Related papers (2020-10-05T13:24:43Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
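To make the reduction described in "Revisiting Meta-Learning as Supervised Learning" concrete, here is a hypothetical toy sketch (not the authors' code): every meta-training task contributes one supervised sample whose feature is a summary of the task's data set and whose label is the model fitted on that task, after which an ordinary regressor maps new task summaries to models.

    # Hypothetical toy sketch (not from the paper): meta-learning recast as
    # supervised learning over (dataset-summary, fitted-model) pairs.
    import numpy as np

    rng = np.random.default_rng(0)

    def fit_task_model(X, y):
        # Per-task "target model": least-squares weights for this task's data.
        w, *_ = np.linalg.lstsq(X, y, rcond=None)
        return w

    def summarize_dataset(X, y):
        # "Feature" describing a task: empirical moments of its data,
        # including the X^T y cross-moments.
        return np.concatenate([X.mean(axis=0), X.T @ y / len(y)])

    # Meta-training set: one (summary, model) sample per task.
    summaries, models = [], []
    for _ in range(200):
        w_true = rng.normal(size=3)                  # latent task parameters
        X = rng.normal(size=(30, 3))
        y = X @ w_true + 0.1 * rng.normal(size=30)
        summaries.append(summarize_dataset(X, y))
        models.append(fit_task_model(X, y))
    S, W = np.vstack(summaries), np.vstack(models)

    # Ordinary supervised learning: a linear map from task summaries to task models.
    A, *_ = np.linalg.lstsq(S, W, rcond=None)

    # Meta-test time: predict a model for an unseen task from its data summary alone.
    w_new = rng.normal(size=3)
    X_new = rng.normal(size=(30, 3))
    y_new = X_new @ w_new + 0.1 * rng.normal(size=30)
    w_pred = summarize_dataset(X_new, y_new) @ A
    print("parameter prediction error:", np.linalg.norm(w_pred - w_new))

The summary statistics, the per-task least-squares models, and the linear meta-regressor are illustrative choices only; the point is that once tasks are encoded as (feature, label) pairs, any supervised learner can play the role of the meta-learner.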
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.