A Channel Coding Benchmark for Meta-Learning
- URL: http://arxiv.org/abs/2107.07579v1
- Date: Thu, 15 Jul 2021 19:37:43 GMT
- Title: A Channel Coding Benchmark for Meta-Learning
- Authors: Rui Li, Ondrej Bohdal, Rajesh Mishra, Hyeji Kim, Da Li, Nicholas Lane,
Timothy Hospedales
- Abstract summary: Several important issues in meta-learning have proven hard to study thus far.
We propose the channel coding problem as a benchmark for meta-learning.
Going forward, this benchmark provides a tool for the community to study the capabilities and limitations of meta-learning.
- Score: 21.2424398453955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning provides a popular and effective family of methods for
data-efficient learning of new tasks. However, several important issues in
meta-learning have proven hard to study thus far. For example, performance
degrades in real-world settings where meta-learners must learn from a wide and
potentially multi-modal distribution of training tasks; and when distribution
shift exists between meta-train and meta-test task distributions. These issues
are typically hard to study since the shape of task distributions, and shift
between them are not straightforward to measure or control in standard
benchmarks. We propose the channel coding problem as a benchmark for
meta-learning. Channel coding is an important practical application where task
distributions naturally arise, and fast adaptation to new tasks is practically
valuable. We use this benchmark to study several aspects of meta-learning,
including the impact of task distribution breadth and shift, which can be
controlled in the coding problem. Going forward, this benchmark provides a tool
for the community to study the capabilities and limitations of meta-learning,
and to drive research on practically robust and effective meta-learners.
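In this benchmark each channel plays the role of a task, so the breadth of the task distribution and the shift between meta-train and meta-test conditions can be dialed directly through channel parameters. The abstract does not spell out the exact codes or meta-learners used, so the following is only a minimal sketch under assumed choices: a toy rate-1/2 random linear code, AWGN channels parameterized by SNR, a small MLP decoder, and a Reptile-style meta-update standing in for whichever meta-learning algorithm is actually evaluated. Names such as sample_task and decoder are illustrative, not from the paper.

```python
# Hedged sketch (not the authors' code): meta-learning a neural decoder over a
# distribution of noisy channels, to illustrate how task breadth and shift can
# be controlled through channel parameters. Assumes a toy random linear code,
# AWGN channels, and a Reptile-style meta-update.
import torch
import torch.nn as nn

K = 8                                        # message bits per block (toy choice)
N = 16                                       # coded bits per block (rate 1/2)
G = torch.randint(0, 2, (K, N)).float()      # fixed toy generator matrix

def sample_task(snr_db_range=(0.0, 6.0)):
    """A 'task' is a channel: here an AWGN channel with a sampled SNR.
    Widening snr_db_range broadens the task distribution; sampling from a
    different interval at meta-test time induces distribution shift."""
    snr_db = torch.empty(1).uniform_(*snr_db_range).item()
    sigma = 10 ** (-snr_db / 20)
    def make_batch(batch=256):
        msgs = torch.randint(0, 2, (batch, K)).float()
        codewords = (msgs @ G) % 2            # encode with the toy linear code
        x = 1 - 2 * codewords                 # BPSK modulation
        y = x + sigma * torch.randn_like(x)   # AWGN channel
        return y, msgs
    return make_batch

decoder = nn.Sequential(nn.Linear(N, 128), nn.ReLU(), nn.Linear(128, K))
loss_fn = nn.BCEWithLogitsLoss()
meta_lr, inner_lr, inner_steps = 0.1, 1e-2, 5

for meta_step in range(200):
    task = sample_task()                               # draw a meta-train channel
    theta0 = [p.clone() for p in decoder.parameters()] # snapshot meta-parameters
    opt = torch.optim.SGD(decoder.parameters(), lr=inner_lr)
    for _ in range(inner_steps):                       # inner-loop adaptation
        y, msgs = task()
        opt.zero_grad()
        loss_fn(decoder(y), msgs).backward()
        opt.step()
    with torch.no_grad():                              # Reptile outer update
        for p, p0 in zip(decoder.parameters(), theta0):
            p.copy_(p0 + meta_lr * (p - p0))
```

Narrowing or widening snr_db_range, or drawing meta-test SNRs from a disjoint interval, is one simple way to mimic the controlled task-distribution breadth and shift that the abstract describes.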
Related papers
- Meta-Learning with Heterogeneous Tasks [42.695853959923625]
We propose Heterogeneous Tasks Robust Meta-learning (HeTRoM), an efficient iterative optimization algorithm based on bi-level optimization.
Results demonstrate that our method provides flexibility, enabling users to adapt to diverse task settings.
arXiv Detail & Related papers (2024-10-24T16:32:23Z) - MetaModulation: Learning Variational Feature Hierarchies for Few-Shot
Learning with Fewer Tasks [63.016244188951696]
We propose MetaModulation, a method for few-shot learning with fewer tasks.
We modify parameters at various batch levels to increase the meta-training tasks.
We also introduce learning of variational feature hierarchies by incorporating variational modulation.
arXiv Detail & Related papers (2023-05-17T15:47:47Z) - Algorithm Design for Online Meta-Learning with Task Boundary Detection [63.284263611646]
We propose a novel algorithm for task-agnostic online meta-learning in non-stationary environments.
We first propose two simple but effective detection mechanisms of task switches and distribution shift.
We show that a sublinear task-averaged regret can be achieved by our algorithm under mild conditions.
arXiv Detail & Related papers (2023-02-02T04:02:49Z) - Uncertainty-Aware Meta-Learning for Multimodal Task Distributions [3.7470451129384825]
We present UnLiMiTD (uncertainty-aware meta-learning for multimodal task distributions).
We take a probabilistic perspective and train a parametric, tuneable distribution over tasks on the meta-dataset.
We demonstrate that UnLiMiTD's predictions compare favorably to, and outperform in most cases, the standard baselines.
arXiv Detail & Related papers (2022-10-04T20:02:25Z) - Contrastive Knowledge-Augmented Meta-Learning for Few-Shot
Classification [28.38744876121834]
We introduce CAML (Contrastive Knowledge-Augmented Meta Learning), a novel approach for knowledge-enhanced few-shot learning.
We evaluate the performance of CAML in different few-shot learning scenarios.
arXiv Detail & Related papers (2022-07-25T17:01:29Z) - On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning [71.55412580325743]
We show that multi-task pretraining with fine-tuning on new tasks performs as well as, or better than, meta-pretraining with meta test-time adaptation.
This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL.
arXiv Detail & Related papers (2022-06-07T13:24:00Z) - The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z) - Diverse Distributions of Self-Supervised Tasks for Meta-Learning in NLP [39.457091182683406]
We aim to provide task distributions for meta-learning by considering self-supervised tasks automatically proposed from unlabeled text.
Our analysis shows that the considered factors meaningfully alter the task distribution, some inducing significant improvements in downstream few-shot accuracy of the meta-learned models.
arXiv Detail & Related papers (2021-11-02T01:50:09Z) - Is Support Set Diversity Necessary for Meta-Learning? [14.231486872262531]
We propose a modification to traditional meta-learning approaches in which we keep the support sets fixed across tasks, thus reducing task diversity.
Surprisingly, we find that not only does this modification not result in adverse effects, but it almost always improves performance across a variety of datasets and meta-learning methods.
arXiv Detail & Related papers (2020-11-28T02:28:42Z) - Adaptive Task Sampling for Meta-Learning [79.61146834134459]
The key idea of meta-learning for few-shot classification is to mimic the few-shot situations faced at test time.
We propose an adaptive task sampling method to improve the generalization performance.
arXiv Detail & Related papers (2020-07-17T03:15:53Z) - Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.