Learning with Limited Samples -- Meta-Learning and Applications to
Communication Systems
- URL: http://arxiv.org/abs/2210.02515v1
- Date: Mon, 3 Oct 2022 17:15:36 GMT
- Title: Learning with Limited Samples -- Meta-Learning and Applications to
Communication Systems
- Authors: Lisha Chen, Sharu Theresa Jose, Ivana Nikoloska, Sangwoo Park, Tianyi
Chen, Osvaldo Simeone
- Abstract summary: Few-shot meta-learning optimizes learning algorithms that can adapt efficiently to new tasks.
This review monograph provides an introduction to meta-learning by covering principles, algorithms, theory, and engineering applications.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has achieved remarkable success in many machine learning tasks
such as image classification, speech recognition, and game playing. However,
these breakthroughs are often difficult to translate into real-world
engineering systems because deep learning models require a massive number of
training samples, which are costly to obtain in practice. To address labeled
data scarcity, few-shot meta-learning optimizes learning algorithms that can
adapt efficiently to new tasks. While meta-learning is gaining
significant interest in the machine learning literature, its working principles
and theoretical fundamentals are not as well understood in the engineering
community.
This review monograph provides an introduction to meta-learning by covering
principles, algorithms, theory, and engineering applications. After introducing
meta-learning in comparison with conventional and joint learning, we describe
the main meta-learning algorithms, as well as a general bilevel optimization
framework for the definition of meta-learning techniques. Then, we summarize
known results on the generalization capabilities of meta-learning from a
statistical learning viewpoint. Applications to communication systems,
including decoding and power allocation, are discussed next, followed by an
introduction to aspects related to the integration of meta-learning with
emerging computing technologies, namely neuromorphic and quantum computing. The
monograph is concluded with an overview of open research challenges.
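The bilevel structure mentioned above can be illustrated with a minimal first-order MAML-style sketch: an inner loop adapts a shared initialization to each task with a gradient step, and an outer loop updates that initialization to minimize the post-adaptation loss. The toy task family (1-D linear regression with per-task slopes) and all function names here are illustrative assumptions, not the monograph's own notation.

```python
import numpy as np

def task_loss_grad(w, a, x):
    # Squared-error loss for a toy linear task y = a*x, with model y_hat = w*x.
    err = w * x - a * x
    loss = np.mean(err ** 2)
    grad = np.mean(2.0 * err * x)
    return loss, grad

def maml_step(w_meta, task_slopes, x, inner_lr=0.1, outer_lr=0.05):
    # One first-order MAML update over a batch of tasks:
    # inner loop adapts per task, outer loop moves the shared initialization.
    meta_grad = 0.0
    for a in task_slopes:
        _, g = task_loss_grad(w_meta, a, x)
        w_adapted = w_meta - inner_lr * g            # inner loop: task adaptation
        _, g_post = task_loss_grad(w_adapted, a, x)  # outer gradient (first-order approx.)
        meta_grad += g_post
    return w_meta - outer_lr * meta_grad / len(task_slopes)

x = np.linspace(-1.0, 1.0, 20)
task_slopes = [1.0, 3.0]  # two tasks; a good initialization sits near the mean slope
w = 0.0
for _ in range(200):
    w = maml_step(w, task_slopes, x)
```

For this symmetric task family the meta-initialization converges toward the mean slope (here, 2.0), which is the point from which a single inner gradient step reaches every task fastest. Full MAML differentiates through the inner step; the first-order variant shown here drops that second-order term for simplicity.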
Related papers
- Informed Meta-Learning [55.2480439325792]
Meta-learning and informed ML stand out as two approaches for incorporating prior knowledge into ML pipelines.
We formalise a hybrid paradigm, informed meta-learning, facilitating the incorporation of priors from unstructured knowledge representations.
We demonstrate the potential benefits of informed meta-learning in improving data efficiency, robustness to observational noise and task distribution shifts.
arXiv Detail & Related papers (2024-02-25T15:08:37Z) - When Meta-Learning Meets Online and Continual Learning: A Survey [39.53836535326121]
Meta-learning is a data-driven approach to optimizing the learning algorithm.
Continual learning and online learning both involve incrementally updating a model with streaming data.
This paper organizes various problem settings using consistent terminology and formal descriptions.
arXiv Detail & Related papers (2023-11-09T09:49:50Z) - Advances and Challenges in Meta-Learning: A Technical Review [7.149235250835041]
Meta-learning empowers learning systems with the ability to acquire knowledge from multiple tasks.
This review emphasizes its importance in real-world applications where data may be scarce or expensive to obtain.
arXiv Detail & Related papers (2023-07-10T17:32:15Z) - Meta Learning for Natural Language Processing: A Survey [88.58260839196019]
Deep learning has been the mainstream technique in natural language processing (NLP) area.
Deep learning requires many labeled data and is less generalizable across domains.
Meta-learning is an emerging field of machine learning that studies approaches to learning better learning algorithms.
arXiv Detail & Related papers (2022-05-03T13:58:38Z) - Online Structured Meta-learning [137.48138166279313]
Current online meta-learning algorithms are limited to learning a globally shared meta-learner.
We propose an online structured meta-learning (OSML) framework to overcome this limitation.
Experiments on three datasets demonstrate the effectiveness and interpretability of our proposed framework.
arXiv Detail & Related papers (2020-10-22T09:10:31Z) - A Comprehensive Overview and Survey of Recent Advances in Meta-Learning [0.0]
Meta-learning, also known as learning-to-learn, seeks rapid and accurate model adaptation to unseen tasks.
We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning, and Bayesian meta-learning frameworks.
arXiv Detail & Related papers (2020-04-17T03:11:08Z) - Meta-Learning in Neural Networks: A Survey [4.588028371034406]
This survey describes the contemporary meta-learning landscape.
We first discuss definitions of meta-learning and position it with respect to related fields.
We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods.
arXiv Detail & Related papers (2020-04-11T16:34:24Z) - Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
arXiv Detail & Related papers (2020-02-26T18:21:34Z) - Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.