Multimodality in Meta-Learning: A Comprehensive Survey
- URL: http://arxiv.org/abs/2109.13576v1
- Date: Tue, 28 Sep 2021 09:16:12 GMT
- Title: Multimodality in Meta-Learning: A Comprehensive Survey
- Authors: Yao Ma, Shilin Zhao, Weixiao Wang, Yaoman Li, Irwin King
- Abstract summary: This survey provides a comprehensive overview of the multimodality-based meta-learning landscape.
We first formalize the definition of meta-learning and multimodality, along with the research challenges in this growing field.
We then propose a new taxonomy to systematically discuss typical meta-learning algorithms combined with multimodal tasks.
- Score: 34.69292359136745
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-learning has gained wide popularity as a training framework that is more
data-efficient than traditional machine learning methods. However, its
generalization ability in complex task distributions, such as multimodal tasks,
has not been thoroughly studied. Recently, some studies on multimodality-based
meta-learning have emerged. This survey provides a comprehensive overview of
the multimodality-based meta-learning landscape in terms of the methodologies
and applications. We first formalize the definition of meta-learning and
multimodality, along with the research challenges in this growing field, such
as how to enrich the input in few-shot or zero-shot scenarios and how to
generalize the models to new tasks. We then propose a new taxonomy to
systematically discuss typical meta-learning algorithms combined with
multimodal tasks. We investigate the contributions of related papers and
summarize them by our taxonomy. Finally, we propose potential research
directions for this promising field.
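The survey formalizes meta-learning and multimodality; for reference, a common formalization of meta-learning is a bi-level objective over a task distribution (standard notation, assumed here rather than quoted from the paper):

$$\min_{\omega}\;\mathbb{E}_{\mathcal{T}\sim p(\mathcal{T})}\Big[\mathcal{L}^{\mathrm{val}}_{\mathcal{T}}\big(\theta^{*}_{\mathcal{T}}(\omega)\big)\Big],\qquad \theta^{*}_{\mathcal{T}}(\omega)=\arg\min_{\theta}\;\mathcal{L}^{\mathrm{tr}}_{\mathcal{T}}(\theta;\omega),$$

where $\omega$ denotes the meta-parameters shared across tasks (e.g., an initialization or a learned embedding) and $\theta^{*}_{\mathcal{T}}$ is the model adapted to task $\mathcal{T}$ from its training split.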
Related papers
- Few-shot Multi-Task Learning of Linear Invariant Features with Meta Subspace Pursuit [9.421309916099428]
We propose a new algorithm called Meta Subspace Pursuit (abbreviated as Meta-SP).
arXiv Detail & Related papers (2024-09-04T13:44:22Z) - Generative Multi-Modal Knowledge Retrieval with Large Language Models [75.70313858231833]
We propose an innovative end-to-end generative framework for multi-modal knowledge retrieval.
Our framework takes advantage of the fact that large language models (LLMs) can effectively serve as virtual knowledge bases.
We demonstrate significant improvements ranging from 3.0% to 14.6% across all evaluation metrics when compared to strong baselines.
arXiv Detail & Related papers (2024-01-16T08:44:29Z) - Meta-learning in healthcare: A survey [3.245586096021802]
Meta-learning aims at improving the model's capabilities by employing prior knowledge and experience.
We first describe the theoretical foundations and pivotal methods of meta-learning.
We then divide the employed meta-learning approaches in the healthcare domain into two main categories of multi/single-task learning and many/few-shot learning.
arXiv Detail & Related papers (2023-08-05T13:11:35Z) - Meta Omnium: A Benchmark for General-Purpose Learning-to-Learn [15.0841751679151]
We introduce Meta Omnium, a dataset-of-datasets spanning multiple vision tasks.
We analyze the ability of meta-learners to generalize across tasks and to transfer knowledge between them.
arXiv Detail & Related papers (2023-05-12T17:25:19Z) - Multimodality Representation Learning: A Survey on Evolution,
Pretraining and Its Applications [47.501121601856795]
Multimodality Representation Learning is a technique for learning to embed information from different modalities and their correlations.
Cross-modal interaction and complementary information from different modalities are crucial for advanced models to perform any multimodal task.
This survey presents the literature on the evolution and enhancement of deep learning multimodal architectures.
arXiv Detail & Related papers (2023-02-01T11:48:34Z) - The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z) - Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
Our proposed meta-learning with task interpolation (MLTI) strategy effectively generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels.
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
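A minimal sketch of the task-interpolation idea described in the MLTI snippet above, using mixup-style feature/label mixing; the function name, the Beta-distributed mixing coefficient, and the toy data are illustrative assumptions, not the authors' reference implementation:

```python
import numpy as np

def interpolate_tasks(task_a, task_b, alpha=2.0, rng=None):
    """Synthesize an extra task by interpolating two tasks' features and labels.

    Each task is a (features, one_hot_labels) pair of equally sized arrays.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    xa, ya = task_a
    xb, yb = task_b
    x_mix = lam * xa + (1.0 - lam) * xb   # interpolated features
    y_mix = lam * ya + (1.0 - lam) * yb   # interpolated (soft) labels
    return x_mix, y_mix

# Usage: mix two sampled 5-way tasks (25 examples, 64-d features each).
rng = np.random.default_rng(0)
task_a = (rng.normal(size=(25, 64)), np.eye(5)[rng.integers(0, 5, 25)])
task_b = (rng.normal(size=(25, 64)), np.eye(5)[rng.integers(0, 5, 25)])
x_mix, y_mix = interpolate_tasks(task_a, task_b, rng=rng)
```

The synthesized task is then fed to the meta-learner exactly like a real meta-training task, which is how the approach compensates for having few tasks.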
arXiv Detail & Related papers (2021-06-04T20:15:34Z) - Meta-Learning in Neural Networks: A Survey [4.588028371034406]
This survey describes the contemporary meta-learning landscape.
We first discuss definitions of meta-learning and position it with respect to related fields.
We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods.
arXiv Detail & Related papers (2020-04-11T16:34:24Z) - Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
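For concreteness, this line of work typically assumes a shared low-dimensional linear representation across tasks (standard notation, assumed here rather than quoted from the paper):

$$y_{t,i}=\langle x_{t,i},\,B\,\alpha_{t}\rangle+\varepsilon_{t,i},\qquad B\in\mathbb{R}^{d\times r},\ \alpha_{t}\in\mathbb{R}^{r},\ r\ll d,$$

where $B$ is a feature matrix common to all tasks $t$ and $\alpha_{t}$ is a task-specific head; meta-learning amounts to estimating $B$ from the related tasks so that a new task only needs to fit the low-dimensional $\alpha_{t}$.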
arXiv Detail & Related papers (2020-02-26T18:21:34Z) - Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.