Awesome-META+: Meta-Learning Research and Learning Platform
- URL: http://arxiv.org/abs/2304.12921v1
- Date: Mon, 24 Apr 2023 03:09:25 GMT
- Title: Awesome-META+: Meta-Learning Research and Learning Platform
- Authors: Jingyao Wang, Chuyuan Zhang, Ye Ding, Yuxuan Yang
- Abstract summary: Awesome-META+ is a complete and reliable meta-learning framework application and learning platform.
The project aims to promote the development of meta-learning and the expansion of the community.
- Score: 3.7381507346856524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence technology has already had a profound
impact on fields such as the economy, industry, and education, but its impact
is still limited. Meta-learning, also known as "learning to learn", offers a
path toward general artificial intelligence that could break through the
current AI bottleneck. However, meta-learning started late, and there are fewer
projects than in fields such as CV and NLP. Each deployment requires
substantial experience to configure the environment, debug the code, or even
rewrite it, and the existing frameworks are isolated from one another.
Moreover, few platforms currently focus exclusively on meta-learning or
provide learning materials for novices, so the entry threshold is relatively
high. To address these problems, we propose Awesome-META+, a meta-learning
framework integration and learning platform that provides a complete and
reliable meta-learning framework application and learning environment. The
project aims to promote the development of meta-learning and the growth of its
community, including but not limited to the following functions: 1) a complete
and reliable meta-learning framework that can adapt to multi-domain tasks such
as object detection, image classification, and reinforcement learning; 2) a
convenient and simple model deployment scheme that provides easy meta-learning
transfer and usage methods, lowering the threshold of meta-learning and
improving efficiency; 3) comprehensive learning materials and research
resources; 4) objective and credible performance analysis and discussion.
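The abstract describes meta-learning ("learning to learn") as optimizing for fast adaptation to new tasks. As a concrete illustration of the idea behind optimization-based methods in this family, here is a minimal first-order MAML-style sketch on toy linear-regression tasks. The scalar model, task distribution, and step sizes are all illustrative assumptions for this sketch, not taken from the Awesome-META+ platform:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_mse(w, x, y):
    """Gradient of the mean squared error for the scalar model y_hat = w * x."""
    return 2.0 * np.mean((w * x - y) * x)

def sample_data(a, n=10):
    """Draw n points from the toy task y = a * x with inputs in [-1, 1]."""
    x = rng.uniform(-1.0, 1.0, size=n)
    return x, a * x

alpha = 1.0    # inner-loop (adaptation) step size, tuned for this toy problem
beta = 0.05    # outer-loop (meta) step size
w = 0.0        # the meta-learned initialisation

for _ in range(2000):
    a = rng.uniform(0.5, 2.0)          # sample a task (here, a slope)
    x_s, y_s = sample_data(a)          # support set: used to adapt
    x_q, y_q = sample_data(a)          # query set: used to score the adaptation
    w_task = w - alpha * grad_mse(w, x_s, y_s)   # one inner gradient step
    w -= beta * grad_mse(w_task, x_q, y_q)       # first-order meta-update

# After meta-training, one gradient step on a held-out task sharply cuts its error.
x_new, y_new = sample_data(1.7, n=50)
w_adapted = w - alpha * grad_mse(w, x_new, y_new)
```

After training, `w` sits near the centre of the task distribution, so a single inner step lands close to any individual task's optimum. Full MAML would also differentiate through the inner step; the first-order variant sketched here deliberately skips that for simplicity.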
Related papers
- ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning [49.447777286862994]
ConML is a universal meta-learning framework that can be applied to various meta-learning algorithms.
We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms.
arXiv Detail & Related papers (2024-10-08T12:22:10Z)
- Advances and Challenges in Meta-Learning: A Technical Review [7.149235250835041]
Meta-learning empowers learning systems with the ability to acquire knowledge from multiple tasks.
This review emphasizes its importance in real-world applications where data may be scarce or expensive to obtain.
arXiv Detail & Related papers (2023-07-10T17:32:15Z)
- Concept Discovery for Fast Adaptation [42.81705659613234]
We introduce concept discovery to the few-shot learning problem, where we achieve more effective adaptation by meta-learning the structure among the data features.
Our proposed method, Concept-Based Model-Agnostic Meta-Learning (COMAML), has been shown to achieve consistent improvements on structured data for both synthesized and real-world datasets.
arXiv Detail & Related papers (2023-01-19T02:33:58Z)
- Learning with Limited Samples -- Meta-Learning and Applications to Communication Systems [46.760568562468606]
Few-shot meta-learning optimizes learning algorithms so that they can adapt to new tasks quickly and efficiently.
This review monograph provides an introduction to meta-learning by covering principles, algorithms, theory, and engineering applications.
arXiv Detail & Related papers (2022-10-03T17:15:36Z)
- On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning [71.55412580325743]
We show that multi-task pretraining with fine-tuning on new tasks performs equally as well, or better, than meta-pretraining with meta test-time adaptation.
This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL.
arXiv Detail & Related papers (2022-06-07T13:24:00Z)
- Online Structured Meta-learning [137.48138166279313]
Current online meta-learning algorithms are limited to learning a globally-shared meta-learner.
We propose an online structured meta-learning (OSML) framework to overcome this limitation.
Experiments on three datasets demonstrate the effectiveness and interpretability of our proposed framework.
arXiv Detail & Related papers (2020-10-22T09:10:31Z)
- A Comprehensive Overview and Survey of Recent Advances in Meta-Learning [0.0]
Meta-learning, also known as learning-to-learn, seeks rapid and accurate model adaptation to unseen tasks.
We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning and Bayesian meta-learning framework.
arXiv Detail & Related papers (2020-04-17T03:11:08Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe this simple method achieves competitive performance to state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
- Automated Relational Meta-learning [95.02216511235191]
We propose an automated relational meta-learning framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph.
We conduct extensive experiments on 2D toy regression and few-shot image classification and the results demonstrate the superiority of ARML over state-of-the-art baselines.
arXiv Detail & Related papers (2020-01-03T07:02:25Z)
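Several entries above (e.g. Meta-Baseline) describe metric-based few-shot classification, in which query points are assigned to the class whose support-set centroid is nearest in an embedding space. A minimal sketch of that idea, using raw feature vectors in place of a learned embedding network (the function name and toy data are illustrative, not taken from any of the listed codebases):

```python
import numpy as np

def nearest_centroid_predict(support_x, support_y, query_x):
    """Classify each query point by cosine similarity to its class centroid."""
    classes = np.unique(support_y)
    # One prototype per class: the mean of that class's support examples.
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    q = query_x / np.linalg.norm(query_x, axis=1, keepdims=True)
    sims = q @ protos.T                       # cosine similarity matrix
    return classes[np.argmax(sims, axis=1)]   # nearest prototype wins

# Toy 2-way, 2-shot episode: two well-separated classes in 2-D feature space.
support_x = np.array([[1.0, 0.1], [0.9, 0.0], [0.1, 1.0], [0.0, 0.9]])
support_y = np.array([0, 0, 1, 1])
query_x = np.array([[1.0, 0.0], [0.0, 1.0]])
print(nearest_centroid_predict(support_x, support_y, query_x))  # → [0 1]
```

In methods like Meta-Baseline, the embedding that produces `support_x` and `query_x` comes from a whole-classification pre-trained network, and the cosine-similarity evaluation metric above is what the subsequent meta-learning stage optimizes.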
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.