Awesome-META+: Meta-Learning Research and Learning Platform
- URL: http://arxiv.org/abs/2304.12921v1
- Date: Mon, 24 Apr 2023 03:09:25 GMT
- Title: Awesome-META+: Meta-Learning Research and Learning Platform
- Authors: Jingyao Wang, Chuyuan Zhang, Ye Ding, Yuxuan Yang
- Abstract summary: Awesome-META+ is a complete and reliable meta-learning framework application and learning platform.
The project aims to promote the development of meta-learning and the expansion of the community.
- Score: 3.7381507346856524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence technology has already had a profound impact
on fields such as the economy, industry, and education, but its reach is still
limited. Meta-learning, also known as "learning to learn", offers a route toward
general artificial intelligence that could break through the current AI
bottleneck. However, meta-learning started later and has fewer projects compared
with fields such as CV and NLP. Each deployment requires considerable experience
to configure the environment, debug, or even rewrite code, and the existing
frameworks are isolated from one another. Moreover, few platforms focus
exclusively on meta-learning or provide learning materials for novices, so the
entry threshold is relatively high. To address this, Awesome-META+, a
meta-learning framework integration and learning platform, is proposed to solve
the above problems and provide a complete and reliable environment for applying
and learning meta-learning. The project aims to promote the development of
meta-learning and the expansion of its community, including but not limited to
the following functions: 1) a complete and reliable meta-learning framework that
adapts to tasks across fields such as object detection, image classification,
and reinforcement learning; 2) a convenient and simple model deployment scheme
that provides meta-learning transfer and usage methods to lower the threshold of
meta-learning and improve efficiency; 3) comprehensive research materials for
learning; 4) objective and credible performance analysis and discussion.
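The optimization-based workflow that such a framework packages can be summarized in a short sketch. Below is a minimal MAML-style inner/outer training loop in plain PyTorch on a toy sinusoid-regression problem; the function names, model size, toy task, and hyperparameters are illustrative assumptions and do not reflect the actual Awesome-META+ API.

```python
# Minimal MAML-style meta-training loop on synthetic sinusoid regression,
# written in plain PyTorch. Illustrative sketch only: names, model size, and
# hyperparameters are arbitrary and not taken from Awesome-META+.
import torch
import torch.nn as nn

def sample_task(k_support=10, k_query=10):
    """Sample one regression task y = A * sin(x + phase) with support/query splits."""
    amp = torch.rand(1) * 4.9 + 0.1
    phase = torch.rand(1) * 3.14159
    def draw(k):
        x = torch.rand(k, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return draw(k_support), draw(k_query)

def init_params():
    """Parameters of a small 2-hidden-layer MLP, stored functionally in a dict."""
    def layer(fan_in, fan_out):
        w = (torch.randn(fan_in, fan_out) * (2.0 / fan_in) ** 0.5).requires_grad_()
        return w, torch.zeros(fan_out, requires_grad=True)
    p = {}
    p["w1"], p["b1"] = layer(1, 40)
    p["w2"], p["b2"] = layer(40, 40)
    p["w3"], p["b3"] = layer(40, 1)
    return p

def forward(params, x):
    h = torch.relu(x @ params["w1"] + params["b1"])
    h = torch.relu(h @ params["w2"] + params["b2"])
    return h @ params["w3"] + params["b3"]

params = init_params()
meta_opt = torch.optim.Adam(params.values(), lr=1e-3)
inner_lr, tasks_per_batch, loss_fn = 0.01, 4, nn.MSELoss()

for step in range(1000):                      # outer (meta) loop
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(tasks_per_batch):
        (xs, ys), (xq, yq) = sample_task()
        # Inner loop: one gradient step on the support set (create_graph keeps
        # the adaptation differentiable for the outer update).
        grads = torch.autograd.grad(loss_fn(forward(params, xs), ys),
                                    list(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}
        # Outer objective: loss of the adapted parameters on the query set.
        meta_loss = meta_loss + loss_fn(forward(adapted, xq), yq)
    (meta_loss / tasks_per_batch).backward()
    meta_opt.step()
```

The inner loop adapts a copy of the parameters to each task's support set, and the outer loop updates the shared initialization so that this adaptation generalizes to the query set; a framework of the kind described above mainly automates this pattern across models and task types.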
Related papers
- ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning [49.447777286862994]
ConML is a universal meta-learning framework that can be applied to various meta-learning algorithms.
We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms.
arXiv Detail & Related papers (2024-10-08T12:22:10Z) - Context-Aware Meta-Learning [52.09326317432577]
We propose a meta-learning algorithm that emulates Large Language Models by learning new visual concepts during inference without fine-tuning.
Our approach exceeds or matches the state-of-the-art algorithm, P>M>F, on 8 out of 11 meta-learning benchmarks.
arXiv Detail & Related papers (2023-10-17T03:35:27Z) - Panoramic Learning with A Standardized Machine Learning Formalism [116.34627789412102]
This paper presents a standardized equation of the learning objective that offers a unifying understanding of diverse ML algorithms.
It also provides guidance for the mechanical design of new ML solutions, and serves as a promising vehicle towards panoramic learning with all experiences.
arXiv Detail & Related papers (2021-08-17T17:44:38Z) - A Metamodel and Framework for Artificial General Intelligence From Theory to Practice [11.756425327193426]
This paper introduces a new metamodel-based knowledge representation that significantly improves autonomous learning and adaptation.
We have applied the metamodel to problems ranging from time series analysis to computer vision and natural language understanding.
One surprising consequence of the metamodel is that it enables a new level of autonomous learning and optimal functioning for machine intelligences.
arXiv Detail & Related papers (2021-02-11T16:45:58Z) - MELD: Meta-Reinforcement Learning from Images via Latent State Models [109.1664295663325]
We develop an algorithm for meta-RL from images that performs inference in a latent state model to quickly acquire new skills.
MELD is the first meta-RL algorithm trained in a real-world robotic control setting from images.
arXiv Detail & Related papers (2020-10-26T23:50:30Z) - Meta-Learning Requires Meta-Augmentation [13.16019567695033]
We describe two forms of meta-learning overfitting, and show that they appear experimentally in common benchmarks.
We then use an information-theoretic framework to discuss meta-augmentation, a way to add randomness that discourages the base learner and model from learning trivial solutions.
We demonstrate that meta-augmentation produces large complementary benefits to recently proposed meta-regularization techniques.
arXiv Detail & Related papers (2020-07-10T18:04:04Z) - A Comprehensive Overview and Survey of Recent Advances in Meta-Learning [0.0]
Meta-learning, also known as learning-to-learn, seeks rapid and accurate model adaptation to unseen tasks.
We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning and Bayesian meta-learning framework.
arXiv Detail & Related papers (2020-04-17T03:11:08Z) - Meta-Learning in Neural Networks: A Survey [4.588028371034406]
This survey describes the contemporary meta-learning landscape.
We first discuss definitions of meta-learning and position it with respect to related fields.
We then propose a new taxonomy that provides a more comprehensive breakdown of the space of meta-learning methods.
arXiv Detail & Related papers (2020-04-11T16:34:24Z) - Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks [55.66438591090072]
We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models trained classically.
We develop a regularizer which boosts the performance of standard training routines for few-shot classification.
arXiv Detail & Related papers (2020-02-17T03:18:45Z) - Towards explainable meta-learning [5.802346990263708]
Meta-learning aims at discovering how different machine learning algorithms perform on a wide range of predictive tasks.
State-of-the-art approaches are focused on searching for the best meta-model but do not explain how these different aspects contribute to its performance.
We propose techniques developed for eXplainable Artificial Intelligence (XAI) to examine and extract knowledge from black-box surrogate models.
arXiv Detail & Related papers (2020-02-11T09:42:29Z) - Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning.
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
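The last paper above reduces meta-learning to ordinary supervised learning by treating pairs of task-specific data sets and target models as (feature, label) samples. A minimal sketch of that view, assuming a DeepSets-style amortized predictor and the same toy sinusoid tasks as in the earlier sketch (both are illustrative choices, not the construction used in the paper), might look like this:

```python
# Sketch of the "meta-learning as supervised learning" view: each training
# example is a whole support set plus a query input, and the label is the query
# target, so the meta-learner is fit with an ordinary supervised loss.
# The encoder and toy task generator are illustrative assumptions.
import torch
import torch.nn as nn

class SetToPrediction(nn.Module):
    """Embed a support set permutation-invariantly, then predict query targets."""
    def __init__(self, hidden=64):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, xs, ys, xq):
        # Permutation-invariant summary of the support (x, y) pairs.
        task_embedding = self.phi(torch.cat([xs, ys], dim=-1)).mean(dim=0)
        # Predict each query target from (task embedding, query input).
        context = task_embedding.expand(xq.size(0), -1)
        return self.rho(torch.cat([context, xq], dim=-1))

def sample_task(k=10):
    """Toy sinusoid task, as in the sketch earlier on this page."""
    amp, phase = torch.rand(1) * 4.9 + 0.1, torch.rand(1) * 3.14159
    def draw(n):
        x = torch.rand(n, 1) * 10 - 5
        return x, amp * torch.sin(x + phase)
    return draw(k), draw(k)

model, loss_fn = SetToPrediction(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    (xs, ys), (xq, yq) = sample_task()        # one (dataset, query) "sample"
    loss = loss_fn(model(xs, ys, xq), yq)     # ordinary supervised loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Here each training step consumes one whole support set as the input and fits the query targets with a standard loss, so generalizing to new tasks is just generalizing over this dataset-level sample distribution.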
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.