Towards explainable meta-learning
- URL: http://arxiv.org/abs/2002.04276v2
- Date: Mon, 12 Jul 2021 12:23:23 GMT
- Title: Towards explainable meta-learning
- Authors: Katarzyna Woźnica and Przemysław Biecek
- Abstract summary: Meta-learning aims at discovering how different machine learning algorithms perform on a wide range of predictive tasks.
State-of-the-art approaches focus on searching for the best meta-model but do not explain how these different aspects contribute to its performance.
We propose to use techniques developed for eXplainable Artificial Intelligence (XAI) to examine and extract knowledge from black-box surrogate models.
- Score: 5.802346990263708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning is a field that aims at discovering how different machine
learning algorithms perform on a wide range of predictive tasks. Such knowledge
speeds up hyperparameter tuning and feature engineering. Surrogate models use
various aspects of the predictive task, such as meta-features and landmarker
models, to predict the expected performance. State-of-the-art approaches focus
on searching for the best meta-model but do not explain how these different
explain how these different aspects contribute to its performance. However, to
build a new generation of meta-models we need a deeper understanding of the
importance and effect of meta-features on model tunability. In this paper, we
propose to use techniques developed for eXplainable Artificial Intelligence
(XAI) to examine and extract knowledge from black-box surrogate models. To our
knowledge, this is the first paper that shows how post-hoc explainability can
be used to improve meta-learning.
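A concrete way to picture the proposal above: fit a black-box surrogate that maps task meta-features to expected model performance, then interrogate it with standard post-hoc XAI tools. The sketch below is a minimal illustration assuming a tabular meta-dataset and scikit-learn's permutation importance and partial dependence as the explainers; the meta-feature names and the synthetic performance signal are placeholders, not data or code from the paper.

```python
# Hypothetical illustration: post-hoc XAI applied to a black-box surrogate
# meta-model. Meta-features and the performance signal are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence, permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_tasks = 500
meta_df = pd.DataFrame({
    "n_instances":    rng.integers(100, 100_000, n_tasks),
    "n_features":     rng.integers(2, 500, n_tasks),
    "class_entropy":  rng.uniform(0.1, 1.0, n_tasks),
    "landmarker_1nn": rng.uniform(0.4, 0.95, n_tasks),
})
# Synthetic target: performance (e.g., AUC) of a tuned model on each task.
perf = (0.5 + 0.3 * meta_df["landmarker_1nn"]
        + 0.05 * meta_df["class_entropy"]
        + rng.normal(0, 0.02, n_tasks))

X_tr, X_te, y_tr, y_te = train_test_split(meta_df, perf, random_state=0)

# The black-box surrogate meta-model: meta-features -> expected performance.
surrogate = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# Permutation importance ranks meta-features by how much shuffling each one
# degrades the surrogate's predictions on held-out tasks.
imp = permutation_importance(surrogate, X_te, y_te, n_repeats=20,
                             random_state=0)
for name, score in sorted(zip(meta_df.columns, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:15s} importance={score:.4f}")

# Partial dependence shows the marginal effect of one meta-feature on the
# predicted performance, averaged over the remaining meta-features.
pdp = partial_dependence(surrogate, X_tr, ["landmarker_1nn"])
print(pdp["average"].shape)
```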
Related papers
- Learn To Learn More Precisely [30.825058308218047]
"Learn to learn more precisely" aims to make the model learn precise target knowledge from data.
We propose a simple and effective meta-learning framework named Meta Self-Distillation (MSD) to maximize the consistency of learned knowledge.
MSD exhibits remarkable performance in few-shot classification tasks in both standard and augmented scenarios.
arXiv Detail & Related papers (2024-08-08T17:01:26Z)
- Meta-Learning with Self-Improving Momentum Target [72.98879709228981]
We propose Self-improving Momentum Target (SiMT) to improve the performance of a meta-learner.
SiMT generates the target model by adapting from the temporal ensemble of the meta-learner.
We show that SiMT brings a significant performance gain when combined with a wide range of meta-learning methods.
arXiv Detail & Related papers (2022-10-11T06:45:15Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI in practice to improve various properties of ML models.
We show empirically, through experiments in toy and realistic settings, how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Bootstrapped Meta-Learning [48.017607959109924]
We propose an algorithm that tackles a challenging meta-optimisation problem by letting the meta-learner teach itself.
The algorithm first bootstraps a target from the meta-learner, then optimises the meta-learner by minimising the distance to that target under a chosen (pseudo-)metric.
We achieve a new state of the art for model-free agents on the Atari ALE benchmark, improve upon MAML in few-shot learning, and demonstrate how our approach opens up new possibilities.
arXiv Detail & Related papers (2021-09-09T18:29:05Z)
- Learning an Explicit Hyperparameter Prediction Function Conditioned on Tasks [62.63852372239708]
Meta-learning aims to learn the learning methodology for machine learning from observed tasks, so as to generalize to new query tasks.
We interpret such a learning methodology as learning an explicit hyperparameter prediction function shared by all training tasks.
Such a setting guarantees that the meta-learned learning methodology is able to flexibly fit diverse query tasks.
arXiv Detail & Related papers (2021-07-06T04:05:08Z)
- A Metamodel and Framework for Artificial General Intelligence From Theory to Practice [11.756425327193426]
This paper introduces a new metamodel-based knowledge representation that significantly improves autonomous learning and adaptation.
We have applied the metamodel to problems spanning time series analysis, computer vision, and natural language understanding.
A surprising consequence of the metamodel is that it enables a new level of autonomous learning and optimal functioning for machine intelligences.
arXiv Detail & Related papers (2021-02-11T16:45:58Z)
- Learning Abstract Task Representations [0.6690874707758511]
We propose a method to induce new abstract meta-features as latent variables in a deep neural network.
We demonstrate our methodology using a deep neural network as a feature extractor.
arXiv Detail & Related papers (2021-01-19T20:31:02Z)
- A Comprehensive Overview and Survey of Recent Advances in Meta-Learning [0.0]
Meta-learning, also known as learning-to-learn, seeks rapid and accurate model adaptation to unseen tasks.
We briefly introduce meta-learning methodologies in the following categories: black-box meta-learning, metric-based meta-learning, layered meta-learning, and Bayesian meta-learning frameworks.
arXiv Detail & Related papers (2020-04-17T03:11:08Z)
- Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks [55.66438591090072]
We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models trained classically.
We develop a regularizer which boosts the performance of standard training routines for few-shot classification.
arXiv Detail & Related papers (2020-02-17T03:18:45Z)
- Revisiting Meta-Learning as Supervised Learning [69.2067288158133]
We aim to provide a principled, unifying framework by revisiting and strengthening the connection between meta-learning and traditional supervised learning.
By treating pairs of task-specific data sets and target models as (feature, label) samples, we can reduce many meta-learning algorithms to instances of supervised learning.
This view not only unifies meta-learning into an intuitive and practical framework but also allows us to transfer insights from supervised learning directly to improve meta-learning (a toy instance of this reduction is sketched after this list).
arXiv Detail & Related papers (2020-02-03T06:13:01Z)
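The reduction described in "Revisiting Meta-Learning as Supervised Learning" above can be made concrete with a toy sketch: each task's dataset is compressed into a fixed-length encoding (the feature), the parameters of that task's fitted model serve as the label, and an ordinary regressor then plays the role of the meta-learner. The set encoding and synthetic tasks below are assumptions for illustration, not the construction used in the paper.

```python
# Toy instance of "meta-learning as supervised learning": (dataset, model)
# pairs become (feature, label) samples for a standard regressor.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

def make_task():
    """A toy regression task y = w * x + noise with a task-specific slope w."""
    w = rng.uniform(-2, 2)
    x = rng.uniform(-1, 1, 30)
    y = w * x + rng.normal(0, 0.1, 30)
    return x, y, w

def encode(x, y):
    """Fixed-length summary of a task's dataset (a crude set encoding)."""
    return np.array([x.mean(), x.std(), y.mean(), y.std(), np.cov(x, y)[0, 1]])

# Meta-dataset: features = dataset encodings, labels = per-task target-model
# parameters (here, the slope of the best linear fit).
tasks = [make_task() for _ in range(200)]
X_meta = np.stack([encode(x, y) for x, y, _ in tasks])
y_meta = np.array([np.polyfit(x, y, 1)[0] for x, y, _ in tasks])

# An ordinary supervised learner now acts as the meta-learner.
meta_model = Ridge().fit(X_meta, y_meta)

# Meta-test: predict the target model for an unseen task directly from its
# dataset encoding, with no inner optimisation loop.
x_new, y_new, w_true = make_task()
w_pred = meta_model.predict(encode(x_new, y_new)[None, :])[0]
print(f"true slope {w_true:.3f}, predicted slope {w_pred:.3f}")
```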
This list is automatically generated from the titles and abstracts of the papers on this site.