Learning by Self-Explanation, with Application to Neural Architecture Search
- URL: http://arxiv.org/abs/2012.12899v2
- Date: Thu, 11 Mar 2021 01:05:48 GMT
- Title: Learning by Self-Explanation, with Application to Neural Architecture Search
- Authors: Ramtin Hosseini, Pengtao Xie
- Abstract summary: We propose a novel machine learning method called learning by self-explanation (LeaSE).
In our approach, an explainer model improves its learning ability by trying to clearly explain to an audience model how a prediction outcome is made.
We apply LeaSE to neural architecture search on CIFAR-100, CIFAR-10, and ImageNet.
- Score: 12.317568257671427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning by self-explanation is an effective technique in human
learning, where students explain a learned topic to themselves to deepen
their understanding of it. It is interesting to investigate whether this
explanation-driven learning methodology, broadly used by humans, is also
helpful for improving machine learning. Motivated by this inspiration, we
propose a novel machine learning method called learning by self-explanation
(LeaSE). In our approach, an explainer model improves its learning ability
by trying to clearly explain to an audience model how a prediction outcome
is made. LeaSE is formulated as a four-level optimization problem involving a
sequence of four learning stages which are conducted end-to-end in a unified
framework: 1) explainer learns; 2) explainer explains; 3) audience learns; 4)
explainer re-learns based on the performance of the audience. We develop an
efficient algorithm to solve the LeaSE problem. We apply LeaSE to neural
architecture search on CIFAR-100, CIFAR-10, and ImageNet. Experimental results
strongly demonstrate the effectiveness of our method.
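To make the four stages concrete, here is a minimal, self-contained sketch of one LeaSE-style training loop. Everything in it is an illustrative assumption rather than the paper's actual formulation: the data are random toys, the "explanation" is an input-saliency map, the "architecture" is a softmax-weighted choice between two linear branches, and the fourth stage uses a first-order shortcut in place of an exact hypergradient.

```python
# Hedged sketch of the four LeaSE stages; not the authors' code.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X_tr, y_tr = torch.randn(256, 16), torch.randint(0, 4, (256,))
X_val, y_val = torch.randn(64, 16), torch.randint(0, 4, (64,))

# Explainer: a softmax(alpha)-weighted mixture of two linear branches,
# standing in for a searchable architecture; audience: a plain linear model.
W1 = torch.randn(16, 4, requires_grad=True)
W2 = torch.randn(16, 4, requires_grad=True)
alpha = torch.zeros(2, requires_grad=True)   # architecture logits
V = torch.randn(16, 4, requires_grad=True)   # audience weights

def explainer_logits(x):
    w = F.softmax(alpha, dim=0)
    return w[0] * (x @ W1) + w[1] * (x @ W2)

opt_w = torch.optim.SGD([W1, W2], lr=0.1)
opt_v = torch.optim.SGD([V], lr=0.1)
opt_a = torch.optim.SGD([alpha], lr=0.05)

for step in range(100):
    # Stage 1: explainer learns on the training set.
    opt_w.zero_grad()
    F.cross_entropy(explainer_logits(X_tr), y_tr).backward()
    opt_w.step()

    # Stage 2: explainer explains -- input saliency as a stand-in explanation.
    x = X_tr.clone().requires_grad_(True)
    F.cross_entropy(explainer_logits(x), y_tr).backward()
    saliency = (x.grad.abs() / (x.grad.abs().max() + 1e-8)).detach()

    # Stage 3: audience learns from explanation-weighted inputs.
    opt_v.zero_grad()
    F.cross_entropy((X_tr * saliency) @ V, y_tr).backward()
    opt_v.step()

    # Stage 4: explainer re-learns -- architecture logits are updated from the
    # audience's validation loss, flowing through the (non-detached) saliency.
    xv = X_val.clone().requires_grad_(True)
    sal_v, = torch.autograd.grad(
        F.cross_entropy(explainer_logits(xv), y_val), xv, create_graph=True)
    sal_v = sal_v.abs() / (sal_v.abs().max() + 1e-8)
    opt_a.zero_grad()
    F.cross_entropy((X_val * sal_v) @ V, y_val).backward()
    opt_a.step()

print("architecture weights:", F.softmax(alpha, dim=0).tolist())
```

The four updates mirror the four optimization levels: network weights, explanations, audience weights, and architecture variables are refreshed in sequence, with the audience's validation performance driving the architecture update.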
Related papers
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal, and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert; a toy reward construction is sketched after the citation below.
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
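The mechanism named in the summary, using intervention signals themselves as rewards, can be illustrated with a tiny helper; the function name and the -1/0 reward convention below are assumptions for illustration, not the paper's API.

```python
# Hedged sketch: turn expert-intervention flags into per-step rewards that an
# off-policy RL algorithm can then maximize (fewer interventions = better).
from typing import List

def rewards_from_interventions(intervened: List[bool]) -> List[float]:
    """Assign a negative reward wherever the human expert stepped in."""
    return [-1.0 if flag else 0.0 for flag in intervened]

# Example: the expert intervened at timesteps 2 and 3.
print(rewards_from_interventions([False, False, True, True, False]))
# -> [0.0, 0.0, -1.0, -1.0, 0.0]
```

Training against such rewards pushes the policy toward states where the expert sees no need to intervene, rather than toward copying the expert's own actions.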
- Exploring Effective Factors for Improving Visual In-Context Learning [56.14208975380607]
In-Context Learning (ICL) means understanding a new task from a few demonstrations (a.k.a. the prompt) and predicting on new inputs without tuning the model.
This paper shows that prompt selection and prompt fusion are two major factors that directly impact the inference performance of visual in-context learning.
We propose prompt-SelF, a simple framework for visual in-context learning.
arXiv Detail & Related papers (2023-04-10T17:59:04Z)
- Teaching Algorithmic Reasoning via In-context Learning [45.45116247046013]
We show that it is possible to teach algorithmic reasoning to large language models (LLMs) via in-context learning.
We evaluate our approach on a variety of arithmetic and quantitative reasoning tasks.
We achieve error reductions of approximately 10x, 9x, 5x, and 2x on these tasks, respectively, compared to the best available baselines; an illustrative prompt format is sketched after the citation below.
arXiv Detail & Related papers (2022-11-15T06:12:28Z)
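For flavor, here is the kind of few-shot prompt that teaches an algorithm (digit-by-digit addition with explicit carries) through worked examples; the exact prompt format used in the paper is not reproduced here, so treat this as an assumed illustration.

```python
# Hedged sketch: build a prompt whose demonstrations spell out the algorithm
# step by step, so an LLM can imitate the procedure on a new input.
DEMOS = """\
Q: 37 + 25
A: units: 7 + 5 = 12, write 2 carry 1; tens: 3 + 2 + 1 = 6. Answer: 62.

Q: 48 + 16
A: units: 8 + 6 = 14, write 4 carry 1; tens: 4 + 1 + 1 = 6. Answer: 64.
"""

def make_prompt(a: int, b: int) -> str:
    """Prepend worked demonstrations to a new question."""
    return f"{DEMOS}\nQ: {a} + {b}\nA:"

print(make_prompt(53, 29))
```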
- Implicit Offline Reinforcement Learning via Supervised Learning [83.8241505499762]
Offline Reinforcement Learning (RL) via Supervised Learning is a simple and effective way to learn robotic skills from a dataset collected by policies of different expertise levels.
We show how implicit models can leverage return information and match or outperform explicit algorithms to acquire robotic skills from fixed datasets.
arXiv Detail & Related papers (2022-10-21T21:59:42Z)
- Learning from Mistakes -- A Framework for Neural Architecture Search [13.722450738258015]
We propose a novel machine learning method called Learning From Mistakes (LFM).
LFM improves the learner's ability to learn by focusing more on its mistakes during revision; a toy reweighting sketch follows the citation below.
We apply the LFM framework to neural architecture search on CIFAR-10, CIFAR-100, and ImageNet.
arXiv Detail & Related papers (2021-11-11T18:04:07Z)
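A toy version of the revision idea, with details assumed rather than taken from the paper: train once, find the examples the model got wrong, then train again with those examples upweighted in the loss.

```python
# Hedged sketch of mistake-focused revision; not the LFM algorithm itself.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X, y = torch.randn(256, 16), torch.randint(0, 4, (256,))
W = torch.randn(16, 4, requires_grad=True)
opt = torch.optim.SGD([W], lr=0.1)

def train_pass(example_weights):
    # Minimize a per-example weighted cross-entropy.
    for _ in range(50):
        opt.zero_grad()
        per_example = F.cross_entropy(X @ W, y, reduction="none")
        (per_example * example_weights).mean().backward()
        opt.step()

train_pass(torch.ones(256))                          # first pass: uniform
with torch.no_grad():
    mistakes = ((X @ W).argmax(dim=1) != y).float()  # missed examples
train_pass(1.0 + 2.0 * mistakes)                     # revision: focus on them
```

In the actual framework this idea drives architecture search; the upweighting factor of 2.0 here is arbitrary.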
- Interleaving Learning, with Application to Neural Architecture Search [12.317568257671427]
We propose a novel machine learning framework referred to as interleaving learning (IL).
In our framework, a set of models collaboratively learn a data encoder in an interleaving fashion.
We apply interleaving learning to search neural architectures for image classification on CIFAR-10, CIFAR-100, and ImageNet.
arXiv Detail & Related papers (2021-03-12T00:54:22Z)
- Learning by Teaching, with Application to Neural Architecture Search [10.426533624387305]
We propose a novel ML framework referred to as learning by teaching (LBT).
In LBT, a teacher model improves itself by teaching a student model to learn well.
Based on how the student performs on a validation dataset, the teacher re-learns its model and re-teaches the student until the student achieves strong validation performance.
arXiv Detail & Related papers (2021-03-11T23:50:38Z)
- Small-Group Learning, with Application to Neural Architecture Search [17.86826990290058]
In human learning, a small group of students works together towards the same learning objective: they explain their understanding of a topic to their peers, compare their ideas, and help each other troubleshoot problems.
In this paper, we investigate whether this human learning method can be borrowed to train better machine learning models, by developing a novel ML framework -- small-group learning (SGL).
SGL is formulated as a multi-level optimization framework consisting of three learning stages: each learner trains a model independently and uses this model to perform pseudo-labeling; each learner then trains another model using datasets pseudo-labeled by the other learners (a minimal sketch of this exchange follows the citation below).
arXiv Detail & Related papers (2020-12-23T05:56:47Z)
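The pseudo-labeling exchange can be sketched with two toy learners; all details below (linear models, a shared unlabeled pool, hard pseudo-labels) are assumptions for illustration, and the architecture-update stage is omitted since the summary is truncated.

```python
# Hedged sketch of SGL-style peer pseudo-labeling; not the authors' code.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
X_lab, y_lab = torch.randn(128, 16), torch.randint(0, 4, (128,))
X_unlab = torch.randn(256, 16)                 # shared unlabeled pool

def fit(X, y, steps=100):
    W = torch.randn(16, 4, requires_grad=True)
    opt = torch.optim.SGD([W], lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(X @ W, y).backward()
        opt.step()
    return W

# Stage 1: each learner trains independently and pseudo-labels the pool.
W_a, W_b = fit(X_lab, y_lab), fit(X_lab, y_lab)
pseudo_a = (X_unlab @ W_a).argmax(dim=1)
pseudo_b = (X_unlab @ W_b).argmax(dim=1)

# Stage 2: each learner trains a second model on its peer's pseudo-labels.
W_a2 = fit(X_unlab, pseudo_b)
W_b2 = fit(X_unlab, pseudo_a)
```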
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Provable Representation Learning for Imitation Learning via Bi-level Optimization [60.059520774789654]
A common strategy in modern learning systems is to learn a representation that is useful for many tasks.
We study this strategy in the imitation learning setting for Markov decision processes (MDPs) where multiple experts' trajectories are available.
We instantiate this framework for the imitation learning settings of behavior cloning and learning from observations alone.
arXiv Detail & Related papers (2020-02-24T21:03:52Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks: an anchoring effect on the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.