Learning to Learn with Indispensable Connections
- URL: http://arxiv.org/abs/2304.02862v1
- Date: Thu, 6 Apr 2023 04:53:13 GMT
- Title: Learning to Learn with Indispensable Connections
- Authors: Sambhavi Tiwari, Manas Gogoi, Shekhar Verma, Krishna Pratap Singh
- Abstract summary: We propose a novel meta-learning method called Meta-LTH that includes indispensable (necessary) connections.
Our method improves the classification accuracy by approximately 2% (20-way 1-shot task setting) on the Omniglot dataset.
- Score: 6.040904021861969
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-learning aims to solve unseen tasks with few labelled instances.
Nevertheless, despite its effectiveness for quick learning in existing
optimization-based methods, it has several flaws. Inconsequential connections
are frequently learned during meta-training, which results in an
over-parameterized neural network. Because of this, meta-testing incurs
unnecessary computation and extra memory overhead. To overcome these flaws, we
propose a novel meta-learning method called Meta-LTH that retains only the
indispensable (necessary) connections. We apply the lottery ticket hypothesis
technique known as magnitude pruning to generate these crucial connections,
which can effectively solve the few-shot learning problem. We aim to do two
things: (a) find a sub-network capable of more adaptive meta-learning, and (b)
learn new low-level features of unseen tasks and recombine those features with
the already learned features during the meta-test phase. Experimental results
show that our proposed Meta-LTH method outperforms the existing first-order
MAML algorithm on three different classification datasets. Our method improves
classification accuracy by approximately 2% (20-way 1-shot task setting) on
the Omniglot dataset.
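
The pruning step described here is standard lottery-ticket-style magnitude pruning. Below is a minimal PyTorch sketch of per-tensor magnitude pruning with binary masks, assuming a generic model; the function names and the 20% pruning ratio are illustrative assumptions, not the authors' implementation.

```python
import torch

def magnitude_prune(model: torch.nn.Module, prune_ratio: float = 0.2) -> dict:
    """Build a binary mask per weight matrix, dropping the smallest-magnitude
    fraction `prune_ratio` of entries (the inconsequential connections)."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:                      # skip biases and norm layers
            continue
        k = max(1, int(prune_ratio * param.numel()))
        threshold = param.detach().abs().flatten().kthvalue(k).values
        masks[name] = (param.detach().abs() > threshold).float()
    return masks

def apply_masks(model: torch.nn.Module, masks: dict) -> None:
    """Zero pruned weights in place, keeping only the indispensable
    connections that form the sparse sub-network (the 'winning ticket')."""
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                param.mul_(masks[name])
```

In the usual lottery-ticket procedure, such masks are computed after an initial training round and applied to the rewound initial weights, after which the sparse sub-network is trained (here, meta-trained) again.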
Related papers
- MAC: A Meta-Learning Approach for Feature Learning and Recombination [0.0]
In this paper, we invoke the width-depth duality of neural networks, wherein we increase the width of the network by adding extra computational units (ACUs).
Experimental results show that our proposed MAC method outperformed the existing ANIL algorithm for non-similar task distributions by approximately 13%.
arXiv Detail & Related papers (2022-09-20T11:09:54Z)
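
The ACU idea is architectural: instead of deepening the network, extra units widen it so that new features can be learned and recombined with existing ones. The PyTorch sketch below shows one way such widening could look; the class name `WidenedHead` and the concatenation scheme are assumptions for illustration, not the paper's exact architecture.

```python
import torch

class WidenedHead(torch.nn.Module):
    """Widen a feature head with extra computational units whose outputs are
    concatenated with the original features before classification."""
    def __init__(self, backbone_dim: int, acu_dim: int, n_classes: int):
        super().__init__()
        self.acu = torch.nn.Linear(backbone_dim, acu_dim)   # the extra width
        self.classifier = torch.nn.Linear(backbone_dim + acu_dim, n_classes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        extra = torch.relu(self.acu(features))              # new features
        return self.classifier(torch.cat([features, extra], dim=-1))
```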
- Gradient-Based Meta-Learning Using Uncertainty to Weigh Loss for Few-Shot Learning [5.691930884128995]
Model-Agnostic Meta-Learning (MAML) is one of the most successful meta-learning techniques for few-shot learning.
A new method is proposed in which the task-specific learner adaptively learns to select parameters that minimize the loss on new tasks.
Method 1 generates weights by comparing meta-loss differences to improve the accuracy when there are few classes.
Method 2 introduces the homoscedastic uncertainty of each task to weigh multiple losses based on the original gradient descent.
arXiv Detail & Related papers (2022-08-17T08:11:51Z)
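
Homoscedastic uncertainty weighting is typically implemented in the style of Kendall et al. (2018): each task gets a learnable log-variance s_i, and the combined objective is, up to constant factors, sum_i exp(-s_i) * L_i + s_i. A minimal sketch, assuming per-task scalar losses; the class name is illustrative.

```python
import torch

class UncertaintyWeightedLoss(torch.nn.Module):
    """Combine per-task losses weighted by learned homoscedastic uncertainty."""
    def __init__(self, num_tasks: int):
        super().__init__()
        # s_i = log(sigma_i^2), one learnable scalar per task
        self.log_vars = torch.nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses: torch.Tensor) -> torch.Tensor:
        # total = sum_i exp(-s_i) * L_i + s_i; the additive term keeps
        # the model from inflating all uncertainties to shrink the loss
        return (torch.exp(-self.log_vars) * task_losses + self.log_vars).sum()
```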
- Generating meta-learning tasks to evolve parametric loss for classification learning [1.1355370218310157]
In existing meta-learning approaches, learning tasks for training meta-models are usually collected from public datasets.
We propose a meta-learning approach based on randomly generated meta-learning tasks to obtain a parametric loss for classification learning based on big data.
arXiv Detail & Related papers (2021-11-20T13:07:55Z)
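
A parametric loss means the loss function itself carries trainable parameters that are optimized across the generated tasks. One common parameterization is a small MLP over predictions and targets, sketched below; the architecture and names are assumptions for illustration, not the paper's design.

```python
import torch

class ParametricLoss(torch.nn.Module):
    """A learnable classification loss: an MLP that maps (probabilities,
    one-hot targets) to a non-negative scalar."""
    def __init__(self, n_classes: int, hidden: int = 32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2 * n_classes, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
            torch.nn.Softplus(),   # keep the loss non-negative
        )

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        probs = logits.softmax(dim=-1)
        onehot = torch.nn.functional.one_hot(targets, probs.size(-1)).float()
        return self.net(torch.cat([probs, onehot], dim=-1)).mean()
```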
- Decoupled and Memory-Reinforced Networks: Towards Effective Feature Learning for One-Step Person Search [65.51181219410763]
One-step methods have been developed to handle pedestrian detection and identification sub-tasks using a single network.
There are two major challenges in the current one-step approaches.
We propose a decoupled and memory-reinforced network (DMRNet) to overcome these problems.
arXiv Detail & Related papers (2021-02-22T06:19:45Z)
- Variable-Shot Adaptation for Online Meta-Learning [123.47725004094472]
We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and greater cumulative performance, compared to standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
arXiv Detail & Related papers (2020-12-14T18:05:24Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- La-MAML: Look-ahead Meta Learning for Continual Learning [14.405620521842621]
We propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online-continual learning, aided by a small episodic memory.
La-MAML achieves performance superior to other replay-based, prior-based and meta-learning based approaches for continual learning on real-world visual classification benchmarks.
arXiv Detail & Related papers (2020-07-27T23:07:01Z)
- Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning [79.25478727351604]
We explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric.
We observe that this simple method achieves performance competitive with state-of-the-art methods on standard benchmarks.
arXiv Detail & Related papers (2020-03-09T20:06:36Z)
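
The evaluation metric being meta-learned here is cosine similarity to class centroids computed from the support set. A minimal sketch, assuming embeddings already produced by the pre-trained encoder; `tau` is the (typically learnable) temperature.

```python
import torch
import torch.nn.functional as F

def cosine_centroid_logits(support: torch.Tensor,  # [n_way, n_shot, dim]
                           query: torch.Tensor,    # [n_query, dim]
                           tau: float = 10.0) -> torch.Tensor:
    """Classify query embeddings by cosine similarity to class centroids."""
    centroids = F.normalize(support.mean(dim=1), dim=-1)  # [n_way, dim]
    query = F.normalize(query, dim=-1)
    return tau * query @ centroids.t()                    # [n_query, n_way]
```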
- Provable Meta-Learning of Linear Representations [114.656572506859]
We provide fast, sample-efficient algorithms to address the dual challenges of learning a common set of features from multiple, related tasks, and transferring this knowledge to new, unseen tasks.
We also provide information-theoretic lower bounds on the sample complexity of learning these linear features.
arXiv Detail & Related papers (2020-02-26T18:21:34Z)
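
The setting here is usually the shared linear-representation model, in which all tasks share a low-rank feature matrix and differ only in a task-specific head; the notation below is the common formulation in this literature, not necessarily the paper's exact one.

```latex
% Task t, sample i: shared features B, task-specific head w_t
y_{t,i} = \langle B\, w_t,\; x_{t,i} \rangle + \varepsilon_{t,i},
\qquad B \in \mathbb{R}^{d \times k},\quad w_t \in \mathbb{R}^{k},\quad k \ll d
```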
- Incremental Meta-Learning via Indirect Discriminant Alignment [118.61152684795178]
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
arXiv Detail & Related papers (2020-02-11T01:39:12Z)