MTL2L: A Context Aware Neural Optimiser
- URL: http://arxiv.org/abs/2007.09343v1
- Date: Sat, 18 Jul 2020 06:37:30 GMT
- Title: MTL2L: A Context Aware Neural Optimiser
- Authors: Nicholas I-Hsien Kuo, Mehrtash Harandi, Nicolas Fourrier, Christian
Walder, Gabriela Ferraro, Hanna Suominen
- Abstract summary: Multi-Task Learning to Learn (MTL2L) is a context aware neural optimiser which self-modifies its optimisation rules based on input data.
We show that MTL2L is capable of updating learners to classify on data of an unseen input-domain at the meta-testing phase.
- Score: 25.114351877091785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning to learn (L2L) trains a meta-learner to assist the learning of a
task-specific base learner. Previously, it was shown that a meta-learner could
learn the direct rules to update learner parameters; and that the learnt neural
optimiser updated learners more rapidly than handcrafted gradient-descent
methods. However, we demonstrate that previous neural optimisers were limited
to update learners on one designated dataset. In order to address input-domain
heterogeneity, we introduce Multi-Task Learning to Learn (MTL2L), a context
aware neural optimiser which self-modifies its optimisation rules based on
input data. We show that MTL2L is capable of updating learners to classify on
data of an unseen input-domain at the meta-testing phase.
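As a rough illustration of the idea in the abstract, the sketch below implements a context-aware learned optimiser: an LSTM that maps per-parameter gradients to updates, with its input projection generated by a small hypernetwork from a summary of the current input batch. This is only a minimal sketch, not the authors' MTL2L implementation; the context encoder, hypernetwork, and all layer sizes are illustrative assumptions.
```python
# Minimal sketch of a context-aware learned optimiser in the spirit of MTL2L.
# Not the authors' implementation: the context encoder, the hypernetwork, and
# all sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class ContextAwareOptimiser(nn.Module):
    """Per-parameter LSTM optimiser whose input projection is generated by a
    hypernetwork from a context vector summarising the current input batch."""
    def __init__(self, hidden_size=20, ctx_dim=32, in_features=784):
        super().__init__()
        self.ctx_encoder = nn.Sequential(                  # summarises the input domain
            nn.Linear(in_features, ctx_dim), nn.ReLU())
        self.hyper = nn.Linear(ctx_dim, 2 * hidden_size)   # generates projection w, b
        self.cell = nn.LSTMCell(hidden_size, hidden_size)
        self.head = nn.Linear(hidden_size, 1)               # per-parameter update

    def forward(self, grads, x_batch, state=None):
        # grads: (P, 1) flattened learner gradients; x_batch: (B, in_features)
        ctx = self.ctx_encoder(x_batch).mean(dim=0)          # (ctx_dim,)
        w, b = self.hyper(ctx).chunk(2)                      # context-generated rule
        h_in = grads * w.unsqueeze(0) + b.unsqueeze(0)       # (P, hidden_size)
        if state is None:
            zeros = torch.zeros(grads.size(0), self.cell.hidden_size)
            state = (zeros, zeros.clone())
        h, c = self.cell(h_in, state)
        return self.head(h), (h, c)

# One illustrative update of a small learner using the (untrained) optimiser.
learner = nn.Linear(784, 10)
optimiser = ContextAwareOptimiser()
x, y = torch.randn(16, 784), torch.randint(0, 10, (16,))
loss = nn.functional.cross_entropy(learner(x), y)
grads = torch.autograd.grad(loss, learner.parameters())
flat = torch.cat([g.reshape(-1, 1) for g in grads])
update, _ = optimiser(flat, x)
with torch.no_grad():
    offset = 0
    for p in learner.parameters():
        n = p.numel()
        p.add_(update[offset:offset + n].reshape(p.shape))
        offset += n
```
In a full setup the optimiser itself would be meta-trained across tasks; the snippet only shows the forward data flow of a self-modifying update rule.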
Related papers
- Learning to Learn without Forgetting using Attention [5.6739565497512405]
Continual learning (CL) refers to the ability to continually learn over time by accommodating new knowledge while retaining previously learned experience.
Current machine learning methods are highly prone to overwrite previously learned patterns and thus forget past experience.
Since hand-crafting effective update mechanisms is difficult, we propose meta-learning a transformer-based optimizer to enhance CL.
arXiv Detail & Related papers (2024-08-06T14:25:23Z)
- Learning to optimize by multi-gradient for multi-objective optimization [0.0]
We introduce a new automatic learning paradigm for optimizing MOO problems, and propose a multi-gradient learning to optimize (ML2O) method.
As a learning-based method, ML2O acquires knowledge of local landscapes by leveraging information from the current step.
We show that our learned optimizer outperforms hand-designed competitors on training multi-task learning (MTL) neural networks.
arXiv Detail & Related papers (2023-11-01T14:55:54Z)
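For context on the ML2O entry above: a classical hand-crafted multi-gradient rule combines per-task gradients with the min-norm weighting of MGDA, the kind of fixed rule that learned approaches such as ML2O aim to replace. The sketch below shows the two-task closed form; the shared model, heads, losses, and step size are placeholder assumptions, not anything from the paper.
```python
# Minimal sketch of a hand-crafted multi-gradient step for two objectives
# (closed-form min-norm weighting, MGDA-style). Model and losses are illustrative.
import torch
import torch.nn as nn

def min_norm_alpha(g1, g2, eps=1e-12):
    # alpha in [0, 1] minimising ||alpha * g1 + (1 - alpha) * g2||^2
    diff = g1 - g2
    alpha = torch.dot(g2 - g1, g2) / (torch.dot(diff, diff) + eps)
    return alpha.clamp(0.0, 1.0)

# Illustrative shared trunk with two task heads.
shared = nn.Linear(10, 16)
head_a, head_b = nn.Linear(16, 1), nn.Linear(16, 1)
x, y_a, y_b = torch.randn(8, 10), torch.randn(8, 1), torch.randn(8, 1)

feat = shared(x)
loss_a = nn.functional.mse_loss(head_a(feat), y_a)
loss_b = nn.functional.mse_loss(head_b(feat), y_b)

params = list(shared.parameters())
g_a = torch.cat([g.reshape(-1) for g in
                 torch.autograd.grad(loss_a, params, retain_graph=True)])
g_b = torch.cat([g.reshape(-1) for g in torch.autograd.grad(loss_b, params)])

alpha = min_norm_alpha(g_a, g_b)
combined = alpha * g_a + (1 - alpha) * g_b        # common descent direction

with torch.no_grad():                             # plain SGD step on shared params
    offset, lr = 0, 0.01
    for p in params:
        n = p.numel()
        p.sub_(lr * combined[offset:offset + n].reshape(p.shape))
        offset += n
```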
- Symbolic Learning to Optimize: Towards Interpretability and Scalability [113.23813868412954]
Recent studies on Learning to Optimize (L2O) suggest a promising path to automating and accelerating the optimization procedure for complicated tasks.
Existing L2O models parameterize optimization rules by neural networks, and learn those numerical rules via meta-training.
In this paper, we establish a holistic symbolic representation and analysis framework for L2O.
We propose a lightweight L2O model that can be meta-trained on large-scale problems and outperforms human-designed and tuned optimizers.
arXiv Detail & Related papers (2022-03-13T06:04:25Z)
- Can we learn gradients by Hamiltonian Neural Networks? [68.8204255655161]
We propose a meta-learner based on ODE neural networks that learns gradients.
We demonstrate that our method outperforms a meta-learner based on LSTM for an artificial task and the MNIST dataset with ReLU activations in the optimizee.
arXiv Detail & Related papers (2021-10-31T18:35:10Z)
- Accelerating Gradient-based Meta Learner [2.1349209400003932]
We propose various acceleration techniques to speed up meta-learning algorithms such as MAML (Model-Agnostic Meta-Learning).
We introduce a novel method of training tasks in clusters, which not only accelerates the meta learning process but also improves model accuracy performance.
arXiv Detail & Related papers (2021-10-27T14:27:36Z)
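For reference on the entry above, the sketch below shows a single meta-step of plain MAML, the baseline being accelerated. The regression task sampler, model, and step sizes are illustrative assumptions, and the proposed task-clustering technique itself is not reproduced here.
```python
# Minimal sketch of one MAML meta-step (the baseline that the entry above
# accelerates). Task sampler, model, and learning rates are illustrative.
import torch
import torch.nn as nn

def sample_task(batch=8, dim=5):
    # Hypothetical regression task: y = x @ w_task, with a random w per task.
    w = torch.randn(dim, 1)
    xs, xq = torch.randn(batch, dim), torch.randn(batch, dim)
    return (xs, xs @ w), (xq, xq @ w)          # (support set, query set)

model = nn.Linear(5, 1)
meta_opt = torch.optim.SGD(model.parameters(), lr=1e-2)
inner_lr, meta_batch = 0.05, 4

meta_opt.zero_grad()
for _ in range(meta_batch):
    (xs, ys), (xq, yq) = sample_task()
    # Inner step: adapt differentiable copies of the parameters on the support set.
    params = [p.clone() for p in model.parameters()]
    support_loss = nn.functional.mse_loss(
        nn.functional.linear(xs, params[0], params[1]), ys)
    grads = torch.autograd.grad(support_loss, params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]
    # Outer objective: query loss of the adapted parameters.
    query_loss = nn.functional.mse_loss(
        nn.functional.linear(xq, adapted[0], adapted[1]), yq)
    (query_loss / meta_batch).backward()        # accumulates into model.parameters()
meta_opt.step()
```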
- Meta-learning the Learning Trends Shared Across Tasks [123.10294801296926]
Gradient-based meta-learning algorithms excel at quick adaptation to new tasks with limited data.
Existing meta-learning approaches depend only on the current task's information during adaptation.
We propose a 'Path-aware' model-agnostic meta-learning approach.
arXiv Detail & Related papers (2020-10-19T08:06:47Z)
- Low-Resource Domain Adaptation for Compositional Task-Oriented Semantic Parsing [85.35582118010608]
Task-oriented semantic parsing is a critical component of virtual assistants.
Recent advances in deep learning have enabled several approaches to successfully parse more complex queries.
We propose a novel method that outperforms a supervised neural model under a 10-fold reduction in training data.
arXiv Detail & Related papers (2020-10-07T17:47:53Z)
- Fast Few-Shot Classification by Few-Iteration Meta-Learning [173.32497326674775]
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
arXiv Detail & Related papers (2020-10-01T15:59:31Z)
- Pre-training Text Representations as Meta Learning [113.3361289756749]
We introduce a learning algorithm which directly optimizes the model's ability to learn text representations for effective learning of downstream tasks.
We show that there is an intrinsic connection between multi-task pre-training and model-agnostic meta-learning with a sequence of meta-train steps.
arXiv Detail & Related papers (2020-04-12T09:05:47Z)