Meta Learning in the Continuous Time Limit
- URL: http://arxiv.org/abs/2006.10921v2
- Date: Wed, 8 Jul 2020 01:26:40 GMT
- Title: Meta Learning in the Continuous Time Limit
- Authors: Ruitu Xu, Lin Chen, Amin Karbasi
- Abstract summary: We establish the ordinary differential equation (ODE) that underlies the training dynamics of Model-Agnostic Meta-Learning (MAML).
We propose a new BI-MAML training algorithm that significantly reduces the computational burden associated with existing MAML training methods.
- Score: 36.23467808322093
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we establish the ordinary differential equation (ODE) that
underlies the training dynamics of Model-Agnostic Meta-Learning (MAML). Our
continuous-time limit view of the process eliminates the influence of the
manually chosen step size of gradient descent and includes the existing
gradient descent training algorithm as a special case that results from a
specific discretization. We show that the MAML ODE enjoys a linear convergence
rate to an approximate stationary point of the MAML loss function for strongly
convex task losses, even when the corresponding MAML loss is non-convex.
Moreover, through the analysis of the MAML ODE, we propose a new BI-MAML
training algorithm that significantly reduces the computational burden
associated with existing MAML training methods. To complement our theoretical
findings, we perform empirical experiments to showcase the superiority of our
proposed methods with respect to the existing work.
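A minimal sketch of the setup (generic notation, not taken verbatim from the paper): for task losses $f_i$, inner step size $\alpha$, and outer step size $\beta$,
  $F(w) = \frac{1}{n}\sum_{i=1}^{n} f_i\bigl(w - \alpha \nabla f_i(w)\bigr)$ (one-step MAML loss over $n$ tasks),
  $w_{k+1} = w_k - \beta \nabla F(w_k)$ (gradient-descent training), and
  $\dot{w}(t) = -\nabla F\bigl(w(t)\bigr)$ (the continuous-time MAML ODE).
An explicit Euler discretization of the ODE with step $\beta$ recovers the gradient-descent iteration, which is the sense in which the existing training algorithm arises from a specific discretization.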
Related papers
- Attribute Controlled Fine-tuning for Large Language Models: A Case Study on Detoxification [76.14641982122696]
We propose a constraint learning schema for fine-tuning Large Language Models (LLMs) with attribute control.
We show that our approach leads to an LLM that produces fewer inappropriate responses while achieving competitive performance on benchmarks and a toxicity detection task.
arXiv Detail & Related papers (2024-10-07T23:38:58Z)
- CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models [68.64605538559312]
In this paper, we analyze the MLLM instruction tuning from both theoretical and empirical perspectives.
Inspired by our findings, we propose a measurement to quantitatively evaluate the learning balance.
In addition, we introduce an auxiliary loss regularization method to promote updating of the generation distribution of MLLMs.
arXiv Detail & Related papers (2024-07-29T23:18:55Z)
- On Training Implicit Meta-Learning With Applications to Inductive Weighing in Consistency Regularization [0.0]
Implicit meta-learning (IML) requires computing second-order gradients, in particular the Hessian.
Various approximations of the Hessian have been proposed, but a systematic comparison of their compute cost, stability, generalization of the solution found, and estimation accuracy has been largely overlooked.
We show how training a "Confidence Network" to extract domain-specific features can learn to up-weight useful images and down-weight out-of-distribution samples.
arXiv Detail & Related papers (2023-10-28T15:50:03Z)
- Provable Generalization of Overparameterized Meta-learning Trained with SGD [62.892930625034374]
We study the generalization of a widely used meta-learning approach, Model-Agnostic Meta-Learning (MAML).
We provide both upper and lower bounds for the excess risk of MAML, which captures how SGD dynamics affect these generalization bounds.
Our theoretical findings are further validated by experiments.
arXiv Detail & Related papers (2022-06-18T07:22:57Z)
- Adaptive neighborhood Metric learning [184.95321334661898]
We propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML).
ANML can be used to learn both linear and deep embeddings.
The log-exp mean function proposed in our method gives a new perspective from which to review deep metric learning methods.
arXiv Detail & Related papers (2022-01-20T17:26:37Z)
- MAML is a Noisy Contrastive Learner [72.04430033118426]
Model-agnostic meta-learning (MAML) is one of the most popular and widely-adopted meta-learning algorithms nowadays.
We provide a new perspective on the working mechanism of MAML and discover that MAML is analogous to a meta-learner using a supervised contrastive objective function.
We propose a simple but effective technique, the zeroing trick, to alleviate the interference introduced by the noise in this objective.
arXiv Detail & Related papers (2021-06-29T12:52:26Z)
- B-SMALL: A Bayesian Neural Network approach to Sparse Model-Agnostic Meta-Learning [2.9189409618561966]
We propose a Bayesian neural network based MAML algorithm, which we refer to as the B-SMALL algorithm.
We demonstrate the performance of B-MAML using classification and regression tasks, and highlight that training a sparsifying BNN using MAML indeed improves the parameter footprint of the model.
arXiv Detail & Related papers (2021-01-01T09:19:48Z)
- How Does the Task Landscape Affect MAML Performance? [42.27488241647739]
We show that Model-Agnostic Meta-Learning (MAML) is more difficult to optimize than non-adaptive learning (NAL).
We analytically address this issue in a linear regression setting consisting of a mixture of easy and hard tasks.
We also give numerical and analytical results suggesting that these insights apply to two-layer neural networks.
arXiv Detail & Related papers (2020-10-27T23:54:44Z)
- Theoretical Convergence of Multi-Step Model-Agnostic Meta-Learning [63.64636047748605]
We develop a new theoretical framework that provides a convergence guarantee for the general multi-step MAML algorithm.
In particular, our results suggest that the inner-stage step size needs to be chosen inversely proportional to the number $N$ of inner-stage steps in order for $N$-step MAML to have guaranteed convergence.
arXiv Detail & Related papers (2020-02-18T19:17:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.