BOML: A Modularized Bilevel Optimization Library in Python for Meta Learning
- URL: http://arxiv.org/abs/2009.13357v1
- Date: Mon, 28 Sep 2020 14:21:55 GMT
- Title: BOML: A Modularized Bilevel Optimization Library in Python for Meta Learning
- Authors: Yaohua Liu, Risheng Liu
- Abstract summary: BOML is a modularized optimization library that unifies several meta-learning algorithms into a common bilevel optimization framework.
It provides a hierarchical optimization pipeline together with a variety of iteration modules, which can be used to solve the mainstream categories of meta-learning methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-learning (a.k.a. learning to learn) has recently emerged as a promising
paradigm for a variety of applications. There are now many meta-learning
methods, each focusing on different modeling aspects of base and meta learners,
but all can be (re)formulated as specific bilevel optimization problems. This
work presents BOML, a modularized optimization library that unifies several
meta-learning algorithms into a common bilevel optimization framework. It
provides a hierarchical optimization pipeline together with a variety of
iteration modules, which can be used to solve the mainstream categories of
meta-learning methods, such as meta-feature-based and meta-initialization-based
formulations. The library is written in Python and is available at
https://github.com/dut-media-lab/BOML.
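The bilevel structure the library unifies pairs an inner problem (adapt task-specific parameters on a support set) with an outer problem (update meta-parameters from query-set performance). Below is a minimal sketch of the meta-initialization (MAML-style) instance of this pattern in PyTorch; it illustrates the hierarchical pipeline BOML modularizes but is not BOML's actual API, and all names are illustrative.

```python
import torch

def forward(w, b, x):
    # Minimal linear base learner: logits = x @ w + b.
    return x @ w + b

def meta_step(w, b, tasks, inner_lr=0.1, outer_lr=0.01, inner_steps=3):
    # One outer (meta) iteration over a batch of tasks.
    loss_fn = torch.nn.functional.cross_entropy
    grad_w, grad_b = torch.zeros_like(w), torch.zeros_like(b)
    for (xs, ys), (xq, yq) in tasks:
        w_t, b_t = w, b
        for _ in range(inner_steps):  # inner problem: adapt to the support set
            loss = loss_fn(forward(w_t, b_t, xs), ys)
            gw, gb = torch.autograd.grad(loss, (w_t, b_t), create_graph=True)
            w_t, b_t = w_t - inner_lr * gw, b_t - inner_lr * gb
        # Outer problem: evaluate the adapted parameters on the query set and
        # differentiate through the inner updates back to the initialization.
        outer_loss = loss_fn(forward(w_t, b_t, xq), yq)
        gw, gb = torch.autograd.grad(outer_loss, (w, b))
        grad_w, grad_b = grad_w + gw, grad_b + gb
    with torch.no_grad():  # update the shared meta-initialization
        w -= outer_lr * grad_w / len(tasks)
        b -= outer_lr * grad_b / len(tasks)
    return w, b
```

Here `w` and `b` are created with `requires_grad=True`, and each element of `tasks` is a `((x_support, y_support), (x_query, y_query))` pair. Meta-feature-based formulations instead keep shared feature layers at the outer level and solve only a lightweight head in the inner loop.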
Related papers
- Fast Adaptation with Kernel and Gradient based Meta Learning (2024-11-01)
We propose two algorithms to improve both the inner and outer loops of Model-Agnostic Meta-Learning (MAML).
Our first algorithm redefines the optimization problem in the function space so the model is updated using closed-form solutions.
In the outer loop, the second algorithm adjusts the learning of the meta-learner by assigning weights to the losses from each task of the inner loop.
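One standard way to obtain a closed-form inner-loop update is to solve a ridge-regression head analytically on top of learned features; the snippet below illustrates that general idea as a sketch and is not the paper's exact function-space algorithm (`ridge_head` and its arguments are illustrative).

```python
import torch

def ridge_head(features, targets, lam=1.0):
    # Closed-form inner-loop solution of a ridge-regression head:
    # W* = (F^T F + lam * I)^{-1} F^T Y. Solving analytically replaces
    # unrolled gradient steps, and the solve is differentiable w.r.t.
    # the features, so the outer loop can still backpropagate through it.
    f = features                                  # (n, d) support embeddings
    y = targets                                   # (n, c) one-hot support labels
    a = f.T @ f + lam * torch.eye(f.shape[1])
    return torch.linalg.solve(a, f.T @ y)         # (d, c) head weights
```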
- Learning to Learn from APIs: Black-Box Data-Free Meta-Learning (2023-05-28)
Data-free meta-learning (DFML) aims to enable efficient learning of new tasks by meta-learning from a collection of pre-trained models without access to the training data.
Existing DFML work can only meta-learn from (i) white-box and (ii) small-scale pre-trained models.
We propose a Bi-level Data-free Meta Knowledge Distillation (BiDf-MKD) framework to transfer more general meta knowledge from a collection of black-box APIs to one single model.
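In spirit, black-box distillation can only train the student on the API's outputs for synthetic probe inputs, since neither the teacher's weights nor its gradients are available. A minimal sketch of that query-and-match loop follows; the `api_predict` callable and plain Gaussian probes are assumptions for illustration, and BiDf-MKD's actual data synthesis and meta-objective are more involved.

```python
import torch

def blackbox_distill_step(api_predict, student, optimizer,
                          batch=32, input_shape=(3, 32, 32)):
    # One data-free distillation step: probe the black-box API with
    # synthetic inputs and fit the student to its output probabilities.
    x = torch.randn(batch, *input_shape)      # synthetic probe inputs
    with torch.no_grad():
        teacher_probs = api_predict(x)        # API returns predictions only
    log_probs = torch.log_softmax(student(x), dim=1)
    loss = torch.nn.functional.kl_div(log_probs, teacher_probs,
                                      reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```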
- Memory-Based Optimization Methods for Model-Agnostic Meta-Learning and Personalized Federated Learning (2021-06-09)
Model-agnostic meta-learning (MAML) has become a popular research area.
Existing MAML algorithms rely on the 'episode' idea by sampling a few tasks and data points to update the meta-model at each iteration.
This paper proposes memory-based algorithms for MAML that converge with vanishing error.
- MetaDelta: A Meta-Learning System for Few-shot Image Classification (2021-02-22)
We propose MetaDelta, a novel practical meta-learning system for few-shot image classification.
Each meta-learner in MetaDelta is composed of a unique pretrained encoder fine-tuned by batch training and a parameter-free decoder used for prediction.
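A "parameter-free decoder" can be as simple as a nearest-centroid rule over the encoder's embeddings, so prediction needs no trainable weights. The sketch below shows one such decoder; it is a plausible reading of the summary, not necessarily MetaDelta's exact rule.

```python
import torch

def centroid_decode(support_feats, support_labels, query_feats):
    # Parameter-free prediction: one centroid per class, nearest wins.
    classes = support_labels.unique()
    centroids = torch.stack([support_feats[support_labels == c].mean(dim=0)
                             for c in classes])
    dists = torch.cdist(query_feats, centroids)   # (n_query, n_classes)
    return classes[dists.argmin(dim=1)]
```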
- A Nested Bi-level Optimization Framework for Robust Few Shot Learning (2020-11-13)
NestedMAML learns to assign weights to training tasks or instances.
Experiments on synthetic and real-world datasets demonstrate that NestedMAML efficiently mitigates the effects of "unwanted" tasks or instances.
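The reweighting idea can be sketched as learnable logits, one per task (or instance), passed through a softmax so that contributions from "unwanted" tasks are driven toward zero during meta-training. This is a generic illustration, not NestedMAML's full nested bilevel scheme.

```python
import torch

def weighted_meta_loss(task_losses, weight_logits):
    # task_losses: list of scalar query losses, one per sampled task.
    # weight_logits: learnable tensor of shape (num_tasks,), updated at
    # the outer level; softmax keeps the weights positive and normalized.
    weights = torch.softmax(weight_logits, dim=0)
    return (weights * torch.stack(task_losses)).sum()
```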
- Fast Few-Shot Classification by Few-Iteration Meta-Learning (2020-10-01)
We introduce a fast optimization-based meta-learning method for few-shot classification.
Our strategy enables important aspects of the base learner objective to be learned during meta-training.
We perform a comprehensive experimental analysis, demonstrating the speed and effectiveness of our approach.
- MetaMix: Improved Meta-Learning with Interpolation-based Consistency Regularization (2020-09-29)
We propose an approach called MetaMix that regularizes backbone models by generating virtual feature-target pairs within each episode.
It can be integrated with any MAML-based algorithm and learns decision boundaries that generalize better to new tasks.
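Generating virtual pairs by convex interpolation is the classic mixup recipe, sketched below as one plausible instantiation; MetaMix's exact sampling and where in the network the interpolation is applied may differ.

```python
import torch

def metamix_pairs(features, targets_onehot, alpha=0.5):
    # Virtual feature-target pairs via convex interpolation within an episode.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(features.size(0))       # random pairing partner
    mixed_x = lam * features + (1 - lam) * features[perm]
    mixed_y = lam * targets_onehot + (1 - lam) * targets_onehot[perm]
    return mixed_x, mixed_y
```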
- Incremental Meta-Learning via Indirect Discriminant Alignment (2020-02-11)
We develop a notion of incremental learning during the meta-training phase of meta-learning.
Our approach performs favorably at test time as compared to training a model with the full meta-training set.
- pymoo: Multi-objective Optimization in Python (2020-01-22)
We have developed pymoo, a multi-objective optimization framework in Python.
We provide a guide to getting started with our framework by demonstrating the implementation of an exemplary constrained multi-objective optimization scenario.
The implementations in our framework are customizable and algorithms can be modified/extended by supplying custom operators.
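Getting started with pymoo takes only a few lines; the example below runs NSGA-II on a built-in bi-objective benchmark. Module paths assume a recent release (pymoo >= 0.6); earlier versions exposed `get_problem` via `pymoo.factory` instead.

```python
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize
from pymoo.problems import get_problem

problem = get_problem("zdt1")        # built-in bi-objective test problem
algorithm = NSGA2(pop_size=100)      # standard multi-objective GA
res = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)
print(res.F[:5])                     # a few points on the approximated Pareto front
```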
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.