Submodular Meta-Learning
- URL: http://arxiv.org/abs/2007.05852v2
- Date: Sun, 10 Jan 2021 00:39:23 GMT
- Title: Submodular Meta-Learning
- Authors: Arman Adibi, Aryan Mokhtari, Hamed Hassani
- Abstract summary: We introduce a discrete variant of the meta-learning framework to improve performance on future tasks.
Our approach aims at using prior data, i.e., previously visited tasks, to train a proper initial solution set.
We show that our framework leads to a significant reduction in computational complexity in solving the new tasks while incurring a small performance loss.
- Score: 43.15332631500541
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a discrete variant of the meta-learning
framework. Meta-learning aims at exploiting prior experience and data to
improve performance on future tasks. By now, there exist numerous formulations
for meta-learning in the continuous domain. Notably, the Model-Agnostic
Meta-Learning (MAML) formulation views each task as a continuous optimization
problem and based on prior data learns a suitable initialization that can be
adapted to new, unseen tasks after a few simple gradient updates. Motivated by
this terminology, we propose a novel meta-learning framework in the discrete
domain where each task is equivalent to maximizing a set function under a
cardinality constraint. Our approach aims at using prior data, i.e., previously
visited tasks, to train a proper initial solution set that can be quickly
adapted to a new task at a relatively low computational cost. This approach
leads to (i) a personalized solution for each individual task, and (ii)
significantly reduced computational cost at test time compared to the case
where the solution is fully optimized once the new task is revealed. The
training procedure is performed by solving a challenging discrete optimization
problem for which we present deterministic and randomized algorithms. In the
case where the tasks are monotone and submodular, we show strong theoretical
guarantees for our proposed methods even though the training objective may not
be submodular. We also demonstrate the effectiveness of our framework on two
real-world problem instances where we observe that our methods lead to a
significant reduction in computational complexity in solving the new tasks
while incurring a small performance loss compared to when the tasks are fully
optimized.
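The setup the abstract describes — train an initial solution set offline, then adapt it to a new task with only a few extra greedy steps — can be sketched on a toy coverage problem. Everything below is illustrative, not the paper's actual algorithms: the coverage objective, the average-over-tasks training surrogate, and all names are assumptions for the sketch.

```python
import random

def greedy_max(f, ground, budget, start=frozenset()):
    """Standard greedy for monotone set-function maximization under a
    cardinality constraint, optionally warm-started from `start`."""
    S = set(start)
    while len(S) < budget:
        # Add the element with the largest marginal gain f(S + e) - f(S).
        S.add(max((e for e in ground if e not in S),
                  key=lambda e: f(S | {e}) - f(S)))
    return S

random.seed(0)
ground = list(range(40))
n_clients, k, l = 60, 8, 2  # solution size k, per-task adaptation budget l

def make_task():
    """A random coverage task: each element covers a few clients, and
    f(S) counts the clients covered by S (monotone and submodular)."""
    cover = {e: frozenset(random.sample(range(n_clients), 5)) for e in ground}
    return lambda S: len(set().union(*(cover[e] for e in S)))

train_tasks = [make_task() for _ in range(10)]
new_task = make_task()

# "Meta-training" (a simplified surrogate for the paper's training problem):
# greedily pick an initial set of size k - l that does well on the average
# of the previously visited tasks.
avg = lambda S: sum(f(S) for f in train_tasks) / len(train_tasks)
init = greedy_max(avg, ground, k - l)

# Test time: adapt the initial set with only l extra greedy steps, versus
# optimizing the new task from scratch with a full k-step greedy run.
adapted = greedy_max(new_task, ground, k, start=init)
scratch = greedy_max(new_task, ground, k)

print("adapted value:", new_task(adapted), "| from-scratch value:", new_task(scratch))
```

The point of the sketch is the cost asymmetry: at test time the warm-started run performs only `l` marginal-gain rounds instead of `k`, typically at a small loss in objective value relative to full optimization.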
Related papers
- Algorithm Design for Online Meta-Learning with Task Boundary Detection [63.284263611646]
We propose a novel algorithm for task-agnostic online meta-learning in non-stationary environments.
We first propose two simple but effective detection mechanisms of task switches and distribution shift.
We show that a sublinear task-averaged regret can be achieved for our algorithm under mild conditions.
arXiv Detail & Related papers (2023-02-02T04:02:49Z)
- Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization [8.836422771217084]
Multi-objective reinforcement learning (MORL) algorithms tackle sequential decision problems where agents may have different preferences.
We introduce a novel algorithm that uses Generalized Policy Improvement (GPI) to define principled, formally-derived prioritization schemes.
We empirically show that our method outperforms state-of-the-art MORL algorithms in challenging multi-objective tasks.
arXiv Detail & Related papers (2023-01-18T20:54:40Z)
- Task Weighting in Meta-learning with Trajectory Optimisation [37.32107678838193]
We introduce a new principled and fully-automated task-weighting algorithm for meta-learning methods.
By treating the weights of the tasks within a mini-batch as an action, we cast the task-weighting meta-learning problem as a trajectory optimisation problem.
We empirically demonstrate that the proposed approach outperforms common hand-engineered weighting methods on two few-shot learning benchmarks.
arXiv Detail & Related papers (2023-01-04T01:36:09Z)
- On the Effectiveness of Fine-tuning Versus Meta-reinforcement Learning [71.55412580325743]
We show that multi-task pretraining with fine-tuning on new tasks performs as well as, or better than, meta-pretraining with meta test-time adaptation.
This is encouraging for future research, as multi-task pretraining tends to be simpler and computationally cheaper than meta-RL.
arXiv Detail & Related papers (2022-06-07T13:24:00Z)
- Meta-learning with an Adaptive Task Scheduler [93.63502984214918]
Existing meta-learning algorithms randomly sample meta-training tasks with a uniform probability.
Given a limited number of meta-training tasks, some sampled tasks are likely to be detrimental, e.g., noisy or imbalanced.
We propose an adaptive task scheduler (ATS) for the meta-training process.
arXiv Detail & Related papers (2021-10-26T22:16:35Z)
- Variable-Shot Adaptation for Online Meta-Learning [123.47725004094472]
We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and greater cumulative performance, compared to standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
arXiv Detail & Related papers (2020-12-14T18:05:24Z)
- Modeling and Optimization Trade-off in Meta-learning [23.381986209234164]
We introduce and rigorously define the trade-off between accurate modeling and ease in meta-learning.
Taking MAML as a representative meta-learning algorithm, we theoretically characterize the trade-off for general non-convex risk functions as well as for linear regression.
We also empirically solve the trade-off for meta-reinforcement learning benchmarks.
arXiv Detail & Related papers (2020-10-24T15:32:08Z) - Model-based Adversarial Meta-Reinforcement Learning [38.28304764312512]
We propose Model-based Adversarial Meta-Reinforcement Learning (AdMRL).
AdMRL aims to minimize the worst-case sub-optimality gap across all tasks in a family of tasks.
We evaluate our approach on several continuous control benchmarks and demonstrate its efficacy in the worst-case performance over all tasks.
arXiv Detail & Related papers (2020-06-16T02:21:49Z) - Meta Cyclical Annealing Schedule: A Simple Approach to Avoiding
Meta-Amortization Error [50.83356836818667]
We develop a novel meta-regularization objective using a cyclical annealing schedule and a maximum mean discrepancy (MMD) criterion.
The experimental results show that our approach substantially outperforms standard meta-learning algorithms.
arXiv Detail & Related papers (2020-03-04T04:43:16Z) - Structured Prediction for Conditional Meta-Learning [44.30857707980074]
We propose a new perspective on conditional meta-learning via structured prediction.
We derive task-adaptive structured meta-learning (TASML), a principled framework that yields task-specific objective functions.
Empirically, we show that TASML improves the performance of existing meta-learning models, and outperforms the state-of-the-art on benchmark datasets.
arXiv Detail & Related papers (2020-02-20T15:24:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.