The Sample Complexity of Meta Sparse Regression
- URL: http://arxiv.org/abs/2002.09587v2
- Date: Sun, 21 Jun 2020 18:35:21 GMT
- Title: The Sample Complexity of Meta Sparse Regression
- Authors: Zhanyu Wang and Jean Honorio
- Abstract summary: This paper addresses the meta-learning problem in sparse linear regression with infinite tasks.
We show that T \in O((k \log(p))/l) tasks are sufficient in order to recover the common support of all tasks.
- Score: 38.092179552223364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the meta-learning problem in sparse linear regression
with infinite tasks. We assume that the learner can access several similar
tasks. The goal of the learner is to transfer knowledge from the prior tasks to
a similar but novel task. For p parameters, support set size k, and l samples
per task, we show that T \in O((k \log(p))/l) tasks are sufficient in order to
recover the common support of all tasks. With the recovered support, we can
greatly reduce the sample complexity for estimating the parameter of the novel
task, i.e., l \in O(1) with respect to T and p. We
also prove that our rates are minimax optimal. A key difference between
meta-learning and classical multi-task learning is that meta-learning
focuses only on the recovery of the parameters of the novel task, while
multi-task learning estimates the parameters of all tasks, which requires l to
grow with T. Instead, our efficient meta-learning estimator allows for l to be
constant with respect to T (i.e., few-shot learning).
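The abstract describes a two-stage procedure: pool the l samples from each of the T prior tasks to recover the shared support, then estimate the novel task's parameter restricted to that support. The sketch below illustrates this idea; the pooled Lasso, the thresholding constant, and all problem sizes are illustrative assumptions rather than the paper's actual estimator or tuning.

```python
# Hypothetical two-stage sketch: (1) recover the common support from T prior
# tasks via a pooled l1-penalized regression, (2) refit the novel task by
# ordinary least squares on the recovered coordinates only.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
p, k, l, T = 200, 5, 10, 150                    # parameters, support size, samples per task, tasks
support = rng.choice(p, size=k, replace=False)  # common support shared by all tasks

def sample_task(n):
    """Draw one linear-regression task whose parameter lives on the common support."""
    w = np.zeros(p)
    w[support] = rng.normal(1.0, 0.1, size=k)   # task-specific weights, shared support
    X = rng.normal(size=(n, p))
    y = X @ w + 0.1 * rng.normal(size=n)
    return X, y

# Stage 1: pool the T*l prior-task samples and keep the nonzero Lasso coordinates.
Xs, ys = zip(*(sample_task(l) for _ in range(T)))
lasso = Lasso(alpha=0.1).fit(np.vstack(Xs), np.concatenate(ys))
support_hat = np.flatnonzero(np.abs(lasso.coef_) > 1e-3)

# Stage 2: a handful of novel-task samples suffices once the support is known.
X_new, y_new = sample_task(2 * k)               # sample size is O(1) w.r.t. T and p
w_hat = np.zeros(p)
w_hat[support_hat] = LinearRegression().fit(X_new[:, support_hat], y_new).coef_
print(sorted(support_hat), sorted(support))     # compare recovered vs. true support
```

Because the second stage fits only the k recovered coordinates, the number of novel-task samples does not need to grow with T or p, which is the few-shot regime the abstract refers to.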
Related papers
- Mitigating Task Interference in Multi-Task Learning via Explicit Task Routing with Non-Learnable Primitives [19.90788777476128]
Multi-task learning (MTL) seeks to learn a single model to accomplish multiple tasks by leveraging shared information among the tasks.
Existing MTL models have been known to suffer from negative interference among tasks.
We propose ETR-NLP to mitigate task interference through a synergistic combination of non-learnable primitives and explicit task routing.
arXiv Detail & Related papers (2023-08-03T22:34:16Z)
- Multitask Learning with No Regret: from Improved Confidence Bounds to Active Learning [79.07658065326592]
Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning.
We provide novel multitask confidence intervals in the challenging setting when neither the similarity between tasks nor the tasks' features are available to the learner.
We propose a novel online learning algorithm that achieves such improved regret without knowing this parameter in advance.
arXiv Detail & Related papers (2023-08-03T13:08:09Z)
- Task Difficulty Aware Parameter Allocation & Regularization for Lifelong Learning [20.177260510548535]
We propose Parameter Allocation & Regularization (PAR), which adaptively selects an appropriate strategy for each task, either parameter allocation or regularization, based on its learning difficulty.
Our method is scalable and significantly reduces the model's redundancy while improving the model's performance.
arXiv Detail & Related papers (2023-04-11T15:38:21Z)
- AdaTask: A Task-aware Adaptive Learning Rate Approach to Multi-task Learning [19.201899503691266]
We measure the task dominance degree of a parameter by the total updates of each task on this parameter.
We propose a Task-wise Adaptive learning rate approach, AdaTask, to separate the accumulative gradients, and hence the learning rate, of each task.
Experiments on computer vision and recommender system MTL datasets demonstrate that AdaTask significantly improves the performance of dominated tasks.
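A toy sketch of what a task-wise adaptive learning rate can look like is given below: each task keeps its own accumulated squared gradients, so a task with large updates cannot shrink the effective step size of the others. The Adagrad-style accumulator and the two synthetic tasks are illustrative assumptions; AdaTask's exact update rule may differ.

```python
# Illustrative task-wise adaptive learning rate: one squared-gradient
# accumulator per task, so each task's step size adapts independently.
import numpy as np

class TaskwiseAdagrad:
    def __init__(self, dim, num_tasks, lr=0.1, eps=1e-8):
        self.lr, self.eps = lr, eps
        self.accum = np.zeros((num_tasks, dim))     # one accumulator per task

    def step(self, params, task_grads):
        """task_grads: per-task gradients w.r.t. the shared parameter vector."""
        update = np.zeros_like(params)
        for t, g in enumerate(task_grads):
            self.accum[t] += g ** 2                 # accumulate each task separately
            update += g / (np.sqrt(self.accum[t]) + self.eps)
        return params - self.lr * update

# Two synthetic tasks sharing one parameter vector; task 0 has much larger gradients.
rng = np.random.default_rng(1)
params = rng.normal(size=4)
opt = TaskwiseAdagrad(dim=4, num_tasks=2)
for _ in range(100):
    grads = [2.0 * params, 0.01 * (params - 1.0)]
    params = opt.step(params, grads)
print(params)
```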
arXiv Detail & Related papers (2022-11-28T04:24:38Z)
- Analysis and Prediction of NLP Models Via Task Embeddings [25.311690222754454]
We propose MetaEval, a collection of $101$ NLP tasks.
We fit a single transformer to all MetaEval tasks jointly while conditioning it on learned embeddings.
The resulting task embeddings enable a novel analysis of the space of tasks.
arXiv Detail & Related papers (2021-12-10T16:23:24Z)
- New Tight Relaxations of Rank Minimization for Multi-Task Learning [161.23314844751556]
We propose two novel multi-task learning formulations based on two regularization terms.
We show that our methods can correctly recover the low-rank structure shared across tasks, and outperform related multi-task learning methods.
arXiv Detail & Related papers (2021-12-09T07:29:57Z)
- Instance-Level Task Parameters: A Robust Multi-task Weighting Framework [17.639472693362926]
Recent works have shown that deep neural networks benefit from multi-task learning by learning a shared representation across several related tasks.
We let the training process dictate the optimal weighting of tasks for every instance in the dataset.
We conduct extensive experiments on SURREAL and CityScapes datasets, for human shape and pose estimation, depth estimation and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-11T02:35:42Z)
- Sample Efficient Linear Meta-Learning by Alternating Minimization [74.40553081646995]
We study a simple alternating minimization method (MLLAM) which alternately learns the low-dimensional subspace and the regressors.
We show that for a constant subspace dimension MLLAM obtains nearly-optimal estimation error, despite requiring only $\Omega(\log d)$ samples per task.
We propose a novel task subset selection scheme that ensures the same strong statistical guarantee as MLLAM.
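To make the alternating scheme concrete, the sketch below alternates between solving a small least-squares problem per task for the regressors and refitting a shared low-dimensional subspace across all tasks. The objective, the QR re-orthonormalization step, and all dimensions are illustrative assumptions rather than MLLAM's exact algorithm.

```python
# Illustrative alternating minimization for linear meta-learning:
# y_t ~= X_t @ B @ w_t with a shared subspace B (p x r) and per-task w_t.
import numpy as np

rng = np.random.default_rng(2)
p, r, l, T = 30, 2, 8, 60                        # ambient dim, subspace dim, samples per task, tasks
B_true = np.linalg.qr(rng.normal(size=(p, r)))[0]
tasks = []
for _ in range(T):
    w = rng.normal(size=r)
    X = rng.normal(size=(l, p))
    tasks.append((X, X @ B_true @ w + 0.05 * rng.normal(size=l)))

B = np.linalg.qr(rng.normal(size=(p, r)))[0]     # random orthonormal start
for _ in range(50):
    # Step 1: subspace fixed -> each task's regressor is an r-dimensional least squares.
    ws = [np.linalg.lstsq(X @ B, y, rcond=None)[0] for X, y in tasks]
    # Step 2: regressors fixed -> refit the subspace jointly, using
    # X @ B @ w = kron(w^T, X) @ vec(B) with column-major vec.
    A = np.vstack([np.kron(w.reshape(1, -1), X) for (X, _), w in zip(tasks, ws)])
    b = np.concatenate([y for _, y in tasks])
    B = np.linalg.lstsq(A, b, rcond=None)[0].reshape(p, r, order="F")
    B = np.linalg.qr(B)[0]                       # re-orthonormalize for stability

# Projection distance between estimated and true subspaces (rotation-invariant).
print(np.linalg.norm(B @ B.T - B_true @ B_true.T))
```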
arXiv Detail & Related papers (2021-05-18T06:46:48Z)
- Variable-Shot Adaptation for Online Meta-Learning [123.47725004094472]
We study the problem of learning new tasks from a small, fixed number of examples, by meta-learning across static data from a set of previous tasks.
We find that meta-learning solves the full task set with fewer overall labels and greater cumulative performance, compared to standard supervised methods.
These results suggest that meta-learning is an important ingredient for building learning systems that continuously learn and improve over a sequence of problems.
arXiv Detail & Related papers (2020-12-14T18:05:24Z)
- Meta Reinforcement Learning with Autonomous Inference of Subtask Dependencies [57.27944046925876]
We propose and address a novel few-shot RL problem, where a task is characterized by a subtask graph.
Instead of directly learning a meta-policy, we develop a Meta-learner with Subtask Graph Inference.
Our experiment results on two grid-world domains and StarCraft II environments show that the proposed method is able to accurately infer the latent task parameter.
arXiv Detail & Related papers (2020-01-01T17:34:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.