Representation Learning Beyond Linear Prediction Functions
- URL: http://arxiv.org/abs/2105.14989v1
- Date: Mon, 31 May 2021 14:21:52 GMT
- Title: Representation Learning Beyond Linear Prediction Functions
- Authors: Ziping Xu and Ambuj Tewari
- Abstract summary: We show that diversity can be achieved when source tasks and the target task use different prediction function spaces beyond linear functions.
For a general function class, we find that eluder dimension gives a lower bound on the number of tasks required for diversity.
- Score: 33.94130046391917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent papers on the theory of representation learning have shown the
importance of a quantity called diversity when generalizing from a set of
source tasks to a target task. Most of these papers assume that the function
mapping shared representations to predictions is linear, for both source and
target tasks. In practice, researchers in deep learning use different numbers
of extra layers following the pretrained model based on the difficulty of the
new task. This motivates us to ask whether diversity can be achieved when
source tasks and the target task use different prediction function spaces
beyond linear functions. We show that diversity holds even if the target task
uses a neural network with multiple layers, as long as source tasks use linear
functions. If source tasks use nonlinear prediction functions, we provide a
negative result by showing that depth-1 neural networks with ReLU activation
functions need exponentially many source tasks to achieve diversity. For a
general function class, we find that eluder dimension gives a lower bound on
the number of tasks required for diversity. Our theoretical results imply that
simpler tasks generalize better. Though our theoretical results are shown for
the global minimizer of empirical risks, their qualitative predictions still
hold true for gradient-based optimization algorithms as verified by our
simulations on deep neural networks.
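The setup studied in the abstract can be pictured concretely: source tasks share one representation network and each attach a linear head, while the target task attaches a deeper (multi-layer) head on top of the learned representation. The following is a minimal PyTorch sketch of that arrangement; the layer sizes, task count, synthetic data, and training loop are illustrative assumptions and not the paper's experimental configuration.

# Minimal sketch (assumed shapes/hyperparameters, not the paper's setup):
# source tasks use linear heads on a shared representation; the target task
# then trains a small multi-layer head on top of the frozen representation.
import torch
import torch.nn as nn

d_in, d_rep, n_source = 20, 8, 10

shared = nn.Sequential(nn.Linear(d_in, d_rep), nn.ReLU())                        # shared representation
source_heads = nn.ModuleList([nn.Linear(d_rep, 1) for _ in range(n_source)])     # linear source heads
target_head = nn.Sequential(nn.Linear(d_rep, 16), nn.ReLU(), nn.Linear(16, 1))   # deeper target head

# Multi-task pretraining on the source tasks (toy synthetic data for illustration).
opt = torch.optim.Adam(list(shared.parameters()) + list(source_heads.parameters()), lr=1e-2)
for step in range(200):
    loss = 0.0
    for t, head in enumerate(source_heads):
        x = torch.randn(32, d_in)
        y = x[:, :1] * (t + 1) * 0.1                 # toy task-specific linear targets
        loss = loss + nn.functional.mse_loss(head(shared(x)), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Transfer: freeze the shared representation, train only the deeper target head.
for p in shared.parameters():
    p.requires_grad_(False)
opt_t = torch.optim.Adam(target_head.parameters(), lr=1e-2)
for step in range(200):
    x = torch.randn(32, d_in)
    y = torch.tanh(x[:, :1])                         # toy nonlinear target task
    loss = nn.functional.mse_loss(target_head(shared(x)), y)
    opt_t.zero_grad(); loss.backward(); opt_t.step()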
Related papers
- Gradient-based inference of abstract task representations for generalization in neural networks [5.794537047184604]
We show that gradients backpropagated through a neural network to a task representation layer are an efficient way to infer current task demands.
We demonstrate that gradient-based inference provides higher learning efficiency and generalization to novel tasks and limits.
arXiv Detail & Related papers (2024-07-24T15:28:08Z) - Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z) - Diffused Redundancy in Pre-trained Representations [98.55546694886819]
We take a closer look at how features are encoded in pre-trained representations.
We find that learned representations in a given layer exhibit a degree of diffuse redundancy.
Our findings shed light on the nature of representations learned by pre-trained deep neural networks.
arXiv Detail & Related papers (2023-05-31T21:00:50Z) - Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z) - Identification of Negative Transfers in Multitask Learning Using Surrogate Models [29.882265735630046]
Multitask learning is widely used to train a low-resource target task by augmenting it with multiple related source tasks.
A critical problem in multitask learning is identifying subsets of source tasks that would benefit the target task.
We introduce an efficient procedure to address this problem via surrogate modeling.
arXiv Detail & Related papers (2023-03-25T23:16:11Z) - Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain specific solutions to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z) - Deep transfer learning for partial differential equations under conditional shift with DeepONet [0.0]
We propose a novel TL framework for task-specific learning under conditional shift with a deep operator network (DeepONet).
Inspired by the conditional embedding operator theory, we measure the statistical distance between the source domain and the target feature domain.
We show that the proposed TL framework enables fast and efficient multi-task operator learning, despite significant differences between the source and target domains.
arXiv Detail & Related papers (2022-04-20T23:23:38Z) - Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on resource task sampling by leveraging the techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
arXiv Detail & Related papers (2022-02-02T08:23:24Z) - Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization in which data collected across different domains help improving the learning performance at each other task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all content) and is not responsible for any consequences arising from its use.