An Information-Theoretic Approach to Transferability in Task Transfer Learning
- URL: http://arxiv.org/abs/2212.10082v1
- Date: Tue, 20 Dec 2022 08:47:17 GMT
- Title: An Information-Theoretic Approach to Transferability in Task Transfer Learning
- Authors: Yajie Bao, Yang Li, Shao-Lun Huang, Lin Zhang, Lizhong Zheng, Amir Zamir, Leonidas Guibas
- Abstract summary: Task transfer learning is a popular technique in image processing applications that uses pre-trained models to reduce the supervision cost of related tasks.
We present a novel metric, H-score, that estimates the performance of transferred representations from one task to another in classification problems.
- Score: 16.05523977032659
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Task transfer learning is a popular technique in image processing
applications that uses pre-trained models to reduce the supervision cost of
related tasks. An important question is how to determine task transferability, i.e., given a common input domain, to what extent representations learned from a source task can help in learning a target task. Typically,
transferability is either measured experimentally or inferred through task
relatedness, which is often defined without a clear operational meaning. In
this paper, we present a novel metric, H-score, an easily-computable evaluation
function that estimates the performance of transferred representations from one
task to another in classification problems using statistical and information
theoretic principles. Experiments on real image data show that our metric is
not only consistent with the empirical transferability measurement, but also
useful to practitioners in applications such as source model selection and task
transfer curriculum learning.
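The H-score itself has a closed form: for a feature function f and target label Y, H(f) = tr(cov(f(X))^{-1} cov(E[f(X)|Y])). Below is a minimal NumPy sketch of that definition; the function name and the pseudo-inverse (used to guard against a rank-deficient feature covariance) are implementation choices made here, not prescribed by the paper.

```python
import numpy as np

def h_score(features: np.ndarray, labels: np.ndarray) -> float:
    """H-score of features f(X) for labels Y:
    H(f) = tr( cov(f(X))^{-1} cov( E[f(X) | Y] ) )."""
    cov_f = np.cov(features, rowvar=False)        # cov(f(X)), shape (d, d)
    # Build E[f(X) | Y] by replacing each row with its class-conditional mean.
    cond_mean = np.zeros_like(features)
    for c in np.unique(labels):
        mask = labels == c
        cond_mean[mask] = features[mask].mean(axis=0)
    cov_g = np.cov(cond_mean, rowvar=False)       # cov(E[f(X) | Y])
    # Pseudo-inverse in case the feature covariance is rank-deficient.
    return float(np.trace(np.linalg.pinv(cov_f) @ cov_g))

# Toy check: class-dependent features should score higher than pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=600)
informative = rng.standard_normal((600, 8)) + y[:, None]   # depends on class
noise = rng.standard_normal((600, 8))                      # independent of class
print(h_score(informative, y) > h_score(noise, y))         # True
```

Intuitively, the numerator term grows when the class-conditional means of the features are well separated, which is why discriminative features receive a larger H-score.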
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful even on classification tasks whose annotations are scarce or entirely non-overlapping.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Understanding the Transferability of Representations via Task-Relatedness [8.425690424016986]
We propose a novel analysis of the transferability of a pre-trained model's representations to downstream tasks in terms of their relatedness to a given reference task.
Our experiments using state-of-the-art pre-trained models show the effectiveness of task-relatedness in explaining transferability on various vision and language tasks.
arXiv Detail & Related papers (2023-07-03T08:06:22Z)
- Transferability Estimation Based On Principal Gradient Expectation [68.97403769157117]
A reliable measure of cross-task transferability should be consistent with the actual transfer results while remaining self-consistent.
Existing transferability metrics are estimated on a particular model by converting between source and target tasks.
We propose Principal Gradient Expectation (PGE), a simple yet effective method for assessing transferability across tasks.
arXiv Detail & Related papers (2022-11-29T15:33:02Z)
- Provable Benefits of Representational Transfer in Reinforcement Learning [59.712501044999875]
We study the problem of representational transfer in RL, where an agent first pretrains in a number of source tasks to discover a shared representation.
We show that given generative access to source tasks, we can discover a representation, using which subsequent linear RL techniques quickly converge to a near-optimal policy.
arXiv Detail & Related papers (2022-05-29T04:31:29Z)
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on resource task sampling by leveraging techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance (a toy version of this loop is sketched below).
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
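A toy version of that estimate-then-sample loop, under assumptions made here rather than taken from the paper: relevance arrives as a noisy scalar per source task, and each round's sample budget is allocated multinomially in proportion to the estimates. The paper's actual relevance estimator is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def active_source_sampling(relevance_fn, num_tasks, budget, rounds):
    """Each round: re-estimate every source task's relevance to the target,
    then draw this round's sample budget in proportion to those estimates."""
    counts = np.zeros(num_tasks, dtype=int)
    for _ in range(rounds):
        rel = np.array([relevance_fn(t) for t in range(num_tasks)])
        rel = np.clip(rel, 1e-8, None)            # keep probabilities valid
        counts += rng.multinomial(budget, rel / rel.sum())
    return counts

# Hypothetical run: task 0 is most relevant, so it should dominate the draws.
true_rel = np.array([0.7, 0.2, 0.1])
noisy = lambda t: true_rel[t] + 0.05 * rng.standard_normal()
print(active_source_sampling(noisy, num_tasks=3, budget=100, rounds=10))
```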
- Towards All-around Knowledge Transferring: Learning From Task-irrelevant Labels [44.036667329736225]
Existing efforts mainly focus on transferring task-relevant knowledge from other similar data to tackle the issue.
To date, no large-scale studies have been performed to investigate the impact of task-irrelevant features.
We propose Task-Irrelevant Transfer Learning to exploit task-irrelevant features, which are mainly extracted from task-irrelevant labels.
arXiv Detail & Related papers (2020-11-17T06:43:58Z)
- Uniform Priors for Data-Efficient Transfer [65.086680950871]
We show that features that are most transferable have high uniformity in the embedding space.
We evaluate the regularization on its ability to facilitate adaptation to unseen tasks and data (one common way to quantify such uniformity is sketched below).
arXiv Detail & Related papers (2020-06-30T04:39:36Z)
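The entry does not spell out how uniformity is measured; one common proxy is the log of the mean pairwise Gaussian potential on L2-normalized embeddings, following Wang & Isola's uniformity loss. This formulation is assumed here, not taken from this paper.

```python
import numpy as np

def uniformity(embeddings: np.ndarray, t: float = 2.0) -> float:
    """log E exp(-t * ||z_i - z_j||^2) over distinct pairs of L2-normalized
    embeddings; lower values mean a more uniform spread on the hypersphere."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    pairs = sq_dists[~np.eye(len(z), dtype=bool)]   # drop self-distances
    return float(np.log(np.mean(np.exp(-t * pairs))))

# Spread-out points score lower (more uniform) than near-collapsed ones.
rng = np.random.default_rng(0)
spread = rng.standard_normal((256, 32))
collapsed = 0.01 * rng.standard_normal((256, 32)) + 1.0
print(uniformity(spread) < uniformity(collapsed))   # True
```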
- Exploring and Predicting Transferability across NLP Tasks [115.6278033699853]
We study the transferability between 33 NLP tasks across three broad classes of problems.
Our results show that transfer learning is more beneficial than previously thought.
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task (a toy ranking rule is sketched after this list).
arXiv Detail & Related papers (2020-05-02T09:39:36Z)
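As a closing illustration of the task-embedding idea in the last entry, here is a hedged toy ranking rule. Cosine similarity between task embeddings is an assumption made here; the paper's embeddings and similarity measure may differ.

```python
import numpy as np

def rank_source_tasks(task_embs, target):
    """Rank candidate source tasks by cosine similarity of their task
    embedding to the target task's embedding, most similar first."""
    t = task_embs[target] / np.linalg.norm(task_embs[target])
    scores = {name: float(emb @ t / np.linalg.norm(emb))
              for name, emb in task_embs.items() if name != target}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical embeddings for three NLP source tasks plus a target.
rng = np.random.default_rng(1)
embs = {name: rng.standard_normal(16)
        for name in ("sst2", "mnli", "squad", "target")}
print(rank_source_tasks(embs, "target"))  # source tasks, best guess first
```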
This list is automatically generated from the titles and abstracts of the papers in this site.