Towards All-around Knowledge Transferring: Learning From Task-irrelevant
Labels
- URL: http://arxiv.org/abs/2011.08470v2
- Date: Wed, 4 May 2022 10:06:15 GMT
- Title: Towards All-around Knowledge Transferring: Learning From Task-irrelevant
Labels
- Authors: Yinghui Li, Ruiyang Liu, ZiHao Zhang, Ning Ding, Ying Shen, Linmi Tao,
Hai-Tao Zheng
- Abstract summary: Existing efforts mainly focus on transferring task-relevant knowledge from other similar data to tackle the issue.
To date, no large-scale studies have been performed to investigate the impact of task-irrelevant features.
We propose Task-Irrelevant Transfer Learning (TIRTL) to exploit task-irrelevant features, which are mainly extracted from task-irrelevant labels.
- Score: 44.036667329736225
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural models have achieved strong performance on numerous
classification tasks, but they require sufficient manually annotated data.
Since it is extremely time-consuming and expensive to annotate adequate data
for each classification task, learning an effective model that generalizes
well from small datasets has received increasing attention. Existing efforts
mainly focus on transferring task-relevant knowledge from other similar data
to tackle this issue. These approaches have yielded remarkable improvements,
yet they neglect the fact that task-irrelevant features can cause substantial
negative transfer. To date, no large-scale studies have investigated the
impact of task-irrelevant features, let alone the utilization of such
features. In this paper, we propose Task-Irrelevant Transfer Learning (TIRTL)
to exploit task-irrelevant features, which are mainly extracted from
task-irrelevant labels. In particular, we suppress the expression of
task-irrelevant information to facilitate the learning of the classification
task, and we provide a theoretical explanation of our method. In addition,
TIRTL does not conflict with methods that exploit task-relevant knowledge and
can be combined with them, enabling the simultaneous utilization of
task-relevant and task-irrelevant features for the first time. To verify the
effectiveness of our theory and method, we conduct extensive experiments on
facial expression recognition and digit recognition tasks. Our source code
will also be made available for reproducibility.
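The abstract describes the mechanism only at a high level. Below is a minimal illustrative sketch, assuming one plausible realization: a shared encoder with an auxiliary head trained on the task-irrelevant labels through a gradient-reversal layer (in the spirit of domain-adversarial training), so that the shared features suppress task-irrelevant information while the main head learns the classification task. This is an assumption for illustration, not the authors' TIRTL implementation; all class, function, and parameter names are hypothetical.

```python
# Minimal sketch (hypothetical names): shared encoder, a task head for the
# task-relevant labels, and an auxiliary head for task-irrelevant labels
# trained through a gradient-reversal layer so the shared features are pushed
# to discard task-irrelevant information.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DualHeadNet(nn.Module):
    def __init__(self, in_dim, hid_dim, n_task_classes, n_irrelevant_classes, lam=0.1):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.task_head = nn.Linear(hid_dim, n_task_classes)              # task-relevant labels
        self.irrelevant_head = nn.Linear(hid_dim, n_irrelevant_classes)  # task-irrelevant labels

    def forward(self, x):
        z = self.encoder(x)
        task_logits = self.task_head(z)
        # Gradients from the auxiliary loss are reversed before reaching the encoder.
        irrelevant_logits = self.irrelevant_head(GradReverse.apply(z, self.lam))
        return task_logits, irrelevant_logits


def train_step(model, optimizer, x, y_task, y_irrelevant):
    """One step on a batch annotated with both task and task-irrelevant labels."""
    optimizer.zero_grad()
    task_logits, irrelevant_logits = model(x)
    loss = (F.cross_entropy(task_logits, y_task)
            + F.cross_entropy(irrelevant_logits, y_irrelevant))
    loss.backward()  # the reversal layer turns the auxiliary loss into suppression
    optimizer.step()
    return loss.item()
```

In this sketch the coefficient lam controls how strongly task-irrelevant information is suppressed in the shared features; setting it to zero reduces the setup to plain multi-task learning on both label sets.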
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks with little or non-overlapping annotations.
We propose a novel approach where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- An Information-Theoretic Approach to Transferability in Task Transfer Learning [16.05523977032659]
Task transfer learning is a popular technique in image processing applications that uses pre-trained models to reduce the supervision cost of related tasks.
We present a novel metric, H-score, that estimates the performance of transferred representations from one task to another in classification problems (see the sketch after this list).
arXiv Detail & Related papers (2022-12-20T08:47:17Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Cross-Task Knowledge Distillation in Multi-Task Recommendation [41.62428191434233]
Multi-task learning has been widely used in real-world recommenders to predict different types of user feedback.
We propose a Cross-Task Knowledge Distillation framework in recommendation, which consists of three procedures.
arXiv Detail & Related papers (2022-02-20T16:15:19Z)
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on resource task sampling by leveraging the techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
- On the importance of cross-task features for class-incremental learning [14.704888854064501]
In class-incremental learning, an agent with limited resources needs to learn a sequence of classification tasks.
The main difference with task-incremental learning, where a task-ID is available at inference time, is that the learner also needs to perform cross-task discrimination.
arXiv Detail & Related papers (2021-06-22T17:03:15Z)
- Exploring and Predicting Transferability across NLP Tasks [115.6278033699853]
We study the transferability between 33 NLP tasks across three broad classes of problems.
Our results show that transfer learning is more beneficial than previously thought.
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task.
arXiv Detail & Related papers (2020-05-02T09:39:36Z)
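For the H-score transferability metric mentioned in the related papers above, the following is a minimal sketch assuming its commonly cited definition: the trace of the (pseudo-)inverse overall feature covariance multiplied by the between-class covariance of the class-conditional feature means. The function and variable names are illustrative, not taken from the cited paper.

```python
# Minimal sketch (illustrative names) of an H-score-style estimate: higher values
# suggest that source-model features separate the target classes more easily.
import numpy as np


def h_score(features, labels, eps=1e-8):
    """features: (n, d) source-model representations; labels: (n,) target-class ids."""
    features = features - features.mean(axis=0, keepdims=True)
    cov_f = np.cov(features, rowvar=False)  # overall feature covariance, (d, d)

    # Class-conditional means E[f(X) | Y = y], broadcast back to every sample.
    cond_mean = np.zeros_like(features)
    for y in np.unique(labels):
        idx = labels == y
        cond_mean[idx] = features[idx].mean(axis=0)
    cov_g = np.cov(cond_mean, rowvar=False)  # between-class covariance, (d, d)

    # Trace of pinv(cov_f) @ cov_g, with a small ridge term for numerical stability.
    inv_cov_f = np.linalg.pinv(cov_f + eps * np.eye(cov_f.shape[0]))
    return float(np.trace(inv_cov_f @ cov_g))
```

In practice, such a score would be computed on features that a pre-trained source model extracts for target-task data, and candidate source tasks or models could then be ranked by the resulting values.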