Transfer Learning in Conversational Analysis through Reusing
Preprocessing Data as Supervisors
- URL: http://arxiv.org/abs/2112.03032v1
- Date: Thu, 2 Dec 2021 08:40:42 GMT
- Title: Transfer Learning in Conversational Analysis through Reusing
Preprocessing Data as Supervisors
- Authors: Joshua Yee Kim, Tongliang Liu, Kalina Yacef
- Abstract summary: Using noisy labels in single-task learning increases the risk of over-fitting.
Auxiliary tasks could improve the performance of the primary task learning during the same training.
- Score: 52.37504333689262
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conversational analysis systems are trained using noisy human labels and
often require heavy preprocessing during multi-modal feature extraction. Using
noisy labels in single-task learning increases the risk of over-fitting.
Auxiliary tasks could improve the performance of the primary task learning
during the same training -- this approach sits at the intersection of transfer
learning and multi-task learning (MTL). In this paper, we explore how the
preprocessed data used for feature engineering can be re-used as auxiliary
tasks, thereby promoting the productive use of data. Our main contributions
are: (1) the identification of sixteen beneficial auxiliary tasks, (2)
studying the method of distributing learning capacity between the primary and
auxiliary tasks, and (3) studying the relative supervision hierarchy between
the primary and auxiliary tasks. Extensive experiments on IEMOCAP and SEMAINE
data validate the improvements over single-task approaches, and suggest that
the approach may generalize across multiple primary tasks.
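Below is a minimal PyTorch-style sketch of the idea, not the authors' released code: a shared encoder feeds a primary head trained on the (noisy) human label and auxiliary heads trained to predict preprocessed features that would otherwise serve only as inputs. All module names, dimensions, and the auxiliary weight are illustrative assumptions.

```python
# Hypothetical sketch of auxiliary supervision from preprocessing outputs.
# Module names, dimensions, and the auxiliary weight are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxiliarySupervisionModel(nn.Module):
    def __init__(self, in_dim=300, hid_dim=128, n_classes=4, aux_dims=(10, 20)):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hid_dim, batch_first=True)  # shared capacity
        self.primary_head = nn.Linear(hid_dim, n_classes)         # noisy human label
        self.aux_heads = nn.ModuleList(nn.Linear(hid_dim, d) for d in aux_dims)

    def forward(self, x):
        _, h = self.encoder(x)          # h: (1, batch, hid_dim)
        h = h.squeeze(0)
        return self.primary_head(h), [head(h) for head in self.aux_heads]

def joint_loss(primary_logits, aux_preds, y, aux_targets, aux_weight=0.1):
    # Primary cross-entropy plus down-weighted regression onto the
    # preprocessed features reused as auxiliary targets.
    loss = F.cross_entropy(primary_logits, y)
    for pred, target in zip(aux_preds, aux_targets):
        loss = loss + aux_weight * F.mse_loss(pred, target)
    return loss
```

Where the auxiliary heads attach and how heavily their losses count correspond, loosely, to the paper's questions of distributing learning capacity and of the supervision hierarchy between primary and auxiliary tasks.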
Related papers
- Data-CUBE: Data Curriculum for Instruction-based Sentence Representation Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the order of all multi-task data for training.
At the task level, we aim to find the optimal task order to minimize the total cross-task interference risk.
At the instance level, we measure the difficulty of all instances per task, then divide them into easy-to-difficult mini-batches for training (this step is sketched after this entry).
arXiv Detail & Related papers (2024-01-07T18:12:20Z)
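A hedged sketch of the instance-level step from the Data-CUBE summary above: score each instance's difficulty per task, sort from easy to difficult, and slice the ordered list into mini-batches. The difficulty function is a placeholder, not the paper's actual measure.

```python
# Illustrative per-task easy-to-difficult mini-batching (not Data-CUBE's exact method).
from typing import Any, Callable, Dict, List, Sequence

def easy_to_difficult_batches(
    task_data: Dict[str, Sequence[Any]],
    difficulty: Callable[[Any], float],   # placeholder difficulty score
    batch_size: int = 32,
) -> Dict[str, List[List[Any]]]:
    batches: Dict[str, List[List[Any]]] = {}
    for task, instances in task_data.items():
        ordered = sorted(instances, key=difficulty)        # easiest first
        batches[task] = [list(ordered[i:i + batch_size])
                         for i in range(0, len(ordered), batch_size)]
    return batches
```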
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little or non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task [24.618094251341958]
Multi-task learning aims to improve model performance by transferring and exploiting common knowledge among tasks.
We propose a framework that learns these tasks by leveraging abundant information from a learnt auxiliary big task with sufficiently many classes to cover those of all these tasks.
Our experimental results demonstrate its effectiveness in comparison with the state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-07T02:46:47Z)
- Learning Multiple Dense Prediction Tasks from Partially Annotated Data [41.821234589075445]
We look at jointly learning multiple dense prediction tasks on partially annotated data, a setting we call multi-task partially-supervised learning.
We propose a multi-task training procedure that successfully leverages task relations to supervise learning when data is partially annotated.
We rigorously demonstrate that our proposed method effectively exploits the images with unlabelled tasks and outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
arXiv Detail & Related papers (2021-11-29T19:03:12Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks (the generic Gumbel-Softmax relaxation is sketched after this entry).
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
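For readers unfamiliar with the relaxation named in the title above, this is the standard Gumbel-Softmax sampling trick in isolation; it is not the VMTL framework itself, and how the paper builds priors from it is not described in the summary. The temperature value is arbitrary.

```python
# Generic Gumbel-Softmax relaxation (torch.nn.functional.gumbel_softmax offers the same).
import torch

def gumbel_softmax_sample(logits: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    # Add Gumbel(0, 1) noise, then soften the argmax with a temperature-scaled softmax.
    gumbel_noise = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return torch.softmax((logits + gumbel_noise) / temperature, dim=-1)
```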
- Measuring and Harnessing Transference in Multi-Task Learning [58.48659733262734]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We analyze the dynamics of information transfer, or transference, across tasks throughout training (a lookahead-style probe is sketched after this entry).
arXiv Detail & Related papers (2020-10-29T08:25:43Z)
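One hedged way to operationalize the transference idea summarized above, reconstructed from the general notion of a lookahead update rather than the paper's exact equations: apply one task's gradient to a copy of the shared model and measure how a second task's loss changes.

```python
# Hedged lookahead probe of transference between two tasks.
# Positive values mean task i's update reduced task j's loss on this batch.
import copy
import torch

def transference(model, loss_i, loss_j_fn, batch_j, lr=1e-2):
    grads = torch.autograd.grad(loss_i, model.parameters(), retain_graph=True)
    lookahead = copy.deepcopy(model)
    with torch.no_grad():
        for p, g in zip(lookahead.parameters(), grads):
            p -= lr * g                                # one SGD step on task i only
        before = loss_j_fn(model, batch_j)             # task j's loss before the step
        after = loss_j_fn(lookahead, batch_j)          # ... and after
    return 1.0 - (after / before).item()
```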
- Auxiliary Task Reweighting for Minimum-data Learning [118.69683270159108]
Supervised learning requires a large amount of training data, limiting its application where labeled data is scarce.
To compensate for data scarcity, one possible method is to utilize auxiliary tasks to provide additional supervision for the main task.
We propose a method to automatically reweight auxiliary tasks in order to reduce the data requirement on the main task (see the weighted-loss sketch after this entry).
arXiv Detail & Related papers (2020-10-16T08:45:37Z)
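A minimal sketch of the weighted-loss idea from the summary above, assuming learnable per-task weights rather than the paper's specific reweighting rule: the total loss is the main loss plus a softmax-weighted sum of auxiliary losses.

```python
# Illustrative auxiliary-loss reweighting; the paper derives the weights differently,
# here they are simply learnable parameters kept positive via a softmax.
import torch
import torch.nn as nn

class ReweightedAuxiliaryLoss(nn.Module):
    def __init__(self, n_aux: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_aux))   # one logit per auxiliary task

    def forward(self, main_loss: torch.Tensor, aux_losses: list) -> torch.Tensor:
        weights = torch.softmax(self.logits, dim=0)      # positive, sums to one
        return main_loss + torch.sum(weights * torch.stack(aux_losses))
```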
- A Brief Review of Deep Multi-task Learning and Auxiliary Task Learning [0.0]
Multi-task learning (MTL) optimizes several learning tasks simultaneously.
Auxiliary tasks can be added to the main task to boost the performance.
arXiv Detail & Related papers (2020-07-02T14:23:39Z)