Anti-Transfer Learning for Task Invariance in Convolutional Neural
Networks for Speech Processing
- URL: http://arxiv.org/abs/2006.06494v2
- Date: Wed, 13 Jan 2021 11:15:35 GMT
- Title: Anti-Transfer Learning for Task Invariance in Convolutional Neural
Networks for Speech Processing
- Authors: Eric Guizzo, Tillman Weyde, Giacomo Tarroni
- Abstract summary: We introduce the novel concept of anti-transfer learning for speech processing with convolutional neural networks.
We show that anti-transfer actually leads to the intended invariance to the orthogonal task and to more appropriate features for the target task at hand.
- Score: 6.376852004129252
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We introduce the novel concept of anti-transfer learning for speech
processing with convolutional neural networks. While transfer learning assumes
that the learning process for a target task will benefit from re-using
representations learned for another task, anti-transfer avoids the learning of
representations that have been learned for an orthogonal task, i.e., one that
is not relevant and potentially misleading for the target task, such as speaker
identity for speech recognition or speech content for emotion recognition. In
anti-transfer learning, we penalize similarity between activations of a network
being trained and another one previously trained on an orthogonal task, which
yields more suitable representations. This leads to better generalization and
provides a degree of control over correlations that are spurious or
undesirable, e.g. to avoid social bias. We have implemented anti-transfer for
convolutional neural networks in different configurations with several
similarity metrics and aggregation functions, which we evaluate and analyze
with several speech and audio tasks and settings, using six datasets. We show
that anti-transfer actually leads to the intended invariance to the orthogonal
task and to more appropriate features for the target task at hand.
Anti-transfer learning consistently improves classification accuracy in all
test cases. While anti-transfer creates computation and memory cost at training
time, there is relatively little computation cost when using pre-trained models
for orthogonal tasks. Anti-transfer is widely applicable and particularly
useful where a specific invariance is desirable or where trained models are
available and labeled data for orthogonal tasks are difficult to obtain.
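As a concrete illustration, the core idea can be sketched as a training-time penalty on the similarity between matched activations of the network being trained and a frozen network pre-trained on the orthogonal task. The sketch below assumes PyTorch, squared cosine similarity as the metric, mean aggregation over spatial dimensions, a single matched convolutional layer exposed through a hypothetical `features()` accessor, and a weighting factor `beta`; the paper evaluates several metrics, aggregation functions, and layer configurations, so this is one illustrative setup rather than the authors' exact implementation.

```python
# Illustrative anti-transfer penalty (one configuration; not the authors'
# exact implementation). Assumed: squared cosine similarity, mean
# aggregation over spatial dimensions, a single matched conv layer.
import torch
import torch.nn.functional as F


def anti_transfer_penalty(act_target, act_orthogonal, eps=1e-8):
    """Penalize similarity between activations of the network being trained
    (act_target) and a frozen network pre-trained on an orthogonal task
    (act_orthogonal). Both tensors have shape (batch, channels, H, W)."""
    a = act_target.mean(dim=(2, 3))      # aggregate feature maps: (batch, channels)
    b = act_orthogonal.mean(dim=(2, 3))
    sim = F.cosine_similarity(a, b, dim=1, eps=eps)  # per-example similarity
    return (sim ** 2).mean()             # squared, averaged over the batch


def training_step(model, orthogonal_model, x, y, beta=1.0):
    """Task loss plus anti-transfer penalty. Both models are assumed to
    expose a hypothetical `features(x)` returning matched layer activations."""
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    with torch.no_grad():                # the orthogonal-task network stays frozen
        act_orth = orthogonal_model.features(x)
    act_tgt = model.features(x)
    return task_loss + beta * anti_transfer_penalty(act_tgt, act_orth)
```

In practice the activations would typically be captured with forward hooks rather than a second forward pass, and `beta` trades off target-task accuracy against the strength of the enforced invariance.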
Related papers
- An Exploration of Data Efficiency in Intra-Dataset Task Transfer for
Dialog Understanding [65.75873687351553]
This study explores the effects of varying quantities of target task training data on sequential transfer learning in the dialog domain.
Counterintuitively, our data show that the size of the target-task training data often has minimal effect on how sequential transfer learning performs compared to the same model without transfer learning.
arXiv Detail & Related papers (2022-10-21T04:36:46Z)
- Transfer Learning via Test-Time Neural Networks Aggregation [11.42582922543676]
It has been demonstrated that deep neural networks outperform traditional machine learning methods.
However, deep networks can lack generalisability: they do not perform as well on a new (test) set drawn from a different distribution.
arXiv Detail & Related papers (2022-06-27T15:46:05Z)
- Identifying Suitable Tasks for Inductive Transfer Through the Analysis
of Feature Attributions [78.55044112903148]
We use explainability techniques to predict whether task pairs will be complementary, by comparing neural network activations between single-task models.
Our results show that, through this approach, it is possible to reduce training time by up to 83.5% at a cost of only a 0.034 reduction in positive-class F1 on the TREC-IS 2020-A dataset.
arXiv Detail & Related papers (2022-02-02T15:51:07Z)
- Learning Curves for Sequential Training of Neural Networks:
Self-Knowledge Transfer and Forgetting [9.734033555407406]
We consider neural networks in the neural tangent kernel regime that continually learn target functions from task to task.
We investigate a variant of continual learning where the model learns the same target function in multiple tasks.
Even for the same target, the trained model shows some transfer and forgetting depending on the sample size of each task.
arXiv Detail & Related papers (2021-12-03T00:25:01Z)
- A Bayesian Approach to (Online) Transfer Learning: Theory and Algorithms [6.193838300896449]
We study transfer learning from a Bayesian perspective, where a parametric statistical model is used.
Specifically, we study three variants of transfer learning problems, instantaneous, online, and time-variant transfer learning.
For each problem, we define an appropriate objective function, and provide either exact expressions or upper bounds on the learning performance.
Examples show that the derived bounds are accurate even for small sample sizes.
arXiv Detail & Related papers (2021-09-03T08:43:29Z)
- Frustratingly Easy Transferability Estimation [64.42879325144439]
We propose a simple, efficient, and effective transferability measure named TransRate.
TransRate measures transferability as the mutual information between the features of target examples extracted by a pre-trained model and their labels.
Despite its extraordinary simplicity, requiring only 10 lines of code, TransRate performs remarkably well in extensive evaluations on 22 pre-trained models and 16 downstream tasks (a rough sketch of the idea appears at the end of this list).
arXiv Detail & Related papers (2021-06-17T10:27:52Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on each task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of unsupervisedly learned models towards another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant sources.
arXiv Detail & Related papers (2020-09-24T15:40:55Z)
- Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- Inter- and Intra-domain Knowledge Transfer for Related Tasks in Deep
Character Recognition [2.320417845168326]
Pre-training a deep neural network on the ImageNet dataset is a common practice for training deep learning models.
The technique of pre-training on one task and then retraining on a new one is called transfer learning (a minimal fine-tuning sketch in this spirit appears at the end of this list).
In this paper we analyse the effectiveness of using deep transfer learning for character recognition tasks.
arXiv Detail & Related papers (2020-01-02T14:18:25Z)
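The TransRate entry above describes transferability as the mutual information between pre-trained features and target labels. A rough sketch of how such a score could be computed with a coding-rate estimate is given below; the centering, constants, and the `eps` regularizer are assumptions for illustration, not necessarily the authors' exact formulation.

```python
# Rough sketch of a coding-rate-based transferability score in the spirit of
# TransRate: information in features Z minus information remaining within
# each class. Constants, centering, and eps are illustrative assumptions.
import numpy as np


def coding_rate(Z, eps=1e-4):
    """Rate-distortion style estimate of the information in features Z (n, d)."""
    n, d = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps)) * Z.T @ Z)
    return 0.5 * logdet


def transferability_score(Z, y, eps=1e-4):
    """Approximate mutual information between features Z and labels y as
    R(Z) - sum_c (n_c / n) * R(Z_c)."""
    Z = Z - Z.mean(axis=0, keepdims=True)   # center the features
    n = Z.shape[0]
    score = coding_rate(Z, eps)
    for c in np.unique(y):
        Zc = Z[y == c]
        score -= (len(Zc) / n) * coding_rate(Zc, eps)
    return score
```

A higher score would indicate pre-trained features that already separate the target classes well, which is the intuition behind using such a measure to rank candidate source models without fine-tuning each one.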
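The last entry defines transfer learning as pre-training on one task and then retraining on another. For reference, a minimal PyTorch/torchvision fine-tuning recipe in that spirit might look as follows; the ResNet-18 backbone, freezing policy, and class count are illustrative assumptions, not the setup used in that paper.

```python
# Minimal pre-train-then-retrain (fine-tuning) recipe. The backbone choice,
# freezing policy, and class count are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 62  # e.g. digits plus upper- and lower-case letters

# Start from a network pre-trained on ImageNet.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Optionally freeze the pre-trained feature extractor.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head for the new target task.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_TARGET_CLASSES)

# Retrain: here only the new head's parameters are updated.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```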