Learning Invariant Representations across Domains and Tasks
- URL: http://arxiv.org/abs/2103.05114v1
- Date: Wed, 3 Mar 2021 11:18:43 GMT
- Title: Learning Invariant Representations across Domains and Tasks
- Authors: Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun
Xu, Tao Qin, Tie-Yan Liu
- Abstract summary: We propose a novel Task Adaptation Network (TAN) to solve this unsupervised task transfer problem.
In addition to learning transferable features via domain-adversarial training, we propose a novel task semantic adaptor that uses the learning-to-learn strategy to adapt the task semantics.
TAN significantly increases the recall and F1 score by 5.0% and 7.8% compared to recent strong baselines.
- Score: 81.30046935430791
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since it is expensive and time-consuming to collect massive COVID-19
image samples to train deep classification models, transfer learning, which
transfers knowledge from the abundant typical pneumonia datasets, is a promising
approach for COVID-19 image classification. However, negative transfer may
degrade performance because of the feature distribution divergence between the
two datasets and the task semantic difference between diagnosing pneumonia and
COVID-19, which rely on different image characteristics. The problem is even more
challenging when the target dataset has no labels available, i.e., unsupervised
task transfer learning. In this
paper, we propose a novel Task Adaptation Network (TAN) to solve this
unsupervised task transfer problem. In addition to learning transferable
features via domain-adversarial training, we propose a novel task semantic
adaptor that uses the learning-to-learn strategy to adapt the task semantics.
Experiments on three public COVID-19 datasets demonstrate that our proposed
method achieves superior performance. Especially on COVID-DA dataset, TAN
significantly increases the recall and F1 score by 5.0% and 7.8% compared to
recent strong baselines. Moreover, we show that TAN also achieves superior
performance on several public domain adaptation benchmarks.
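As background for the domain-adversarial component mentioned in the abstract, the sketch below shows the standard gradient-reversal formulation of domain-adversarial feature learning. The layer sizes, class names, and PyTorch framing are illustrative assumptions; this is not the authors' TAN implementation and it omits the task semantic adaptor.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; scales and flips the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversed gradients push the feature extractor toward domain-invariant features.
        return -ctx.lambd * grad_output, None

class DomainAdversarialNet(nn.Module):
    def __init__(self, in_dim=512, feat_dim=256, num_classes=2):
        super().__init__()
        self.feature_extractor = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.label_classifier = nn.Linear(feat_dim, num_classes)   # e.g. pneumonia vs. COVID-19
        self.domain_discriminator = nn.Linear(feat_dim, 2)         # source vs. target domain

    def forward(self, x, lambd=1.0):
        feat = self.feature_extractor(x)
        class_logits = self.label_classifier(feat)
        domain_logits = self.domain_discriminator(GradReverse.apply(feat, lambd))
        return class_logits, domain_logits

# Usage: classification loss on labeled source data plus a domain loss on both domains.
model = DomainAdversarialNet()
x = torch.randn(8, 512)                       # a batch of pre-extracted image features (hypothetical)
class_logits, domain_logits = model(x, lambd=0.5)
```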
Related papers
- Unsupervised Transfer Learning via Adversarial Contrastive Training [3.227277661633986]
We propose a novel unsupervised transfer learning approach using adversarial contrastive training (ACT).
Our experimental results demonstrate outstanding classification accuracy with both fine-tuned linear probe and K-NN protocol across various datasets.
arXiv Detail & Related papers (2024-08-16T05:11:52Z)
- Less is More: High-value Data Selection for Visual Instruction Tuning [127.38740043393527]
We propose a high-value data selection approach TIVE, to eliminate redundancy within the visual instruction data and reduce the training cost.
Our approach using only about 15% data can achieve comparable average performance to the full-data fine-tuned model across eight benchmarks.
arXiv Detail & Related papers (2024-03-14T16:47:25Z)
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- GenCo: An Auxiliary Generator from Contrastive Learning for Enhanced Few-Shot Learning in Remote Sensing [9.504503675097137]
We introduce a generator-based contrastive learning framework (GenCo) that pre-trains backbones and simultaneously explores variants of feature samples.
In fine-tuning, the auxiliary generator can be used to enrich limited labeled data samples in feature space.
We demonstrate the effectiveness of our method in improving few-shot learning performance on two key remote sensing datasets.
arXiv Detail & Related papers (2023-07-27T03:59:19Z)
- MT-SLVR: Multi-Task Self-Supervised Learning for Transformation In(Variant) Representations [2.94944680995069]
We propose a multi-task self-supervised framework (MT-SLVR) that learns both variant and invariant features in a parameter-efficient manner.
We evaluate our approach on few-shot classification tasks drawn from a variety of audio domains and demonstrate improved classification performance.
arXiv Detail & Related papers (2023-05-29T09:10:50Z)
- Beyond Transfer Learning: Co-finetuning for Action Localisation [64.07196901012153]
We propose co-finetuning -- simultaneously training a single model on multiple "upstream" and "downstream" tasks.
We demonstrate that co-finetuning outperforms traditional transfer learning when using the same total amount of data.
We also show how we can easily extend our approach to multiple "upstream" datasets to further improve performance.
arXiv Detail & Related papers (2022-07-08T10:25:47Z)
- Efficient Self-supervised Vision Transformers for Representation Learning [86.57557009109411]
We show that multi-stage architectures with sparse self-attentions can significantly reduce modeling complexity.
We propose a new pre-training task of region matching which allows the model to capture fine-grained region dependencies.
Our results show that combining the two techniques, EsViT achieves 81.3% top-1 on the ImageNet linear probe evaluation.
arXiv Detail & Related papers (2021-06-17T19:57:33Z)
- Boosting Deep Transfer Learning for COVID-19 Classification [18.39034705389625]
COVID-19 classification using chest Computed Tomography (CT) has been found pragmatically useful.
It is still unknown if there are better strategies than vanilla transfer learning for more accurate COVID-19 classification with limited CT data.
This paper devises a novel "model" augmentation technique that allows a considerable performance boost to transfer learning for the task.
arXiv Detail & Related papers (2021-02-16T11:15:23Z)
- Duality Diagram Similarity: a generic framework for initialization selection in task transfer learning [20.87279811893808]
We propose a new, highly efficient and accurate approach based on duality diagram similarity (DDS) between deep neural networks (DNNs).
We validate our approach on the Taskonomy dataset by measuring the correspondence between actual transfer learning performance rankings and predicted rankings.
arXiv Detail & Related papers (2020-08-05T13:00:34Z)
- Exploring and Predicting Transferability across NLP Tasks [115.6278033699853]
We study the transferability between 33 NLP tasks across three broad classes of problems.
Our results show that transfer learning is more beneficial than previously thought.
We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task.
arXiv Detail & Related papers (2020-05-02T09:39:36Z)