Unsupervised Transfer Learning with Self-Supervised Remedy
- URL: http://arxiv.org/abs/2006.04737v1
- Date: Mon, 8 Jun 2020 16:42:17 GMT
- Title: Unsupervised Transfer Learning with Self-Supervised Remedy
- Authors: Jiabo Huang and Shaogang Gong
- Abstract summary: Generalising deep networks to novel domains without manual labels is challenging for deep learning.
Pre-learned knowledge does not transfer well without making strong assumptions about the learned and the novel domains.
In this work, we aim to learn a discriminative latent space of the unlabelled target data in a novel domain by knowledge transfer from labelled related domains.
- Score: 60.315835711438936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalising deep networks to novel domains without manual labels is
challenging for deep learning. This problem is intrinsically difficult due to the
unpredictable, changing nature of imagery data distributions in novel domains.
Pre-learned knowledge does not transfer well without making strong assumptions
about the learned and the novel domains. Different methods have been studied to
address the underlying problem based on different assumptions, e.g. from domain
adaptation to zero-shot and few-shot learning. In this work, we address this
problem by transfer clustering that aims to learn a discriminative latent space
of the unlabelled target data in a novel domain by knowledge transfer from
labelled related domains. Specifically, we want to leverage relative (pairwise)
imagery information, which is freely available and intrinsic to a target
domain, to model the target domain image distribution characteristics as well
as the prior knowledge learned from related labelled domains to enable more
discriminative clustering of unlabelled target data. Our method mitigates
non-transferable prior knowledge through self-supervision, benefiting from both
transfer and self-supervised learning. Extensive experiments on four datasets
for image clustering tasks reveal the superiority of our model over the
state-of-the-art transfer clustering techniques. We further demonstrate its
competitive transferability on four zero-shot learning benchmarks.
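The core transfer clustering idea described above, pairwise (relative) relations intrinsic to the target domain constraining the clustering of unlabelled data in a feature space transferred from a labelled source, can be illustrated with a minimal toy sketch. This is not the authors' implementation: the two-cluster synthetic data, the mutual-nearest-neighbour pairing rule, and plain k-means clustering are all illustrative assumptions standing in for the learned encoder and the paper's actual objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "target domain" features: two tight clusters, as if produced by
# an encoder pre-trained on a related labelled source domain.
centers = np.array([[2.0, 0.0], [-2.0, 0.0]])
feats = np.vstack([c + 0.3 * rng.standard_normal((50, 2)) for c in centers])

# Pairwise (relative) information that is "freely available" in the
# target domain: treat mutual nearest neighbours in the transferred
# feature space as pseudo-positive pairs.
normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
sim = normed @ normed.T
np.fill_diagonal(sim, -np.inf)
nn = sim.argmax(axis=1)  # each sample's nearest neighbour
mutual = np.array([i == nn[nn[i]] for i in range(len(feats))])

def kmeans(x, k, iters=20):
    """Plain k-means, standing in for the learned clustering."""
    cent = x[rng.choice(len(x), k, replace=False)]
    for _ in range(iters):
        d = ((x[:, None] - cent[None]) ** 2).sum(-1)
        lab = d.argmin(1)
        cent = np.array([x[lab == j].mean(0) for j in range(k)])
    return lab

labels = kmeans(feats, k=2)

# The pairwise constraint a transfer clustering objective would enforce:
# pseudo-positive pairs should land in the same cluster.
agree = np.mean(labels[np.arange(len(feats))[mutual]] == labels[nn[mutual]])
print(f"mutual-NN pairs in same cluster: {agree:.2f}")
```

In the paper this pairwise signal regularises the learned latent space itself rather than being checked after the fact, which is what lets self-supervision override non-transferable source priors; here it only scores a fixed clustering.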
Related papers
- CDFSL-V: Cross-Domain Few-Shot Learning for Videos [58.37446811360741]
Few-shot video action recognition is an effective approach to recognizing new categories with only a few labeled examples.
Existing methods in video action recognition rely on large labeled datasets from the same domain.
We propose a novel cross-domain few-shot video action recognition method that leverages self-supervised learning and curriculum learning.
arXiv Detail & Related papers (2023-09-07T19:44:27Z)
- Deep face recognition with clustering based domain adaptation [57.29464116557734]
We propose a new clustering-based domain adaptation method designed for face recognition task in which the source and target domain do not share any classes.
Our method effectively learns discriminative target features by aligning the feature domain globally while, at the same time, distinguishing the target clusters locally.
arXiv Detail & Related papers (2022-05-27T12:29:11Z)
- Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z)
- Bridge to Target Domain by Prototypical Contrastive Learning and Label Confusion: Re-explore Zero-Shot Learning for Slot Filling [18.19818129121059]
Cross-domain slot filling alleviates the data dependence in the case of data scarcity in the target domain.
We propose a novel approach based on prototypical contrastive learning with a dynamic label confusion strategy for zero-shot slot filling.
Our model achieves significant improvement on the unseen slots, while also setting a new state of the art on the slot filling task.
arXiv Detail & Related papers (2021-10-07T15:50:56Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
Unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labelled source domain to an unlabelled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Detecting Bias in Transfer Learning Approaches for Text Classification [3.968023038444605]
In a supervised learning setting, labels are always needed for the classification task.
In this work, we evaluate some existing transfer learning approaches on detecting the bias of imbalanced classes.
arXiv Detail & Related papers (2021-02-03T15:48:21Z)
- A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z)
- CSCL: Critical Semantic-Consistent Learning for Unsupervised Domain Adaptation [42.226842513334184]
We develop a new Critical Semantic-Consistent Learning model, which mitigates the discrepancy of both domain-wise and category-wise distributions.
Specifically, a critical transfer based adversarial framework is designed to highlight transferable domain-wise knowledge while neglecting untransferable knowledge.
arXiv Detail & Related papers (2020-08-24T14:12:04Z)
- Physically-Constrained Transfer Learning through Shared Abundance Space for Hyperspectral Image Classification [14.840925517957258]
We propose a new transfer learning scheme to bridge the gap between the source and target domains.
The proposed method is referred to as physically-constrained transfer learning through shared abundance space.
arXiv Detail & Related papers (2020-08-19T17:41:37Z)
- Unsupervised Cross-domain Image Classification by Distance Metric Guided Feature Alignment [11.74643883335152]
Unsupervised domain adaptation is a promising avenue which transfers knowledge from a source domain to a target domain.
We propose distance metric guided feature alignment (MetFA) to extract discriminative as well as domain-invariant features on both source and target domains.
Our model integrates class distribution alignment to transfer semantic knowledge from a source domain to a target domain.
arXiv Detail & Related papers (2020-08-19T13:36:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.