Open-Set Hypothesis Transfer with Semantic Consistency
- URL: http://arxiv.org/abs/2010.00292v1
- Date: Thu, 1 Oct 2020 10:44:31 GMT
- Title: Open-Set Hypothesis Transfer with Semantic Consistency
- Authors: Zeyu Feng, Chang Xu and Dacheng Tao
- Abstract summary: We introduce a method that focuses on the semantic consistency of target data under transformation.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised open-set domain adaptation (UODA) is a realistic problem where
unlabeled target data contain unknown classes. Prior methods rely on the
coexistence of both source and target domain data to perform domain alignment,
which greatly limits their applications when source domain data are restricted
due to privacy concerns. This paper addresses the challenging hypothesis
transfer setting for UODA, where data from the source domain are no longer
available during adaptation to the target domain. We introduce a method that
focuses on the semantic consistency of target data under transformation, which
is rarely appreciated by previous domain adaptation methods. Specifically, our
model first discovers confident predictions and performs classification with
pseudo-labels. Then we enforce the model to output consistent and definite
predictions on semantically similar inputs. As a result, unlabeled data can be
classified into discriminative classes that coincide with either source classes or
unknown classes. Experimental results show that our model outperforms
state-of-the-art methods on UODA benchmarks.
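No code accompanies this listing, so here is a minimal PyTorch sketch of the two ingredients the abstract names: pseudo-label classification on confident predictions, plus consistent and definite (low-entropy) outputs on transformed views. The function name, the confidence threshold, and the weak/strong augmentation pairing are all assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def semantic_consistency_loss(model, x_weak, x_strong, conf_threshold=0.95):
    """Sketch (assumed details): pseudo-label target samples the model is
    confident about on a weakly transformed view, then require consistent,
    low-entropy predictions on a strongly transformed view."""
    with torch.no_grad():
        probs_weak = F.softmax(model(x_weak), dim=1)
        confidence, pseudo_labels = probs_weak.max(dim=1)
        mask = confidence.ge(conf_threshold)        # "confident predictions"

    logits_strong = model(x_strong)
    # Classification with pseudo-labels on the confident subset.
    ce = (F.cross_entropy(logits_strong[mask], pseudo_labels[mask])
          if mask.any() else logits_strong.new_zeros(()))

    # "Definite" predictions: penalize per-sample entropy on all samples.
    probs_strong = F.softmax(logits_strong, dim=1)
    entropy = -(probs_strong * probs_strong.clamp_min(1e-8).log()).sum(dim=1).mean()
    return ce + entropy
```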
Related papers
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
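SFDA-DE's exact estimation procedure is in the linked paper; purely as an illustration of "source distribution estimation", one can fit a per-class Gaussian to pseudo-labeled target features and sample surrogate source-like features to align against. All names below are hypothetical.

```python
import torch

def estimate_class_gaussians(features, pseudo_labels, num_classes, eps=1e-6):
    """Illustrative only: per-class mean and diagonal std of pseudo-labeled
    target features, used as a surrogate for the unseen source distribution."""
    means, stds = [], []
    for c in range(num_classes):
        feats_c = features[pseudo_labels == c]      # assumes each class is non-empty
        means.append(feats_c.mean(dim=0))
        stds.append(feats_c.std(dim=0) + eps)
    return torch.stack(means), torch.stack(stds)

def sample_surrogate_source(means, stds, n_per_class):
    """Draw labeled surrogate 'source' features from the estimated Gaussians."""
    feats, labels = [], []
    for c in range(means.size(0)):
        noise = torch.randn(n_per_class, means.size(1))
        feats.append(means[c] + noise * stds[c])
        labels.append(torch.full((n_per_class,), c, dtype=torch.long))
    return torch.cat(feats), torch.cat(labels)
```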
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised Domain Adaptation [88.5448806952394]
We consider unsupervised domain adaptation (UDA), where labeled data from a source domain and unlabeled data from a target domain are used to learn a classifier for the target domain.
We show that contrastive pre-training, which learns features on unlabeled source and target data and then fine-tunes on labeled source data, is competitive with strong UDA methods.
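As a sketch of the recipe summarized above (pre-train contrastively on pooled unlabeled source and target images, then fine-tune on labeled source data), here is a simplified one-directional InfoNCE loss; the real papers typically use a symmetric multi-view variant.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Simplified InfoNCE: embeddings of two augmented views of the same
    image (matching rows) attract; other images in the batch repel."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Phase 1: minimize info_nce over unlabeled source + target images.
# Phase 2: fine-tune a classification head on the labeled source set only.
```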
arXiv Detail & Related papers (2022-04-01T16:56:26Z)
- UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
The Universal Model ADaptation (UMAD) framework handles both open-set and open-partial-set UDA scenarios without access to source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
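UMAD defines its informative consistency score in the paper itself; the snippet below is only a generic stand-in for the idea: score each sample by how consistently the model predicts across transformed views, and flag low-consistency samples as unknown.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def consistency_score(model, views):
    """Generic stand-in (not UMAD's exact score): negative mean KL between
    each view's prediction and the mean prediction over all views."""
    probs = [F.softmax(model(v), dim=1) for v in views]   # each (N, C)
    mean_p = torch.stack(probs).mean(dim=0)
    log_mean = mean_p.clamp_min(1e-8).log()
    kl = sum(F.kl_div(log_mean, p, reduction='none').sum(dim=1) for p in probs)
    return -kl / len(probs)   # higher = more consistent = more likely "known"

# Usage sketch with a hypothetical threshold tau:
# is_unknown = consistency_score(model, [aug_a(x), aug_b(x)]) < tau
```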
arXiv Detail & Related papers (2021-12-16T01:22:59Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting where only a classification model trained on the source data is available, instead of the source data itself.
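A common way this line of work (SHOT/SHOT++) operationalizes hypothesis transfer is to freeze the source classifier, i.e. the "hypothesis", and adapt only the feature extractor on unlabeled target data with an information-maximization objective; the sketch below assumes that recipe.

```python
import torch
import torch.nn.functional as F

def information_maximization_loss(logits):
    """Sketch: make each prediction definite (low per-sample entropy) while
    keeping the batch-averaged prediction diverse (high marginal entropy)."""
    probs = F.softmax(logits, dim=1)
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    mean_p = probs.mean(dim=0)
    neg_marginal_ent = (mean_p * mean_p.clamp_min(1e-8).log()).sum()
    return ent + neg_marginal_ent

# Hypothesis transfer: the source classifier stays frozen; only the
# feature extractor is optimized on unlabeled target batches.
# for p in classifier.parameters():
#     p.requires_grad_(False)
# optimizer = torch.optim.SGD(feature_extractor.parameters(), lr=1e-3)
```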
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Domain Adaptation without Source Data [20.64875162351594]
We introduce Source data-Free Domain Adaptation (SFDA) to avoid accessing source data that may contain sensitive information.
Our key idea is to leverage a pre-trained model from the source domain and progressively update the target model in a self-learning manner.
Our PrDA framework outperforms conventional domain adaptation methods on benchmark datasets.
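A minimal sketch of the self-learning loop summarized above: the source-pretrained model pseudo-labels target samples it is confident about and trains on them, progressively updating itself. The threshold and its schedule are assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def self_learning_step(model, optimizer, x_target, conf_threshold):
    """One illustrative round of progressive self-learning on target data."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(x_target), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf.ge(conf_threshold)
    if not keep.any():
        return None                       # nothing confident enough this round
    model.train()
    loss = F.cross_entropy(model(x_target[keep]), pseudo[keep])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# "Progressively": repeat over epochs, e.g. relaxing the threshold so more
# target samples join training as the model adapts (assumed schedule).
```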
arXiv Detail & Related papers (2020-07-03T07:21:30Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and investigates how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)