Against Adversarial Learning: Naturally Distinguish Known and Unknown in
Open Set Domain Adaptation
- URL: http://arxiv.org/abs/2011.02876v1
- Date: Wed, 4 Nov 2020 10:30:43 GMT
- Title: Against Adversarial Learning: Naturally Distinguish Known and Unknown in
Open Set Domain Adaptation
- Authors: Sitong Mao, Xiao Shen, Fu-lai Chung
- Abstract summary: Open set domain adaptation refers to the scenario in which the target domain contains categories that do not exist in the source domain.
We propose an "against adversarial learning" method that can naturally distinguish unknown target data from known data.
Experimental results show that the proposed method achieves significant performance improvements over several state-of-the-art methods.
- Score: 17.819949636876018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Open set domain adaptation refers to the scenario in which the target domain
contains categories that do not exist in the source domain. Compared with the typical
closed set domain adaptation, where the source domain and the target domain contain the
same categories, it is the more common situation in reality. The main difficulty of open
set domain adaptation is that we need to distinguish which target data belong to the
unknown classes, while machine learning models only have concepts of what they already
know. In this paper, we propose an "against adversarial learning" method that can
naturally distinguish unknown target data from known data without setting any additional
hyperparameters, while the target data predicted as known classes are classified at the
same time. Experimental results show that the proposed method achieves significant
performance improvements over several state-of-the-art methods.
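As background for the open-set setting described in the abstract, here is a minimal sketch of the prediction task only: a classifier trained on K known source classes either assigns a target sample to a known class or rejects it as "unknown". The network, feature dimension, and rejection threshold are illustrative assumptions; the paper's "against adversarial learning" method specifically avoids extra hyperparameters such as this threshold.
```python
import torch
import torch.nn as nn

# Illustrative open-set prediction: score each target sample with a classifier
# trained on K known source classes and reject low-confidence samples as
# "unknown". The threshold is an assumed placeholder, not the paper's method.
K = 10                                  # number of known (source) classes
classifier = nn.Sequential(nn.Linear(256, 128), nn.ReLU(), nn.Linear(128, K))

def predict_open_set(features: torch.Tensor, threshold: float = 0.5):
    """Return class indices in [0, K-1] for known predictions, K for unknown."""
    probs = torch.softmax(classifier(features), dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = K          # reject uncertain samples as unknown
    return pred

target_features = torch.randn(32, 256)  # placeholder target-domain features
print(predict_open_set(target_features))
```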
Related papers
- Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
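For intuition only, a minimal sketch of the self-paced idea under an assumed confidence schedule (not the specific framework proposed in this paper): training starts from the most confident target samples and gradually admits harder ones.
```python
import torch

def self_paced_selection(confidences: torch.Tensor, epoch: int, num_epochs: int):
    """Select target samples whose confidence exceeds a threshold that is
    relaxed over training, so easy samples are learned first (illustrative
    schedule, not the paper's exact curriculum)."""
    start, end = 0.9, 0.5                       # assumed threshold schedule
    threshold = start - (start - end) * epoch / max(num_epochs - 1, 1)
    return (confidences >= threshold).nonzero(as_tuple=True)[0]

conf = torch.rand(100)                          # placeholder per-sample confidences
for epoch in range(3):
    idx = self_paced_selection(conf, epoch, num_epochs=3)
    print(f"epoch {epoch}: training on {idx.numel()} selected samples")
```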
arXiv Detail & Related papers (2023-03-10T14:11:09Z)
- Cross-domain Transfer of defect features in technical domains based on partial target data [0.0]
In many technical domains, it is only the defect or worn reject classes that are insufficiently represented.
The proposed classification approach addresses such conditions and is based on a CNN encoder.
It is benchmarked in a technical and a non-technical domain and shows convincing classification results.
arXiv Detail & Related papers (2022-11-24T15:23:58Z)
- Open Set Domain Recognition via Attention-Based GCN and Semantic Matching Optimization [8.831857715361624]
This work presents an end-to-end model based on attention-based GCN and semantic matching optimization.
Experimental results validate that the proposed model is not only superior at recognizing images of known and unknown classes, but also adapts to varying degrees of openness in the target domain.
arXiv Detail & Related papers (2021-05-11T12:05:36Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Open Set Domain Adaptation by Extreme Value Theory [22.826118321715455]
We tackle the open set domain adaptation problem under the assumption that the source and the target label spaces only partially overlap.
We propose an instance-level reweighting strategy for domain adaptation where the weights indicate the likelihood of a sample belonging to known classes.
Experiments on conventional domain adaptation datasets show that the proposed method outperforms the state-of-the-art models.
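A rough sketch of instance-level reweighting under assumed per-sample weights (the paper derives its weights from extreme value theory, which is not reproduced here): each target sample's loss is scaled by its estimated likelihood of belonging to the known classes.
```python
import torch
import torch.nn.functional as F

def weighted_known_class_loss(logits, pseudo_labels, known_probs):
    """Cross-entropy over target samples, scaled per sample by the estimated
    probability of belonging to a known class (illustrative weights only)."""
    per_sample = F.cross_entropy(logits, pseudo_labels, reduction="none")
    return (known_probs * per_sample).mean()

logits = torch.randn(16, 10, requires_grad=True)   # placeholder classifier outputs
pseudo_labels = torch.randint(0, 10, (16,))        # placeholder pseudo-labels
known_probs = torch.rand(16)                       # assumed known-class likelihoods
loss = weighted_known_class_loss(logits, pseudo_labels, known_probs)
loss.backward()
print(loss.item())
```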
arXiv Detail & Related papers (2020-12-22T19:31:32Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency of target data under transformations.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either the source classes or the unknown classes.
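A minimal sketch of the confident-prediction / pseudo-label step under an assumed confidence cutoff (the paper's semantic-consistency objective is not reproduced here):
```python
import torch
import torch.nn.functional as F

def confident_pseudo_labels(logits: torch.Tensor, min_conf: float = 0.95):
    """Keep only predictions whose softmax confidence exceeds an assumed
    cutoff and use them as pseudo-labels for the unlabeled target data."""
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    mask = conf >= min_conf
    return labels[mask], mask

logits = torch.randn(64, 10)                 # placeholder target predictions
labels, mask = confident_pseudo_labels(logits)
print(f"kept {mask.sum().item()} of {mask.numel()} samples as pseudo-labeled")
```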
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
- Exploring Category-Agnostic Clusters for Open-Set Domain Adaptation [138.29273453811945]
We present Self-Ensembling with Category-agnostic Clusters (SE-CC) -- a novel architecture that steers domain adaptation with category-agnostic clusters in the target domain.
Clustering is performed over all the unlabeled target samples to obtain the category-agnostic clusters, which reveal the underlying data-space structure peculiar to the target domain.
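For illustration, category-agnostic clusters can be obtained with any unsupervised algorithm over target features; the use of k-means and the cluster count below are assumptions, not necessarily SE-CC's choices.
```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative category-agnostic clustering over unlabeled target features.
# The feature matrix, cluster count, and k-means itself are assumptions.
target_features = np.random.randn(500, 256)        # placeholder target-domain features
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(target_features)
cluster_ids = kmeans.labels_                        # one cluster id per target sample
print(np.bincount(cluster_ids))                     # cluster sizes
```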
arXiv Detail & Related papers (2020-06-11T16:19:02Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts target accuracy in the new target domain when only few source labels are available.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
- Universal Domain Adaptation through Self Supervision [75.04598763659969]
Unsupervised domain adaptation methods assume that all source categories are present in the target domain.
We propose Domain Adaptative Neighborhood Clustering via Entropy optimization (DANCE) to handle arbitrary category shift.
We show through extensive experiments that DANCE outperforms baselines across open-set, open-partial and partial domain adaptation settings.
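A simplified sketch of entropy-based handling of unknown target samples (an assumed stand-in for DANCE's entropy separation; the margin value and loss form are illustrative, and neighborhood clustering is not reproduced):
```python
import torch
import torch.nn.functional as F

def entropy(logits: torch.Tensor) -> torch.Tensor:
    """Per-sample prediction entropy; low entropy suggests a known class,
    high entropy suggests a private/unknown target class."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1)

def entropy_separation_loss(logits: torch.Tensor, margin: float = 0.5):
    """Push per-sample entropy away from an assumed margin, so confident
    samples become more confident and ambiguous ones drift toward 'unknown'
    (a simplified stand-in for the paper's entropy separation idea)."""
    h = entropy(logits)
    return -torch.abs(h - margin).mean()

logits = torch.randn(32, 10, requires_grad=True)   # placeholder target logits
loss = entropy_separation_loss(logits)
loss.backward()
print(loss.item())
```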
arXiv Detail & Related papers (2020-02-19T01:26:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.