Bridging the Theoretical Bound and Deep Algorithms for Open Set Domain
Adaptation
- URL: http://arxiv.org/abs/2006.13022v1
- Date: Tue, 23 Jun 2020 14:01:06 GMT
- Title: Bridging the Theoretical Bound and Deep Algorithms for Open Set Domain
Adaptation
- Authors: Li Zhong, Zhen Fang, Feng Liu, Bo Yuan, Guangquan Zhang, Jie Lu
- Abstract summary: In unsupervised open set domain adaptation (UOSDA), the target domain contains unknown classes that are not observed in the source domain.
We propose a new upper bound of target-domain risk for UOSDA, which includes four terms: source-domain risk, $\epsilon$-open set difference ($\Delta_\epsilon$), a distributional discrepancy between domains, and a constant.
Specifically, source-domain risk and $Delta_epsilon$ are minimized by gradient descent, and the distributional discrepancy is minimized via a novel open-set conditional adversarial training strategy.
- Score: 40.95099721257058
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In unsupervised open set domain adaptation (UOSDA), the target domain
contains unknown classes that are not observed in the source domain.
Researchers in this area aim to train a classifier that can accurately 1) recognize
unknown target data (data with unknown classes) and 2) classify other target
data. To achieve this aim, a previous study has proven an upper bound of the
target-domain risk, and the open set difference, as an important term in the
upper bound, is used to measure the risk on unknown target data. By minimizing
the upper bound, a shallow classifier can be trained to achieve the aim.
However, if the classifier is very flexible (e.g., deep neural networks
(DNNs)), the open set difference will converge to a negative value when
minimizing the upper bound, which causes an issue where most target data are
recognized as unknown data. To address this issue, we propose a new upper bound
of target-domain risk for UOSDA, which includes four terms: source-domain risk,
$\epsilon$-open set difference ($\Delta_\epsilon$), a distributional
discrepancy between domains, and a constant. Compared to the open set
difference, $\Delta_\epsilon$ is more robust against the issue when it is being
minimized, and thus we are able to use very flexible classifiers (i.e., DNNs).
Then, we propose a new principle-guided deep UOSDA method that trains DNNs via
minimizing the new upper bound. Specifically, source-domain risk and
$\Delta_\epsilon$ are minimized by gradient descent, and the distributional
discrepancy is minimized via a novel open-set conditional adversarial training
strategy. Finally, compared to existing shallow and deep UOSDA methods, our
method shows the state-of-the-art performance on several benchmark datasets,
including digit recognition (MNIST, SVHN, USPS), object recognition (Office-31,
Office-Home), and face recognition (PIE).
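The abstract fixes the structure of the objective (source risk, $\Delta_\epsilon$, a discrepancy term, a constant) but not its exact form. Below is a minimal PyTorch sketch of that structure under explicit assumptions: the clipped unknown-probability gap standing in for $\Delta_\epsilon$, the conditional down-weighting of suspected-unknown target samples in the adversarial term, and all names (`disc` is a small domain-discriminator module) are illustrative, not the paper's definitions.

```python
# Minimal sketch of the four-term objective described in the abstract.
# Assumptions: Delta_eps is approximated by a clipped gap between average
# 'unknown' probabilities, and the adversarial term uses a per-sample
# conditional weighting; neither is the paper's exact construction.
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer for adversarial feature alignment."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad

def uosda_step(feat_s, logits_s, y_s, feat_t, logits_t, disc,
               unknown_idx=-1, eps=0.01, lam=1.0):
    # 1) source-domain risk: cross-entropy on labeled source data
    source_risk = F.cross_entropy(logits_s, y_s)

    # 2) epsilon-open set difference (assumed surrogate): a gap between the
    # average 'unknown' probabilities on the two domains, floored at -eps so
    # a flexible DNN cannot drive it arbitrarily negative.
    p_unk_t = F.softmax(logits_t, dim=1)[:, unknown_idx].mean()
    p_unk_s = F.softmax(logits_s, dim=1)[:, unknown_idx].mean()
    delta_eps = torch.clamp(p_unk_t - p_unk_s, min=-eps)

    # 3) distributional discrepancy via conditional adversarial training:
    # target samples that look unknown are down-weighted so that alignment
    # focuses on the shared (known) classes.
    w_t = 1.0 - F.softmax(logits_t, dim=1)[:, unknown_idx].detach()
    d_s = disc(GradReverse.apply(feat_s))
    d_t = disc(GradReverse.apply(feat_t))
    adv = F.binary_cross_entropy_with_logits(d_s, torch.ones_like(d_s)) \
        + F.binary_cross_entropy_with_logits(d_t, torch.zeros_like(d_t),
                                             weight=w_t.unsqueeze(1))

    # 4) the bound's constant carries no gradient, so it is omitted here.
    return source_risk + delta_eps + lam * adv
```

Minimizing the unclipped gap is exactly the failure mode described above; the floor at $-\epsilon$ is what lets a very flexible classifier be trained safely.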
Related papers
- Open-Set Domain Adaptation for Semantic Segmentation [6.3951361316638815]
We introduce Open-Set Domain Adaptation for Semantic Segmentation (OSDA-SS) for the first time, where the target domain includes unknown classes.
To address the challenges this setting raises, we propose Boundary and Unknown Shape-Aware open-set domain adaptation, coined BUS.
Our BUS can accurately discern the boundaries between known and unknown classes in a contrastive manner using a novel dilation-erosion-based contrastive loss.
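The dilation-erosion idea admits a short illustration: on a binary class mask, dilation minus erosion yields the boundary band where such a contrastive loss would operate. The helper below, using max-pooling as morphology, is a hypothetical sketch rather than the authors' implementation.

```python
# Boundary band of a binary mask via morphological dilation minus erosion,
# both realized with max-pooling; illustrative only, not the BUS loss itself.
import torch
import torch.nn.functional as F

def boundary_band(mask: torch.Tensor, k: int = 3) -> torch.Tensor:
    """mask: (B, 1, H, W) binary float mask of one known class."""
    pad = k // 2
    dilated = F.max_pool2d(mask, k, stride=1, padding=pad)    # dilation
    eroded = -F.max_pool2d(-mask, k, stride=1, padding=pad)   # erosion
    return dilated - eroded  # 1 on the band straddling the class boundary
```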
arXiv Detail & Related papers (2024-05-30T09:55:19Z)
- Activate and Reject: Towards Safe Domain Generalization under Category Shift [71.95548187205736]
We study the practical problem of Domain Generalization under Category Shift (DGCS).
It aims to simultaneously detect unknown-class samples and classify known-class samples in the target domains.
Compared to prior DG works, we face two new challenges: 1) how to learn the concept of "unknown" during training with only source known-class samples, and 2) how to adapt the source-trained model to unseen environments.
arXiv Detail & Related papers (2023-10-07T07:53:12Z)
- Upcycling Models under Domain and Category Shift [95.22147885947732]
We introduce an innovative global and local clustering learning technique (GLC).
We design a novel, adaptive one-vs-all global clustering algorithm to achieve the distinction across different target classes.
Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark.
arXiv Detail & Related papers (2023-03-13T13:44:04Z)
- IT-RUDA: Information Theory Assisted Robust Unsupervised Domain Adaptation [7.225445443960775]
Distribution shift between train (source) and test (target) datasets is a common problem encountered in machine learning applications.
The UDA technique carries out knowledge transfer from a label-rich source domain to an unlabeled target domain.
Outliers that exist in either source or target datasets can introduce additional challenges when using UDA in practice.
arXiv Detail & Related papers (2022-10-24T04:33:52Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Certainty Volume Prediction for Unsupervised Domain Adaptation [35.984559137218504]
Unsupervised domain adaptation (UDA) deals with the problem of classifying unlabeled target domain data.
We propose a novel uncertainty-aware domain adaptation setup that models uncertainty as a multivariate Gaussian distribution in feature space.
We evaluate our proposed pipeline on challenging UDA datasets and achieve state-of-the-art results.
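One minimal way to realize uncertainty as a multivariate Gaussian in feature space is a head that predicts a per-sample mean and diagonal covariance and samples features by reparameterization; the diagonal simplification and all names below are assumptions, not the paper's exact pipeline.

```python
# Feature-space uncertainty as a per-sample Gaussian; an assumed simplification.
import torch
import torch.nn as nn

class GaussianFeatureHead(nn.Module):
    def __init__(self, in_dim: int, feat_dim: int):
        super().__init__()
        self.mu = nn.Linear(in_dim, feat_dim)       # predicted feature mean
        self.log_var = nn.Linear(in_dim, feat_dim)  # predicted log-variance

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mu, log_var = self.mu(x), self.log_var(x)
        # reparameterization: sample from N(mu, diag(exp(log_var)))
        return mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
```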
arXiv Detail & Related papers (2021-11-03T11:22:55Z)
- Conditional Extreme Value Theory for Open Set Video Domain Adaptation [17.474956295874797]
We propose an open-set video domain adaptation approach to mitigate the domain discrepancy between the source and target data.
To alleviate the negative transfer issue, weights computed by the distance from the sample entropy to the threshold are leveraged in adversarial learning.
The proposed method has been thoroughly evaluated on both small-scale and large-scale cross-domain video datasets.
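A plausible reading of that weighting scheme: samples whose predictive entropy lies far from the known/unknown threshold get larger weights in the adversarial loss, since the known-vs-unknown decision is more confident there. The tanh mapping below is an assumed choice for illustration only.

```python
# Per-sample adversarial weights from the distance between predictive entropy
# and a known/unknown threshold; the exact mapping is an assumption.
import torch
import torch.nn.functional as F

def entropy_weights(logits: torch.Tensor, threshold: float) -> torch.Tensor:
    p = F.softmax(logits, dim=1)
    entropy = -(p * torch.log(p + 1e-8)).sum(dim=1)  # per-sample entropy
    # far from the threshold = confident known/unknown call = higher weight
    return torch.tanh((entropy - threshold).abs())
```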
arXiv Detail & Related papers (2021-09-01T10:51:50Z)
- OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that a minimum inter-class distance in the source domain should be a good threshold to decide between known or unknown in the target.
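That idea can be illustrated directly: take source class centroids, use the minimum pairwise centroid distance as the threshold, and flag target features farther than that from every centroid as unknown. This is a simplified sketch, not OVANet's actual one-vs-all classifier construction.

```python
# Known/unknown threshold from the minimum inter-class distance on the source
# domain; a simplified illustration of the stated idea.
import torch

def min_interclass_threshold(feats: torch.Tensor, labels: torch.Tensor):
    """feats: (N, D) source features; labels: (N,) integer class ids."""
    classes = labels.unique()
    centroids = torch.stack([feats[labels == c].mean(dim=0) for c in classes])
    dists = torch.cdist(centroids, centroids)  # pairwise centroid distances
    dists.fill_diagonal_(float("inf"))         # drop zero self-distances
    return centroids, dists.min()              # centroids and the threshold

# usage: a target feature whose distance to every centroid exceeds the
# returned threshold would be labeled 'unknown'.
```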
arXiv Detail & Related papers (2021-04-07T18:36:31Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.