OVANet: One-vs-All Network for Universal Domain Adaptation
- URL: http://arxiv.org/abs/2104.03344v1
- Date: Wed, 7 Apr 2021 18:36:31 GMT
- Title: OVANet: One-vs-All Network for Universal Domain Adaptation
- Authors: Kuniaki Saito and Kate Saenko
- Abstract summary: Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that a minimum inter-class distance in the source domain should be a good threshold to decide between known or unknown in the target.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Universal Domain Adaptation (UNDA) aims to handle both domain-shift and
category-shift between two datasets, where the main challenge is to transfer
knowledge while rejecting unknown classes which are absent in the labeled
source data but present in the unlabeled target data. Existing methods manually
set a threshold to reject unknown samples based on validation or a pre-defined
ratio of unknown samples, but this strategy is not practical. In this paper, we
propose a method to learn the threshold using source samples and to adapt it to
the target domain. Our idea is that a minimum inter-class distance in the
source domain should be a good threshold to decide between known or unknown in
the target. To learn the inter- and intra-class distance, we propose to train a
one-vs-all classifier for each class using labeled source data. Then, we adapt
the open-set classifier to the target domain by minimizing class entropy. The
resulting framework is the simplest among UNDA baselines and is insensitive
to the value of a hyper-parameter, yet outperforms baselines by a large
margin.
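The abstract's two-step recipe (train a one-vs-all head per class on labeled source data, then adapt by minimizing the entropy of the nearest head's binary output on target data) can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the use of a 0.5 decision boundary on the one-vs-all probability, and the plain-Python representation of per-class probabilities are all assumptions for illustration.

```python
import math

def binary_entropy(p):
    """Entropy of a Bernoulli(p) distribution (natural log)."""
    eps = 1e-12  # guard against log(0)
    return -(p * math.log(p + eps) + (1 - p) * math.log(1 - p + eps))

def predict_open_set(ova_probs):
    """ova_probs: per-class 'known' probabilities from K one-vs-all heads.
    Pick the best-matching class; if even that head rejects the sample
    (p <= 0.5), declare it unknown (returned as -1)."""
    best = max(range(len(ova_probs)), key=lambda k: ova_probs[k])
    return best if ova_probs[best] > 0.5 else -1

def open_set_entropy_loss(ova_probs):
    """Entropy of the nearest head's binary known/unknown distribution;
    minimizing this on target samples pushes each one toward a confident
    known-or-unknown decision."""
    return binary_entropy(max(ova_probs))
```

For example, a target sample whose best one-vs-all head outputs 0.9 is assigned that class, while one whose best head outputs only 0.4 is rejected as unknown; the entropy loss is small in the first case and large in the second, so gradient descent on it sharpens the decision.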
Related papers
- Upcycling Models under Domain and Category Shift [95.22147885947732]
We introduce an innovative global and local clustering learning technique (GLC).
We design a novel, adaptive one-vs-all global clustering algorithm to achieve the distinction across different target classes.
Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark.
arXiv Detail & Related papers (2023-03-13T13:44:04Z)
- Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
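The MMD loss mentioned above measures the distance between two feature distributions. With a linear kernel, squared MMD reduces to the squared Euclidean distance between the two sets' feature means, which makes for a compact sketch. The memory-bank machinery of DaC is omitted; this is only the basic estimator, with hypothetical function and argument names.

```python
def mmd_squared_linear(xs, xt):
    """Biased squared-MMD estimate with a linear kernel:
    ||mean(xs) - mean(xt)||^2, where xs and xt are lists of
    equal-dimension feature vectors from the two domains."""
    dim = len(xs[0])
    mean_s = [sum(x[i] for x in xs) / len(xs) for i in range(dim)]
    mean_t = [sum(x[i] for x in xt) / len(xt) for i in range(dim)]
    # Squared distance between the domain means.
    return sum((ms - mt) ** 2 for ms, mt in zip(mean_s, mean_t))
```

When the two sets share the same mean the estimate is zero, so minimizing it pulls the source-like and target-specific feature distributions together.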
arXiv Detail & Related papers (2022-11-12T09:21:49Z) - Conditional Extreme Value Theory for Open Set Video Domain Adaptation [17.474956295874797]
We propose an open-set video domain adaptation approach to mitigate the domain discrepancy between the source and target data.
To alleviate negative transfer, samples are weighted in adversarial learning by the distance between their prediction entropy and the threshold.
The proposed method has been thoroughly evaluated on both small-scale and large-scale cross-domain video datasets.
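The entropy-distance weighting described above can be sketched as follows. The exact weighting function of the paper is not given in this summary, so the absolute-distance form, the function names, and the probability-vector input are assumptions for illustration only.

```python
import math

def entropy(probs):
    """Shannon entropy (natural log) of a discrete distribution."""
    eps = 1e-12  # guard against log(0)
    return -sum(p * math.log(p + eps) for p in probs)

def transfer_weight(probs, threshold):
    """Hypothetical sample weight for adversarial alignment: samples whose
    prediction entropy sits far from the known/unknown threshold (i.e.
    confidently known or confidently unknown) receive larger weights,
    while ambiguous samples near the threshold are down-weighted."""
    return abs(entropy(probs) - threshold)
```

A confidently classified sample (entropy near 0) gets a large weight, while an ambiguous one whose entropy sits right at the threshold is effectively ignored, which is one way to suppress negative transfer from uncertain samples.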
arXiv Detail & Related papers (2021-09-01T10:51:50Z) - Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain
Adaptation [22.852237073492894]
Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain.
We propose a cross-domain gradient discrepancy minimization (CGDM) method which explicitly minimizes the discrepancy between the gradients generated by source samples and target samples.
To compute the gradient signal of target samples, we further obtain target pseudo-labels through clustering-based self-supervised learning.
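A common way to realize the clustering-based pseudo-labeling mentioned above is nearest-centroid assignment: each target feature is labeled with the class whose (source-derived) centroid is closest. This is a generic sketch, not the paper's exact procedure; the function names and the use of squared Euclidean distance are assumptions.

```python
def pseudo_labels(target_feats, centroids):
    """Assign each target feature vector to the nearest class centroid
    (squared Euclidean distance). Returns one class index per sample."""
    labels = []
    for x in target_feats:
        dists = [sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                 for c in centroids]
        labels.append(min(range(len(dists)), key=lambda k: dists[k]))
    return labels
```

The resulting labels supply a (noisy) supervised signal on the target domain, from which target-side gradients can be computed and aligned with source-side gradients.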
arXiv Detail & Related papers (2021-06-08T07:35:40Z) - On Universal Black-Box Domain Adaptation [53.7611757926922]
We study an arguably least restrictive setting of domain adaptation in the sense of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify them into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
arXiv Detail & Related papers (2021-04-10T02:21:09Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across the source and target domains, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Divergence Optimization for Noisy Universal Domain Adaptation [32.05829135903389]
Universal domain adaptation (UniDA) has been proposed to transfer knowledge learned from a label-rich source domain to a label-scarce target domain.
This paper introduces a two-head convolutional neural network framework to solve all problems simultaneously.
arXiv Detail & Related papers (2021-04-01T04:16:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.