Exploiting Inter-Sample Affinity for Knowability-Aware Universal Domain
Adaptation
- URL: http://arxiv.org/abs/2207.09280v5
- Date: Tue, 22 Aug 2023 15:46:12 GMT
- Title: Exploiting Inter-Sample Affinity for Knowability-Aware Universal Domain
Adaptation
- Authors: Yifan Wang and Lin Zhang and Ran Song and Hongliang Li and Paul L.
Rosin and Wei Zhang
- Abstract summary: Universal domain adaptation (UniDA) aims to transfer the knowledge of common classes from the source domain to the target domain without any prior knowledge of the label set.
Recent methods usually focus on categorizing a target sample into one of the source classes rather than distinguishing known and unknown samples.
We propose a novel UniDA framework that exploits the inter-sample affinity between known and unknown samples.
- Score: 34.5943374866644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Universal domain adaptation (UniDA) aims to transfer the knowledge of common
classes from the source domain to the target domain without any prior knowledge
of the label set, which requires distinguishing unknown target samples from
known ones. Recent methods usually focus on categorizing a target sample into
one of the source classes rather than on separating known from unknown samples,
which ignores the inter-sample affinity between known and unknown samples and
may lead to suboptimal performance. To address this issue, we propose a novel
UniDA framework that exploits such inter-sample affinity. Specifically, we
introduce a knowability-based labeling scheme consisting of two steps: 1)
Knowability-guided detection of known and unknown samples based on the
intrinsic structure of each sample's neighborhood, where we leverage the first
singular vectors of the neighborhood affinity matrices to obtain the
knowability of every target sample. 2) Label refinement based on neighborhood
consistency, where we relabel each target sample according to the consistency
of the predictions in its neighborhood. Auxiliary losses based on these two
steps are then used to reduce the inter-sample affinity between the unknown and
the known target samples. Finally, experiments on four public datasets
demonstrate that our method significantly outperforms existing
state-of-the-art methods.
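Since the abstract only names the two steps of the labeling scheme, a minimal
NumPy sketch of one plausible reading is given below. The neighborhood size k,
the cosine affinity, the 0.5 consistency threshold, and all function names are
illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the two-step knowability-based labeling scheme.
import numpy as np

def knowability_scores(feats: np.ndarray, k: int = 10) -> np.ndarray:
    """Step 1: score each target sample by its entry in the first singular
    vector of its neighborhood affinity matrix (assumed reading)."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T  # cosine affinity between all target samples
    n = feats.shape[0]
    scores = np.empty(n)
    for i in range(n):
        nbrs = np.argsort(-sim[i])[1:k + 1]  # k nearest neighbors (skip self)
        idx = np.concatenate(([i], nbrs))
        affinity = sim[np.ix_(idx, idx)]     # neighborhood affinity matrix
        u, _, _ = np.linalg.svd(affinity)
        # A large entry for the center sample in the first singular vector
        # indicates a coherent, hence "knowable", neighborhood.
        scores[i] = np.abs(u[0, 0])
    return scores

def refine_labels(pseudo: np.ndarray, feats: np.ndarray, k: int = 10) -> np.ndarray:
    """Step 2: relabel each sample by the majority prediction of its
    neighborhood; inconsistent neighborhoods are marked unknown (-1).
    `pseudo` holds non-negative class indices from the source classifier."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T
    refined = pseudo.copy()
    for i in range(len(pseudo)):
        nbrs = np.argsort(-sim[i])[1:k + 1]
        votes = np.bincount(pseudo[nbrs])
        if votes.max() / k >= 0.5:  # neighborhood agrees often enough
            refined[i] = votes.argmax()
        else:
            refined[i] = -1         # treat as unknown
    return refined
```

Samples flagged as unknown here would then enter the auxiliary losses that
reduce their affinity to known target samples.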
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions between downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z) - High-order Neighborhoods Know More: HyperGraph Learning Meets Source-free Unsupervised Domain Adaptation [34.08681468394247]
Source-free Unsupervised Domain Adaptation (SFDA) aims to classify target samples with access only to a pre-trained source model and unlabelled target samples.
Existing methods normally exploit pairwise relations among target samples and attempt to discover their correlations by clustering these samples based on semantic features.
We propose a new SFDA method that exploits the high-order neighborhood relation and explicitly takes the domain shift effect into account.
arXiv Detail & Related papers (2024-05-11T05:07:43Z) - Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation [22.474866164542302]
UDA approaches commonly assume that the source and target domains share the same label space.
This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting.
We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.
arXiv Detail & Related papers (2024-04-16T13:52:00Z) - Probabilistic Test-Time Generalization by Variational Neighbor-Labeling [62.158807685159736]
This paper strives for domain generalization, where models are trained exclusively on source domains before being deployed on unseen target domains.
It probabilistically pseudo-labels target samples to generalize the source-trained model to the target domain at test time.
Variational neighbor labels incorporate the information of neighboring target samples to generate more robust pseudo labels.
arXiv Detail & Related papers (2023-07-08T18:58:08Z) - Ambiguity-Resistant Semi-Supervised Learning for Dense Object Detection [98.66771688028426]
We propose an Ambiguity-Resistant Semi-supervised Learning (ARSL) method for one-stage detectors.
Joint-Confidence Estimation (JCE) is proposed to quantify the classification and localization quality of pseudo labels.
ARSL effectively mitigates the ambiguities and achieves state-of-the-art SSOD performance on MS COCO and PASCAL VOC.
arXiv Detail & Related papers (2023-03-27T07:46:58Z) - Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z) - Learning Classifiers of Prototypes and Reciprocal Points for Universal
Domain Adaptation [79.62038105814658]
Universal Domain Adaptation aims to transfer knowledge between datasets by handling two shifts: domain shift and category shift.
The main challenge is correctly distinguishing unknown target samples while adapting the distribution of known-class knowledge from source to target.
Most existing methods approach this problem by first training a target-adapted model on the known classes and then relying on a single threshold to distinguish unknown target samples.
arXiv Detail & Related papers (2022-12-16T09:01:57Z) - Provably Uncertainty-Guided Universal Domain Adaptation [34.76381510773768]
Universal domain adaptation (UniDA) aims to transfer the knowledge from a labeled source domain to an unlabeled target domain.
A main challenge of UniDA is that the non-identical label sets cause misalignment between the two domains.
We propose a new uncertainty-guided UniDA framework, which exploits the distribution of the target samples in the latent space.
arXiv Detail & Related papers (2022-09-19T09:16:07Z) - Cross-Domain Gradient Discrepancy Minimization for Unsupervised Domain
Adaptation [22.852237073492894]
Unsupervised Domain Adaptation (UDA) aims to generalize the knowledge learned from a well-labeled source domain to an unlabeled target domain.
We propose a cross-domain gradient discrepancy minimization (CGDM) method which explicitly minimizes the discrepancy between the gradients generated by source samples and target samples (a rough sketch appears after this list).
To compute the gradient signal of target samples, we further obtain target pseudo labels through clustering-based self-supervised learning.
arXiv Detail & Related papers (2021-06-08T07:35:40Z) - OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that the minimum inter-class distance in the source domain should be a good threshold for deciding between known and unknown in the target (see the sketch after this list).
arXiv Detail & Related papers (2021-04-07T18:36:31Z)
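For the CGDM entry above, the hedged PyTorch fragment below shows one way to
express a cross-domain gradient discrepancy as a minimizable loss. The cosine
form and the function name are assumptions, not the paper's exact objective.

```python
# Hypothetical gradient-discrepancy loss in the spirit of CGDM.
import torch
import torch.nn.functional as F

def gradient_discrepancy(model, x_src, y_src, x_tgt, y_tgt_pseudo):
    """Cosine discrepancy between the gradients induced by source samples and
    by pseudo-labeled target samples; differentiable, so it can be minimized."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss_s = F.cross_entropy(model(x_src), y_src)
    loss_t = F.cross_entropy(model(x_tgt), y_tgt_pseudo)
    # create_graph=True keeps the gradients differentiable for backprop.
    g_s = torch.autograd.grad(loss_s, params, create_graph=True)
    g_t = torch.autograd.grad(loss_t, params, create_graph=True)
    g_s = torch.cat([g.flatten() for g in g_s])
    g_t = torch.cat([g.flatten() for g in g_t])
    return 1.0 - F.cosine_similarity(g_s, g_t, dim=0)
```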
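For the OVANet entry, the sketch below illustrates the stated intuition that
the minimum inter-class distance in the source domain makes a reasonable
known-vs-unknown threshold. OVANet itself learns the threshold through
one-vs-all classifiers; the prototype-distance version here is only a rough
stand-in for that idea, and both function names are assumptions.

```python
# Hypothetical prototype-based reading of the OVANet threshold intuition.
import numpy as np

def source_threshold_and_prototypes(src_feats, src_labels):
    """Threshold = smallest distance between any two source class
    prototypes (class-mean features)."""
    classes = np.unique(src_labels)
    protos = np.stack([src_feats[src_labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(protos[:, None] - protos[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # ignore zero self-distances
    return dists.min(), protos

def classify_target(tgt_feats, protos, threshold):
    """Label a target sample unknown (-1) when even its nearest source
    prototype is farther away than the minimum inter-class distance."""
    dists = np.linalg.norm(tgt_feats[:, None] - protos[None, :], axis=-1)
    nearest = dists.argmin(axis=1)
    return np.where(dists.min(axis=1) <= threshold, nearest, -1)
```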