Unknown Sample Discovery for Source Free Open Set Domain Adaptation
- URL: http://arxiv.org/abs/2312.03767v1
- Date: Tue, 5 Dec 2023 20:07:51 GMT
- Title: Unknown Sample Discovery for Source Free Open Set Domain Adaptation
- Authors: Chowdhury Sadman Jahan and Andreas Savakis
- Abstract summary: Open Set Domain Adaptation (OSDA) aims to adapt a model trained on a source domain to a target domain that undergoes distribution shift.
We introduce Unknown Sample Discovery (USD) as an SF-OSDA method that utilizes a temporally ensembled teacher model to conduct known-unknown target sample separation.
- Score: 1.8130068086063336
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Open Set Domain Adaptation (OSDA) aims to adapt a model trained on a source
domain to a target domain that undergoes distribution shift and contains
samples from novel classes outside the source domain. Source-free OSDA
(SF-OSDA) techniques eliminate the need to access source domain samples, but
current SF-OSDA methods utilize only the known classes in the target domain for
adaptation, and require access to the entire target domain even during
inference after adaptation, to make the distinction between known and unknown
samples. In this paper, we introduce Unknown Sample Discovery (USD) as an
SF-OSDA method that utilizes a temporally ensembled teacher model to conduct
known-unknown target sample separation and adapts the student model to the
target domain over all classes using co-training and temporal consistency
between the teacher and the student. USD promotes Jensen-Shannon distance (JSD)
as an effective measure for known-unknown sample separation. Our
teacher-student framework significantly reduces error accumulation resulting
from imperfect known-unknown sample separation, while curriculum guidance helps
to reliably learn the distinction between target known and target unknown
subspaces. USD appends the target model with an unknown class node, thus
readily classifying a target sample into any of the known or unknown classes in
subsequent post-adaptation inference stages. Empirical results show that USD is
superior to existing SF-OSDA methods and is competitive with current OSDA
models that utilize both source and target domains during adaptation.
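The abstract describes two mechanisms that lend themselves to a small illustration: a temporally ensembled (EMA) teacher whose predictions are used to separate known from unknown target samples via the Jensen-Shannon distance (JSD), and a target model extended with an unknown class node. The sketch below is illustrative only and is not the authors' implementation; in particular, comparing the teacher's softmax output against a uniform distribution, the fixed threshold, and the EMA momentum are assumptions.
```python
# Illustrative sketch only -- not the USD authors' code. Assumes PyTorch;
# the JSD reference distribution, threshold, and EMA momentum are guesses.
import torch
import torch.nn.functional as F

def jensen_shannon_distance(p, q, eps=1e-8):
    """JS distance between two batches of probability vectors (rows sum to 1)."""
    m = 0.5 * (p + q)
    kl_pm = (p * ((p + eps) / (m + eps)).log()).sum(dim=1)
    kl_qm = (q * ((q + eps) / (m + eps)).log()).sum(dim=1)
    jsd = 0.5 * (kl_pm + kl_qm)        # Jensen-Shannon divergence
    return jsd.clamp(min=0).sqrt()     # distance = sqrt(divergence)

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Temporal ensembling: teacher weights track the student via EMA."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

@torch.no_grad()
def split_known_unknown(teacher, x_target, threshold=0.5):
    """Separate target samples by the JSD between the teacher's softmax
    prediction and a uniform distribution: confident predictions (far from
    uniform) are treated as known, near-uniform ones as unknown."""
    probs = F.softmax(teacher(x_target), dim=1)   # teacher returns logits
    uniform = torch.full_like(probs, 1.0 / probs.size(1))
    jsd = jensen_shannon_distance(probs, uniform)
    known_mask = jsd > threshold                  # assumed fixed threshold
    return known_mask

# Usage (illustrative): initialize the teacher as a copy of the student,
# call ema_update(teacher, student) after each student step, and use
# split_known_unknown(...) to route samples flagged as unknown to the
# appended unknown class node when forming pseudo-labels.
```
In the full method the student is adapted over all classes with co-training, temporal consistency, and curriculum guidance between teacher and student; those components are not reproduced in this sketch.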
Related papers
- Recall and Refine: A Simple but Effective Source-free Open-set Domain Adaptation Framework [9.03028904066824]
Open-set Domain Adaptation (OSDA) aims to adapt a model from a labeled source domain to an unlabeled target domain.
We propose Recall and Refine (RRDA), a novel SF-OSDA framework designed to address limitations by explicitly learning features for target-private unknown classes.
arXiv Detail & Related papers (2024-11-19T15:18:50Z)
- Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation [22.474866164542302]
UDA approaches commonly assume that source and target domains share the same label space.
This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting.
We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.
arXiv Detail & Related papers (2024-04-16T13:52:00Z)
- Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch (a rough sketch of such a loss appears after this list).
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
The Universal Model ADaptation (UMAD) framework handles both open-set and open-partial-set UDA scenarios without access to source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
arXiv Detail & Related papers (2021-12-16T01:22:59Z)
- OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that the minimum inter-class distance in the source domain should be a good threshold for deciding between known and unknown samples in the target domain.
arXiv Detail & Related papers (2021-04-07T18:36:31Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency of target data under transformation.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
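The memory bank-based Maximum Mean Discrepancy (MMD) loss mentioned in the Divide and Contrast entry above can be sketched roughly as follows. This is a generic sketch rather than the DaC implementation: the RBF kernel, bandwidth, FIFO feature queue, and all names are assumptions.
```python
# Rough sketch of a memory bank-based MMD loss (assumed RBF kernel and a
# simple FIFO feature queue); not the DaC authors' implementation.
import torch

class FeatureMemoryBank:
    """FIFO queue holding recent source-like feature vectors."""
    def __init__(self, size, dim):
        self.bank = torch.randn(size, dim)
        self.ptr = 0

    @torch.no_grad()
    def update(self, feats):
        n = feats.size(0)
        idx = torch.arange(self.ptr, self.ptr + n) % self.bank.size(0)
        self.bank[idx] = feats.detach()
        self.ptr = (self.ptr + n) % self.bank.size(0)

def rbf_mmd(x, y, sigma=1.0):
    """Biased estimate of squared MMD with an RBF kernel between two batches."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Usage (illustrative): keep the bank filled with source-like features and
# penalize the discrepancy between target-specific features and the bank.
# bank = FeatureMemoryBank(size=1024, dim=256)
# bank.update(source_like_feats)
# loss_mmd = rbf_mmd(target_specific_feats, bank.bank)
```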