Prior Knowledge Guided Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2207.08877v1
- Date: Mon, 18 Jul 2022 18:41:36 GMT
- Title: Prior Knowledge Guided Unsupervised Domain Adaptation
- Authors: Tao Sun, Cheng Lu, Haibin Ling
- Abstract summary: We propose a Knowledge-guided Unsupervised Domain Adaptation (KUDA) setting where prior knowledge about the target class distribution is available.
In particular, we consider two specific types of prior knowledge about the class distribution in the target domain: Unary Bound and Binary Relationship.
We propose a rectification module that uses such prior knowledge to refine model-generated pseudo labels.
- Score: 82.9977759320565
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The absence of labels in the target domain makes Unsupervised Domain Adaptation
(UDA) an attractive technique in many real-world applications, though it also
brings great challenges as model adaptation becomes harder without labeled
target data. In this paper, we address this issue by seeking compensation from
target domain prior knowledge, which is often (partially) available in
practice, e.g., from human expertise. This leads to a novel yet practical
setting where, in addition to the training data, some prior knowledge about the
target class distribution is available. We term this setting
Knowledge-guided Unsupervised Domain Adaptation (KUDA). In particular, we
consider two specific types of prior knowledge about the class distribution in
the target domain: Unary Bound that describes the lower and upper bounds of
individual class probabilities, and Binary Relationship that describes the
relations between two class probabilities. We propose a general rectification
module that uses such prior knowledge to refine model-generated pseudo labels.
The module is formulated as a Zero-One Programming problem derived from the
prior knowledge and a smooth regularizer. It can be easily plugged into
self-training based UDA methods, and we combine it with two state-of-the-art
methods, SHOT and DINE. Empirical results on four benchmarks confirm that the
rectification module clearly improves the quality of pseudo labels, which in
turn benefits the self-training stage. With the guidance from prior knowledge,
the performances of both methods are substantially boosted. We expect our work
to inspire further investigations in integrating prior knowledge in UDA. Code
is available at https://github.com/tsun/KUDA.
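To make the rectification module concrete, below is a minimal sketch (the function name and inputs are illustrative assumptions, not the authors' released code; see the repository above for that). It refines pseudo labels under a Unary Bound prior by solving the LP relaxation of the underlying zero-one program with SciPy; the paper's full formulation additionally includes a smooth regularizer, which is omitted here.

```python
import numpy as np
from scipy.optimize import linprog

def rectify_pseudo_labels(probs, lb, ub):
    """Refine pseudo labels so class frequencies respect a Unary Bound prior.

    probs: (n, K) array of softmax outputs from the adapting model.
    lb, ub: (K,) lower/upper bounds on target class probabilities.
    Solves the LP relaxation of the underlying zero-one program; with these
    transportation-style constraints the solution is typically integral.
    """
    n, K = probs.shape
    c = -probs.reshape(-1)          # maximize total assigned confidence

    # Each sample receives exactly one label: sum_k x[i, k] == 1.
    A_eq = np.zeros((n, n * K))
    for i in range(n):
        A_eq[i, i * K:(i + 1) * K] = 1.0
    b_eq = np.ones(n)

    # Unary Bound on class counts: floor(lb_k * n) <= sum_i x[i, k] <= ceil(ub_k * n).
    A_col = np.zeros((K, n * K))
    for k in range(K):
        A_col[k, k::K] = 1.0        # variables are row-major: x[i, k] at i*K + k
    A_ub = np.vstack([A_col, -A_col])
    b_ub = np.concatenate([np.ceil(np.asarray(ub) * n),
                           -np.floor(np.asarray(lb) * n)])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0.0, 1.0), method="highs")
    return res.x.reshape(n, K).argmax(axis=1)   # rectified hard pseudo labels
```

A Binary Relationship prior, e.g., class a being at least as frequent as class b, would enter as one additional inequality row over the difference of the two corresponding column sums.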
Related papers
- Open-Set Domain Adaptation for Semantic Segmentation [6.3951361316638815]
We introduce Open-Set Domain Adaptation for Semantic Segmentation (OSDA-SS) for the first time, where the target domain includes unknown classes.
To address these issues, we propose Boundary and Unknown Shape-Aware open-set domain adaptation, coined BUS.
Our BUS can accurately discern the boundaries between known and unknown classes in a contrastive manner using a novel dilation-erosion-based contrastive loss.
arXiv Detail & Related papers (2024-05-30T09:55:19Z)
- Test-Time Domain Adaptation by Learning Domain-Aware Batch Normalization [39.14048972373775]
Test-time domain adaptation aims to adapt the model trained on source domains to unseen target domains using a few unlabeled images.
Previous works normally update the whole network naively, without explicitly decoupling label knowledge from domain knowledge.
We propose to reduce such learning interference and strengthen domain knowledge learning by manipulating only the BN layers (sketched below).
arXiv Detail & Related papers (2023-12-15T19:22:21Z)
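As a rough illustration of the BN-only idea in the entry above, here is a minimal PyTorch sketch of the generic technique (the helper name and the suggested objective are assumptions, not taken from the paper):

```python
import torch
import torch.nn as nn

def collect_bn_params(model: nn.Module):
    """Minimal sketch of BN-only test-time adaptation (generic technique,
    not this paper's exact algorithm): re-estimate BN statistics on target
    batches and let only the BN affine parameters receive gradients.
    """
    model.eval()                      # keep dropout etc. in inference mode
    model.requires_grad_(False)       # freeze everything first
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.train()                 # use current-batch statistics
            if m.affine:
                m.weight.requires_grad_(True)
                m.bias.requires_grad_(True)
                params += [m.weight, m.bias]
    return params

# Usage sketch: adapt with an unsupervised objective such as entropy
# minimization on target batches.
# optimizer = torch.optim.SGD(collect_bn_params(model), lr=1e-3, momentum=0.9)
```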
- Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis [96.53859361560505]
Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards an aspect.
A severe domain shift always exists between the pretraining and downstream ABSA datasets.
We introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline.
arXiv Detail & Related papers (2021-10-26T04:03:45Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains (see the sketch below).
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
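For intuition, a minimal PyTorch sketch of aligning category-wise centroids across domains in a contrastive, InfoNCE-style manner (an assumed formulation; the paper's actual loss and temporal-ensemble pseudo-labeling are not reproduced here):

```python
import torch
import torch.nn.functional as F

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo,
                            num_classes, tau=0.1):
    """Sketch of category-wise centroid alignment across domains: each
    target class centroid is pulled toward the same-class source centroid
    and pushed from other classes' centroids, InfoNCE-style.
    """
    def centroids(feats, labels):
        onehot = F.one_hot(labels, num_classes).float()   # (n, K)
        counts = onehot.sum(0).clamp(min=1).unsqueeze(1)  # (K, 1)
        # Classes absent from the batch yield zero centroids; in practice
        # a running average across batches is typical.
        return (onehot.t() @ feats) / counts              # (K, d)

    c_src = F.normalize(centroids(src_feats, src_labels), dim=1)
    c_tgt = F.normalize(centroids(tgt_feats, tgt_pseudo), dim=1)

    logits = c_tgt @ c_src.t() / tau                      # (K, K) similarities
    targets = torch.arange(num_classes, device=logits.device)  # class k matches k
    return F.cross_entropy(logits, targets)
```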
- On Universal Black-Box Domain Adaptation [53.7611757926922]
We study what is arguably the least restrictive setting of domain adaptation from the standpoint of practical deployment: only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify these scenarios in a self-training framework, regularized by the consistency of predictions in local neighborhoods of target samples (see the sketch below).
arXiv Detail & Related papers (2021-04-10T02:21:09Z)
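A minimal sketch of the neighborhood-consistency idea from the entry above (an assumed KL-based form; the paper's exact regularizer may differ):

```python
import torch
import torch.nn.functional as F

def neighborhood_consistency_loss(feats, probs, k=5):
    """Sketch of local-neighborhood consistency regularization: each target
    sample's prediction is pulled toward the mean prediction of its k
    nearest neighbors in feature space.
    """
    f = F.normalize(feats, dim=1)
    sim = f @ f.t()                                  # cosine similarities
    sim.fill_diagonal_(-1.0)                         # exclude self-matches
    _, idx = sim.topk(k, dim=1)                      # (n, k) neighbor indices
    neighbor_mean = probs[idx].mean(dim=1).detach()  # (n, K) neighbor consensus
    return F.kl_div(probs.clamp_min(1e-8).log(), neighbor_mean,
                    reduction="batchmean")
```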
- Adaptive Pseudo-Label Refinement by Negative Ensemble Learning for Source-Free Unsupervised Domain Adaptation [35.728603077621564]
Existing Unsupervised Domain Adaptation (UDA) methods presume source and target domain data to be simultaneously available during training.
A pre-trained source model is always considered to be available, even though it performs poorly on the target domain due to the well-known problem of domain shift.
We propose a unified method to tackle adaptive noise filtering and pseudo-label refinement.
arXiv Detail & Related papers (2021-03-29T22:18:34Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Domain Adaption for Knowledge Tracing [65.86619804954283]
We propose a novel adaptable framework, namely Adaptable Knowledge Tracing (AKT), to address the DAKT problem.
For the first aspect, we incorporate educational characteristics (e.g., slip, guess, question texts) into deep knowledge tracing (DKT) to obtain a well-performing knowledge tracing model.
For the second aspect, we propose and adopt three domain adaptation processes. First, we pre-train an auto-encoder to select useful source instances for target model training.
arXiv Detail & Related papers (2020-01-14T15:04:48Z)