On Universal Black-Box Domain Adaptation
- URL: http://arxiv.org/abs/2104.04665v1
- Date: Sat, 10 Apr 2021 02:21:09 GMT
- Title: On Universal Black-Box Domain Adaptation
- Authors: Bin Deng, Yabin Zhang, Hui Tang, Changxing Ding, Kui Jia
- Abstract summary: We study what is arguably the least restrictive setting of domain adaptation from the standpoint of practical deployment.
Only the interface of the source model is available to the target domain, and the label-space relations between the two domains are allowed to be different and unknown.
We propose to unify the resulting subtasks into a self-training framework, regularized by consistency of predictions in local neighborhoods of target samples.
- Score: 53.7611757926922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study an arguably least restrictive setting of
domain adaptation in the sense of practical deployment, where only the
interface of the source model is available to the target domain, and where
the label-space relations between the two domains are allowed to be
different and unknown. We term such a setting Universal Black-Box Domain
Adaptation (UB$^2$DA). The great promise that UB$^2$DA makes, however,
brings significant learning challenges, since domain adaptation can only
rely on the predictions of unlabeled target data in a partially overlapped
label space, obtained by accessing the interface of the source model. To
tackle the challenges, we first note that the learning task can be
decomposed into the two subtasks of in-class discrimination and out-class
detection (we use in-class and out-class to describe the classes observed
and not observed, respectively, in the source black-box model), which can be
learned by model distillation and entropy separation, respectively. We
propose to unify them into a self-training framework, regularized by
consistency of predictions in local neighborhoods of target samples. Our
framework is simple, robust, and easy to optimize. Experiments on domain
adaptation benchmarks show its efficacy. Notably, by accessing the interface
of the source model only, our framework outperforms existing methods of
universal domain adaptation that make use of source data and/or source
models, under a newly proposed (and arguably more reasonable) H-score
metric, and performs on par with them under the metric of averaged class
accuracy.
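The abstract names three concrete ingredients: distillation of the black-box predictions, entropy separation for out-class detection, and neighborhood-consistency regularization. Below is a minimal, hypothetical PyTorch sketch of these three losses; the function names, thresholds, and memory-bank lookup are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the three ingredients named in the abstract; names,
# thresholds, and loss weights are illustrative assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, blackbox_probs):
    """In-class discrimination: distill the black-box interface's soft
    predictions on unlabeled target data into a local student model."""
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    blackbox_probs, reduction="batchmean")

def entropy_separation_loss(logits, threshold=0.5, margin=0.2):
    """Out-class detection: push each sample's normalized prediction entropy
    away from a threshold, so confident (in-class) samples get more confident
    and uncertain (likely out-class) samples drift toward uniform."""
    p = F.softmax(logits, dim=1)
    ent = -(p * p.clamp_min(1e-8).log()).sum(dim=1)
    ent = ent / torch.log(torch.tensor(float(p.size(1))))  # scale to [0, 1]
    gap = (ent - threshold).abs()
    mask = gap > margin  # ignore ambiguous samples near the threshold
    return -gap[mask].mean() if mask.any() else logits.new_zeros(())

def neighborhood_consistency_loss(probs, feats, bank_feats, bank_probs, k=4):
    """Regularizer: a sample's prediction should agree with the predictions of
    its k nearest target neighbors held in a memory bank."""
    sim = F.normalize(feats, dim=1) @ F.normalize(bank_feats, dim=1).t()
    idx = sim.topk(k, dim=1).indices
    neighbor_probs = bank_probs[idx].mean(dim=1)
    return F.kl_div(probs.clamp_min(1e-8).log(),
                    neighbor_probs, reduction="batchmean")

# Toy batch: 8 samples, 31 in-class labels, 256-d features, a 100-entry bank.
logits = torch.randn(8, 31, requires_grad=True)
loss = (distillation_loss(logits, F.softmax(torch.randn(8, 31), dim=1))
        + entropy_separation_loss(logits)
        + neighborhood_consistency_loss(F.softmax(logits, dim=1),
                                        torch.randn(8, 256),
                                        torch.randn(100, 256),
                                        F.softmax(torch.randn(100, 31), dim=1)))
loss.backward()
```

In a full self-training loop these terms would be combined with pseudo-labels refreshed over the unlabeled target set; the sketch only shows the loss computation for one batch.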
Related papers
- Memory-Efficient Pseudo-Labeling for Online Source-Free Universal Domain Adaptation using a Gaussian Mixture Model [3.1265626879839923]
Universal domain adaptation (UniDA) has gained attention for addressing the possibility of an additional category (label) shift between the source and target domain.
We propose a novel method that continuously captures the distribution of known classes in the feature space using a Gaussian mixture model (GMM).
Our approach not only achieves state-of-the-art results in all experiments on the DomainNet dataset but also significantly outperforms the existing methods on the challenging VisDA-C dataset.
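As a rough illustration of the mechanism described above, one Gaussian per known class can be maintained with running-average updates, so pseudo-labels fall out of the component likelihoods without storing past samples. All details below (diagonal covariances, momentum, rejection threshold) are assumptions, not the paper's implementation.

```python
import torch

class RunningGMM:
    """One diagonal Gaussian per known class, updated online so no target
    samples need to be stored (the memory-efficient aspect)."""
    def __init__(self, num_classes, dim, momentum=0.99):
        self.means = torch.zeros(num_classes, dim)
        self.vars = torch.ones(num_classes, dim)
        self.momentum = momentum

    def update(self, feats, labels):
        # Exponential moving average of per-class mean and variance.
        for c in labels.unique():
            x = feats[labels == c]
            m = self.momentum
            self.means[c] = m * self.means[c] + (1 - m) * x.mean(0)
            self.vars[c] = (m * self.vars[c]
                            + (1 - m) * x.var(0, unbiased=False)).clamp_min(1e-4)

    def log_prob(self, feats):
        # Per-class log-density (up to an additive constant), shape (batch, classes).
        diff = feats.unsqueeze(1) - self.means.unsqueeze(0)
        return -0.5 * ((diff ** 2 / self.vars) + self.vars.log()).sum(-1)

    def pseudo_label(self, feats, reject_logprob=-300.0):
        # Most likely known class, or -1 ("unknown") if no component fits well.
        lp = self.log_prob(feats)
        conf, label = lp.max(dim=1)
        label[conf < reject_logprob] = -1
        return label

# Toy usage: 5 known classes, 64-d features.
gmm = RunningGMM(num_classes=5, dim=64)
gmm.update(torch.randn(40, 64), torch.randint(0, 5, (40,)))
print(gmm.pseudo_label(torch.randn(8, 64)))
```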
arXiv Detail & Related papers (2024-07-19T11:13:31Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to combine the best of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
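A hedged sketch of what such a memory-bank-based MMD alignment can look like follows; the RBF kernel, bank size, and FIFO update rule are assumptions rather than DaC's actual design.

```python
# Illustrative sketch of a memory-bank MMD loss (kernel and bank details assumed):
# "source-like" features are queued in a bank, and the current batch of
# "target-specific" features is pulled toward their distribution.
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Squared Maximum Mean Discrepancy between two samples, RBF kernel."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

class FeatureBank:
    """Fixed-size FIFO bank holding source-like features."""
    def __init__(self, size, dim):
        self.buf = torch.randn(size, dim)
        self.ptr = 0

    def push(self, feats):
        n = feats.size(0)
        idx = (self.ptr + torch.arange(n)) % self.buf.size(0)
        self.buf[idx] = feats.detach()
        self.ptr = (self.ptr + n) % self.buf.size(0)

# Usage: align a target-specific batch with the banked source-like features.
bank = FeatureBank(size=512, dim=256)
bank.push(torch.randn(32, 256))                      # source-like features
target_specific = torch.randn(32, 256, requires_grad=True)
loss = rbf_mmd(target_specific, bank.buf)
loss.backward()
```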
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation aims at knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that deep pre-trained models of large scale endow rich knowledge to tackle diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical-class-space assumption to the weaker one that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Domain Adaptation via Prompt Learning [39.97105851723885]
Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain.
We introduce a novel prompt learning paradigm for UDA, named Domain Adaptation via Prompt Learning (DAPL).
arXiv Detail & Related papers (2022-02-14T13:25:46Z)
- OVANet: One-vs-All Network for Universal Domain Adaptation [78.86047802107025]
Existing methods manually set a threshold to reject unknown samples based on validation or a pre-defined ratio of unknown samples.
We propose a method to learn the threshold using source samples and to adapt it to the target domain.
Our idea is that a minimum inter-class distance in the source domain should be a good threshold to decide between known or unknown in the target.
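The stated intuition can be rendered literally as follows (OVANet itself learns the boundary with one-vs-all classifiers, as its title says; this centroid-distance version is only an illustrative approximation):

```python
# Sketch of the intuition only: the smallest distance between source class
# centroids serves as the threshold separating "known" from "unknown" targets.
import torch

def class_centroids(feats, labels):
    classes = labels.unique()
    return torch.stack([feats[labels == c].mean(0) for c in classes])

def known_unknown_split(target_feats, source_feats, source_labels):
    centroids = class_centroids(source_feats, source_labels)
    pairwise = torch.cdist(centroids, centroids)
    pairwise.fill_diagonal_(float("inf"))  # ignore self-distances
    threshold = pairwise.min()             # minimum inter-class distance
    dist, label = torch.cdist(target_feats, centroids).min(dim=1)
    label[dist > threshold] = -1           # too far from every class: unknown
    return label

# Toy usage: 3 source classes in a 64-d feature space.
src, lab = torch.randn(60, 64), torch.randint(0, 3, (60,))
print(known_unknown_split(torch.randn(10, 64), src, lab))
```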
arXiv Detail & Related papers (2021-04-07T18:36:31Z)
- Domain Adaptation Using Class Similarity for Robust Speech Recognition [24.951852740214413]
This paper proposes a novel adaptation method for a deep neural network (DNN) acoustic model using class similarity.
Experiments showed that our approach outperforms fine-tuning using one-hot labels on both accent and noise adaptation tasks.
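The summary does not spell out the method, but the general pattern of fine-tuning with similarity-based soft targets instead of one-hot labels might look like this sketch; the rule for building the targets is an assumption.

```python
# Assumed sketch: soft targets that encode inter-class similarity, built from
# the source model's average output per class, replace one-hot labels.
import torch
import torch.nn.functional as F

def class_similarity_table(source_probs, labels, num_classes):
    """Average the source model's soft outputs over each class's frames,
    yielding a (num_classes x num_classes) table of soft targets."""
    table = torch.zeros(num_classes, num_classes)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            table[c] = source_probs[mask].mean(0)
    return table

def soft_target_loss(logits, labels, table):
    # Fine-tune against the similarity-based soft targets.
    return F.kl_div(F.log_softmax(logits, dim=1), table[labels],
                    reduction="batchmean")

# Toy usage: 20 frames, 6 classes.
probs = F.softmax(torch.randn(20, 6), dim=1)
labels = torch.randint(0, 6, (20,))
table = class_similarity_table(probs, labels, num_classes=6)
loss = soft_target_loss(torch.randn(20, 6, requires_grad=True), labels, table)
loss.backward()
```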
arXiv Detail & Related papers (2020-11-05T12:26:43Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and studies how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
- Enlarging Discriminative Power by Adding an Extra Class in Unsupervised Domain Adaptation [5.377369521932011]
We propose an idea for enlarging discriminative power: adding a new, artificial class and training the model on the original data together with GAN-generated samples of the new class; see the sketch below.
Our idea is highly generic and is compatible with many existing methods such as DANN, VADA, and DIRT-T.
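The stated recipe reduces to widening the classifier by one output and labeling generated samples with the new class index. A minimal sketch follows; the generator is stubbed with random features and the training loop is omitted, both assumptions for illustration.

```python
# Sketch of the extra-class idea: widen the classifier from K to K+1 outputs
# and assign GAN-generated samples to the new class K during training.
import torch
import torch.nn.functional as F

K = 10                                     # original number of classes
classifier = torch.nn.Linear(256, K + 1)   # one extra output for the new class

real_feats = torch.randn(32, 256)          # features of real samples
real_labels = torch.randint(0, K, (32,))
fake_feats = torch.randn(32, 256)          # stand-in for GAN-generated samples
fake_labels = torch.full((32,), K, dtype=torch.long)  # all get the new class

feats = torch.cat([real_feats, fake_feats])
labels = torch.cat([real_labels, fake_labels])
loss = F.cross_entropy(classifier(feats), labels)
loss.backward()
```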
arXiv Detail & Related papers (2020-02-19T07:58:24Z)