Black-box Probe for Unsupervised Domain Adaptation without Model
Transferring
- URL: http://arxiv.org/abs/2107.10174v1
- Date: Wed, 21 Jul 2021 16:00:51 GMT
- Title: Black-box Probe for Unsupervised Domain Adaptation without Model
Transferring
- Authors: Kunhong Wu, Yucheng Shi, Yahong Han, Yunfeng Shao, Bingshuai Li
- Abstract summary: Unsupervised domain adaptation (UDA) methods can achieve promising performance without transferring data from the source domain to the target domain.
In many data-critical scenarios, methods based on model transferring may suffer from membership inference attacks and expose private data.
We propose Black-box Probe Domain Adaptation (BPDA), which adopts a query mechanism to probe and refine information from the source model using a third-party dataset.
- Score: 30.55386853475974
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, researchers have been paying increasing attention to the
threats brought by deep learning models to data security and privacy,
especially in the field of domain adaptation. Existing unsupervised domain
adaptation (UDA) methods can achieve promising performance without transferring
data from the source domain to the target domain. However, UDA with representation
alignment or self-supervised pseudo-labeling relies on the transferred source
models. In many data-critical scenarios, methods based on model transferring
may suffer from membership inference attacks and expose private data. In this
paper, we aim to overcome a challenging new setting where the source models are
only queryable but cannot be transferred to the target domain. We propose
Black-box Probe Domain Adaptation (BPDA), which adopts a query mechanism to probe
and refine information from the source model using a third-party dataset. In order to
gain more informative query results, we further propose Distributionally
Adversarial Training (DAT) to align the distribution of third-party data with
that of the target data. BPDA uses a public third-party dataset and adversarial
examples based on DAT as the information carrier between the source and target
domains, dispensing with transferring the source data or model. Experimental
results on benchmarks of Digit-Five, Office-Caltech, Office-31, Office-Home,
and DomainNet demonstrate the feasibility of BPDA without model transferring.
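Since only the abstract is available here, the sketch below is a rough, assumption-laden illustration of the two ideas it describes: (1) probing a black-box source model by querying it on a third-party dataset and training a target model on the returned soft predictions, and (2) perturbing the third-party images toward target-domain feature statistics as a crude stand-in for Distributionally Adversarial Training (DAT). The names `query_source`, `third_party_loader`, `target_loader`, and `target_feat_mean`, along with the losses, the confidence threshold, and the PGD-style update, are illustrative assumptions and not details taken from the paper.

```python
# Minimal sketch of a black-box probe loop in the spirit of BPDA (assumptions noted above).
import torch
import torch.nn.functional as F


def probe_and_refine(query_source, target_model, third_party_loader,
                     target_loader, optimizer, epochs=10, device="cpu"):
    """Train a target model using only black-box queries to the source model."""
    target_model.to(device).train()
    for _ in range(epochs):
        # 1) Probe: query the remote source model on third-party images and
        #    use its soft outputs as pseudo-labels (hypothetical query API).
        for x_tp, _ in third_party_loader:
            x_tp = x_tp.to(device)
            with torch.no_grad():
                soft_labels = query_source(x_tp)          # black-box query, returns probabilities
            loss = F.kl_div(F.log_softmax(target_model(x_tp), dim=1),
                            soft_labels, reduction="batchmean")
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # 2) Refine: self-train on unlabeled target data with confident
        #    pseudo-labels produced by the current target model.
        for x_t, _ in target_loader:
            x_t = x_t.to(device)
            with torch.no_grad():
                probs = F.softmax(target_model(x_t), dim=1)
            conf, pseudo = probs.max(dim=1)
            mask = conf > 0.9                             # confidence threshold (assumed)
            if mask.any():
                loss = F.cross_entropy(target_model(x_t[mask]), pseudo[mask])
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return target_model


def align_third_party(x_tp, target_model, target_feat_mean,
                      steps=5, eps=8 / 255, alpha=2 / 255):
    """PGD-style perturbation nudging a third-party batch toward the mean
    target-domain statistics; a rough stand-in for DAT, not its exact form."""
    x_ref = x_tp.clone().detach()
    x_adv = x_ref.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        feats = target_model(x_adv)                       # model outputs used as features here
        loss = ((feats.mean(dim=0) - target_feat_mean) ** 2).sum()
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Step against the gradient so batch statistics move toward the target's,
        # then project back into an L-infinity ball around the original images.
        x_adv = (x_adv - alpha * grad.sign()).detach()
        x_adv = (x_ref + (x_adv - x_ref).clamp(-eps, eps)).detach()
    return x_adv
```

Under these assumptions, the aligned third-party batches would be the ones sent to the source model in the probing step, so the query results carry more target-relevant information; this is the role the abstract assigns to the DAT-based adversarial examples.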
Related papers
- Transcending Domains through Text-to-Image Diffusion: A Source-Free
Approach to Domain Adaptation [6.649910168731417]
Domain Adaptation (DA) is a method for enhancing a model's performance on a target domain with inadequate annotated data.
We propose a novel framework for SFDA that generates source data using a text-to-image diffusion model trained on the target domain samples.
arXiv Detail & Related papers (2023-10-02T23:38:17Z)
- A Prototype-Oriented Clustering for Domain Shift with Source Privacy [66.67700676888629]
We introduce Prototype-oriented Clustering with Distillation (PCD) to improve the performance and applicability of existing methods.
PCD first constructs a source clustering model by aligning the distributions of prototypes and data.
It then distills the knowledge to the target model through cluster labels provided by the source model while simultaneously clustering the target data.
arXiv Detail & Related papers (2023-02-08T00:15:35Z)
- Domain Alignment Meets Fully Test-Time Adaptation [24.546705919244936]
A foundational requirement of a deployed ML model is to generalize to data drawn from a testing distribution that differs from the training distribution.
In this paper, we focus on a challenging variant of this problem, where access to the original source data is restricted.
We propose a new approach, CATTAn, that bridges UDA and FTTA by relaxing the need to access the entire source data.
arXiv Detail & Related papers (2022-07-09T03:17:19Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model to the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z)
- Source-Free Domain Adaptation for Semantic Segmentation [11.722728148523366]
Unsupervised Domain Adaptation (UDA) can tackle the challenge that convolutional neural network-based approaches for semantic segmentation heavily rely on pixel-level annotated data.
We propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target domain dataset are available for adaptation.
arXiv Detail & Related papers (2021-03-30T14:14:29Z)
- Source Data-absent Unsupervised Domain Adaptation through Hypothesis Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting where only a classification model trained over the source data is available, instead of access to the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency of target data under transformation.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and investigates how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)