RAIN: RegulArization on Input and Network for Black-Box Domain
Adaptation
- URL: http://arxiv.org/abs/2208.10531v4
- Date: Sat, 19 Aug 2023 00:04:11 GMT
- Title: RAIN: RegulArization on Input and Network for Black-Box Domain
Adaptation
- Authors: Qucheng Peng, Zhengming Ding, Lingjuan Lyu, Lichao Sun, Chen Chen
- Abstract summary: Source-free domain adaptation transfers a source-trained model to the target domain without exposing the source data.
This paradigm is still at risk of data leakage due to adversarial attacks on the source model.
We propose a novel approach named RAIN (RegulArization on Input and Network) for black-box domain adaptation, combining input-level and network-level regularization.
- Score: 80.03883315743715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Source-free domain adaptation transfers the source-trained model to the
target domain without exposing the source data, aiming to dispel concerns about
data privacy and security. However, this paradigm is still at risk of data
leakage due to adversarial attacks on the source model. The black-box setting
therefore allows access only to the outputs of the source model, but it suffers
even more severely from overfitting on the source domain because the source
model's weights are unseen. In this paper, we propose a novel approach named
RAIN (RegulArization on Input and Network) for black-box domain adaptation,
combining input-level and network-level regularization. At the input level, we
design a new data augmentation technique, Phase MixUp, which highlights
task-relevant objects in the interpolations, thus enhancing input-level
regularization and class consistency for target models. At the network level,
we develop a Subnetwork Distillation mechanism that transfers knowledge from
the target subnetwork to the full target network via knowledge distillation,
which alleviates overfitting on the source domain by learning diverse target
representations. Extensive experiments show that our method achieves
state-of-the-art performance on several cross-domain benchmarks under both
single- and multi-source black-box domain adaptation.
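The abstract describes Phase MixUp only at a high level. Below is a minimal sketch of one plausible reading, in which each image is interpolated with its phase-only Fourier reconstruction so the mixup emphasizes the structural, task-relevant content carried by the phase spectrum; the function names and the Beta-sampled mixing coefficient are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def phase_only_reconstruction(x: torch.Tensor) -> torch.Tensor:
    """Reconstruct an image from its phase spectrum with unit amplitude.

    The phase of the 2-D Fourier transform is commonly taken to carry the
    task-relevant structure of an image, while the amplitude mostly encodes
    style and low-level statistics.
    """
    freq = torch.fft.fft2(x, dim=(-2, -1))
    phase = torch.angle(freq)
    # Unit amplitude keeps only the structural (phase) information.
    return torch.fft.ifft2(torch.exp(1j * phase), dim=(-2, -1)).real

def phase_mixup(x: torch.Tensor, alpha: float = 1.0) -> torch.Tensor:
    """Hedged sketch of Phase MixUp: interpolate each image with its
    phase-only reconstruction so the interpolations highlight task-relevant
    objects. `alpha` parameterizes the Beta distribution used to sample the
    mixing coefficient, as in standard MixUp (an assumption here)."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    return lam * x + (1.0 - lam) * phase_only_reconstruction(x)
```

For the network-level regularization, the abstract states only that knowledge is transferred from a target subnetwork to the full target network via knowledge distillation. The sketch below assumes the subnetwork is obtained stochastically (e.g. via dropout or stochastic depth) and acts as the teacher; the temperature-scaled KL loss is a standard distillation form, again an assumption rather than the paper's exact objective.

```python
import torch.nn.functional as F

def subnetwork_distillation_loss(full_logits, sub_logits, temperature: float = 2.0):
    """Distil predictions of a sampled target subnetwork (teacher) into the
    full target network (student) with temperature-scaled KL divergence."""
    teacher = F.softmax(sub_logits.detach() / temperature, dim=-1)
    student = F.log_softmax(full_logits / temperature, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2
```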
Related papers
- Rethinking the Role of Pre-Trained Networks in Source-Free Domain
Adaptation [26.481422574715126]
Source-free domain adaptation (SFDA) aims to adapt a source model trained on a fully-labeled source domain to an unlabeled target domain.
Large-data pre-trained networks are used to initialize source models during source training, and subsequently discarded.
We propose to integrate the pre-trained network into the target adaptation process as it has diversified features important for generalization.
arXiv Detail & Related papers (2022-12-15T02:25:22Z)
- Unsupervised Domain Adaptation for Segmentation with Black-box Source Model [37.02365343894657]
We propose a practical solution to UDA for segmentation with a black-box segmentation model trained in the source domain only.
Specifically, we resort to a knowledge distillation scheme with exponential mixup decay (EMD) to gradually learn target-specific representations.
We evaluate our framework on the BraTS 2018 database, achieving performance on par with white-box source model adaptation approaches.
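The exponential mixup decay (EMD) scheme is described only at this level of detail here; a minimal, hedged sketch of one way such a scheme could look is given below, where the black-box source model's soft predictions are mixed with the target model's own predictions and the weight on the source side decays exponentially over training. The decay schedule and parameter names are illustrative assumptions, not the cited paper's exact formulation.

```python
import math
import torch.nn.functional as F

def emd_soft_target(source_probs, target_probs, step, total_steps,
                    lam0: float = 1.0, k: float = 5.0):
    """Mix black-box source predictions with the target model's own
    predictions; the source weight decays exponentially (assumed schedule)."""
    lam = lam0 * math.exp(-k * step / total_steps)
    return lam * source_probs + (1.0 - lam) * target_probs

def emd_distillation_loss(student_logits, mixed_probs):
    """Soft-label KL distillation of the student against the mixed target."""
    return F.kl_div(F.log_softmax(student_logits, dim=-1), mixed_probs,
                    reduction="batchmean")
```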
arXiv Detail & Related papers (2022-08-16T14:29:15Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model for the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z)
- Transformer-Based Source-Free Domain Adaptation [134.67078085569017]
We study the task of source-free domain adaptation (SFDA), where the source data are not available during target adaptation.
We propose a generic and effective framework based on Transformer, named TransDA, for learning a generalized model for SFDA.
arXiv Detail & Related papers (2021-05-28T23:06:26Z)
- Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune).
arXiv Detail & Related papers (2021-04-04T05:29:05Z)
- Source-Free Domain Adaptation for Semantic Segmentation [11.722728148523366]
Unsupervised Domain Adaptation (UDA) can tackle the challenge that convolutional neural network-based approaches for semantic segmentation heavily rely on pixel-level annotated data.
We propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target domain dataset are available for adaptation.
arXiv Detail & Related papers (2021-03-30T14:14:29Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised Domain Adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and studies how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)