Jacobian Norm for Unsupervised Source-Free Domain Adaptation
- URL: http://arxiv.org/abs/2204.03467v1
- Date: Thu, 7 Apr 2022 14:30:48 GMT
- Title: Jacobian Norm for Unsupervised Source-Free Domain Adaptation
- Authors: Weikai Li, Meng Cao and Songcan Chen
- Abstract summary: Unsupervised Source-Free domain adaptation (USFDA) aims to transfer knowledge from a well-trained source model to a related but unlabeled target domain.
Existing USFDAs turn to transfer knowledge by aligning the target feature to the latent distribution hidden in the source model.
We propose a new perspective to boost their performance.
- Score: 37.958884762225814
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised Source (data) Free domain adaptation (USFDA) aims to transfer
knowledge from a well-trained source model to a related but unlabeled target
domain. In such a scenario, all conventional adaptation methods that require
source data fail. To combat this challenge, existing USFDAs turn to transfer
knowledge by aligning the target feature to the latent distribution hidden in
the source model. However, such information is naturally limited. Thus, the
alignment in such a scenario is not only difficult but also insufficient, which
degrades the target generalization performance. To relieve this dilemma in
current USFDAs, we are motivated to explore a new perspective to boost their
performance. For this purpose, and to gain the necessary insight, we look back
upon the origin of domain adaptation and first theoretically derive a brand-new
target generalization error bound based on model smoothness. Then,
following the theoretical insight, a general and model-smoothness-guided
Jacobian norm (JN) regularizer is designed and imposed on the target domain to
mitigate this dilemma. Extensive experiments are conducted to validate its
effectiveness. In its implementation, with just a few lines of code added to
the existing USFDAs, we achieve superior results on various benchmark datasets.
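The abstract's "few lines of code" refer to a Jacobian norm (JN) regularizer that penalizes the norm of the model's input-output Jacobian on target samples, encouraging a smooth target predictor. The paper's implementation is not reproduced here; the sketch below only illustrates the underlying quantity, the squared Frobenius norm of the Jacobian, using a finite-difference estimate on a toy function (the function name and estimator are illustrative, and a deep-learning implementation would use automatic differentiation instead):

```python
import numpy as np

def jacobian_frobenius_sq(f, x, eps=1e-5):
    """Estimate the squared Frobenius norm of the Jacobian of f at x
    via central finite differences, one input coordinate at a time."""
    d = x.shape[0]
    total = 0.0
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        col = (f(x + e) - f(x - e)) / (2 * eps)  # i-th Jacobian column
        total += np.sum(col ** 2)
    return total

# Sanity check: for a linear map f(x) = W @ x the Jacobian is W itself,
# so the penalty should equal ||W||_F^2.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)
penalty = jacobian_frobenius_sq(lambda v: W @ v, x)
print(abs(penalty - np.sum(W ** 2)) < 1e-6)
```

In a training loop, such a penalty would be evaluated on unlabeled target inputs and added, with a weight, to the existing adaptation loss, which is what makes it easy to bolt onto existing USFDA methods.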
Related papers
- Unified Source-Free Domain Adaptation [44.95240684589647]
In pursuit of transferring a source model to a target domain without access to the source training data, Source-Free Domain Adaptation (SFDA) has been extensively explored.
We propose a novel approach called Latent Causal Factors Discovery (LCFD)
In contrast to previous alternatives that emphasize learning the statistical description of reality, we formulate LCFD from a causality perspective.
arXiv Detail & Related papers (2024-03-12T12:40:08Z) - Continual Source-Free Unsupervised Domain Adaptation [37.060694803551534]
Existing Source-free Unsupervised Domain Adaptation approaches exhibit catastrophic forgetting.
We propose a Continual SUDA (C-SUDA) framework to cope with the challenge of SUDA in a continual learning setting.
arXiv Detail & Related papers (2023-04-14T20:11:05Z) - Uncertainty-guided Source-free Domain Adaptation [77.3844160723014]
Source-free domain adaptation (SFDA) aims to adapt a classifier to an unlabelled target data set by only using a pre-trained source model.
We propose quantifying the uncertainty in the source model predictions and utilizing it to guide the target adaptation.
arXiv Detail & Related papers (2022-08-16T08:03:30Z) - Source-Free Domain Adaptation for Semantic Segmentation [11.722728148523366]
Unsupervised Domain Adaptation (UDA) can tackle the challenge that convolutional neural network-based approaches for semantic segmentation heavily rely on the pixel-level annotated data.
We propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target domain dataset are available for adaptation.
arXiv Detail & Related papers (2021-03-30T14:14:29Z) - Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA)
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z) - Universal Source-Free Domain Adaptation [57.37520645827318]
We propose a novel two-stage learning process for domain adaptation.
In the Procurement stage, we aim to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.
In the Deployment stage, the goal is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps.
arXiv Detail & Related papers (2020-04-09T07:26:20Z) - Towards Inheritable Models for Open-Set Domain Adaptation [56.930641754944915]
We introduce a practical Domain Adaptation paradigm where a source-trained model is used to facilitate adaptation in the absence of the source dataset in future.
We present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data.
arXiv Detail & Related papers (2020-04-09T07:16:30Z) - Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised Domain Adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require to access the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.