IT-RUDA: Information Theory Assisted Robust Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2210.12947v1
- Date: Mon, 24 Oct 2022 04:33:52 GMT
- Title: IT-RUDA: Information Theory Assisted Robust Unsupervised Domain
Adaptation
- Authors: Shima Rashidi, Ruwan Tennakoon, Aref Miri Rekavandi, Papangkorn
Jessadatavornwong, Amanda Freis, Garret Huff, Mark Easton, Adrian Mouritz,
Reza Hoseinnezhad, Alireza Bab-Hadiashar
- Abstract summary: Distribution shift between train (source) and test (target) datasets is a common problem encountered in machine learning applications.
The UDA technique carries out knowledge transfer from a label-rich source domain to an unlabeled target domain.
Outliers that exist in either source or target datasets can introduce additional challenges when using UDA in practice.
- Score: 7.225445443960775
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Distribution shift between train (source) and test (target) datasets is a
common problem encountered in machine learning applications. One approach to
resolve this issue is to use the Unsupervised Domain Adaptation (UDA) technique
that carries out knowledge transfer from a label-rich source domain to an
unlabeled target domain. Outliers that exist in either source or target
datasets can introduce additional challenges when using UDA in practice. In
this paper, $\alpha$-divergence is used as a measure to minimize the
discrepancy between the source and target distributions while inheriting
robustness, adjustable with a single parameter $\alpha$, as the prominent
feature of this measure. Here, it is shown that the other well-known
divergence-based UDA techniques can be derived as special cases of the proposed
method. Furthermore, a theoretical upper bound is derived for the loss in the
target domain in terms of the source loss and the initial $\alpha$-divergence
between the two domains. The robustness of the proposed method is validated
through testing on several benchmark datasets in open-set and partial UDA
setups, where extra classes present in the target and source datasets are
treated as outliers.
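The α-divergence central to the abstract can be illustrated numerically. The sketch below is an assumption for illustration only: it computes the Amari α-divergence between two discrete distributions, whereas the paper applies the divergence to feature distributions of deep networks. As α approaches 1 the measure recovers KL(p||q), consistent with the claim that known divergence-based UDA objectives arise as special cases.

```python
import numpy as np

def alpha_divergence(p, q, alpha=0.5, eps=1e-12):
    """Amari alpha-divergence between two discrete distributions.

    Illustrative sketch only (not the paper's implementation).
    For alpha in (0, 1) the value is finite even when supports differ,
    which is one source of the robustness to outliers noted above.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p = p / p.sum()
    q = q / q.sum()
    # D_alpha(p||q) = (1 - sum p^alpha * q^(1-alpha)) / (alpha * (1 - alpha))
    return (1.0 - np.sum(p**alpha * q**(1.0 - alpha))) / (alpha * (1.0 - alpha))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.4, 0.4, 0.2])
d = alpha_divergence(p, q, alpha=0.5)  # positive, zero iff p == q
```

Smaller α down-weights regions where q has little mass, which is how a single parameter trades off sensitivity against robustness.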
Related papers
- Source-Free Unsupervised Domain Adaptation with Norm and Shape
Constraints for Medical Image Segmentation [0.12183405753834559]
We propose a source-free unsupervised domain adaptation (SFUDA) method for medical image segmentation.
In addition to the entropy minimization method, we introduce a loss function that prevents the feature norms in the target domain from becoming small.
Our method outperforms the state-of-the-art in all datasets.
arXiv Detail & Related papers (2022-09-03T00:16:39Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Instance Relation Graph Guided Source-Free Domain Adaptive Object
Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model for the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z) - UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
The Universal Model ADaptation (UMAD) framework handles both open-set and open-partial-set UDA scenarios without access to source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
arXiv Detail & Related papers (2021-12-16T01:22:59Z) - Understanding the Limits of Unsupervised Domain Adaptation via Data
Poisoning [66.80663779176979]
Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels.
We show that minimizing the source domain error and the marginal distribution mismatch is insufficient to guarantee a reduction in the target domain error.
Motivated from this, we propose novel data poisoning attacks to fool UDA methods into learning representations that produce large target domain errors.
arXiv Detail & Related papers (2021-07-08T15:51:14Z) - Multi-Source domain adaptation via supervised contrastive learning and
confident consistency regularization [0.0]
Multi-Source Unsupervised Domain Adaptation (multi-source UDA) aims to learn a model from several labeled source domains.
We propose Contrastive Multi-Source Domain Adaptation (CMSDA) for multi-source UDA that addresses this limitation.
arXiv Detail & Related papers (2021-06-30T14:39:15Z) - Adapting Off-the-Shelf Source Segmenter for Target Medical Image
Segmentation [12.703234995718372]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to an unlabeled and unseen target domain.
Access to the source domain data at the adaptation stage is often limited, due to data storage or privacy issues.
We propose to adapt an "off-the-shelf" segmentation model pre-trained in the source domain to the target domain.
arXiv Detail & Related papers (2021-06-23T16:16:55Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - Unsupervised Model Adaptation for Continual Semantic Segmentation [15.820660013260584]
We develop an algorithm for adapting a semantic segmentation model that is trained using a labeled source domain to generalize well in an unlabeled target domain.
We provide theoretical analysis and explain conditions under which our algorithm is effective.
Experiments on benchmark adaptation tasks demonstrate that our method achieves competitive performance even compared with joint UDA approaches.
arXiv Detail & Related papers (2020-09-26T04:55:50Z) - Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised Domain Adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.