Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
- URL: http://arxiv.org/abs/2107.03919v1
- Date: Thu, 8 Jul 2021 15:51:14 GMT
- Title: Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
- Authors: Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen and Jihun Hamm
- Abstract summary: Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels.
We show the insufficiency of minimizing source domain error and marginal distribution mismatch for a guaranteed reduction in the target domain error.
Motivated by this, we propose novel data poisoning attacks to fool UDA methods into learning representations that produce large target domain errors.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation (UDA) enables cross-domain learning without
target domain labels by transferring knowledge from a labeled source domain
whose distribution differs from the target. However, UDA is not always
successful and several accounts of "negative transfer" have been reported in
the literature. In this work, we prove a simple lower bound on the target
domain error that complements the existing upper bound. Our bound shows the
insufficiency of minimizing source domain error and marginal distribution
mismatch for a guaranteed reduction in the target domain error, due to the
possible increase of induced labeling function mismatch. This insufficiency is
further illustrated through simple distributions for which the same UDA
approach succeeds, fails, and may succeed or fail with an equal chance.
Motivated by this, we propose novel data poisoning attacks to fool UDA
methods into learning representations that produce large target domain errors.
We evaluate the effect of these attacks on popular UDA methods using benchmark
datasets where they have been previously shown to be successful. Our results
show that poisoning can significantly decrease the target domain accuracy,
dropping it to almost 0% in some cases, with the addition of only 10%
poisoned data in the source domain. The failure of UDA methods demonstrates the
limitations of UDA at guaranteeing cross-domain generalization consistent with
the lower bound. Thus, evaluation of UDA methods in adversarial settings such
as data poisoning can provide a better sense of their robustness in scenarios
unfavorable for UDA.
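The headline result above — near-0% target accuracy after poisoning only 10% of the source data — can be illustrated with a generic label-flipping poison. This is a minimal sketch of source-side corruption, not the paper's optimized attack; the function name and the random-flip strategy are illustrative assumptions.

```python
import numpy as np

def poison_source(X, y, n_classes, frac=0.10, seed=0):
    """Flip the labels of a fraction of source samples.

    Illustrative only: the paper's attacks optimize the poison points,
    whereas this sketch replaces each selected sample's label with a
    different, randomly chosen class.
    """
    rng = np.random.default_rng(seed)
    X, y = X.copy(), y.copy()
    n_poison = int(frac * len(y))
    # choose which source samples to corrupt
    idx = rng.choice(len(y), size=n_poison, replace=False)
    # shift each chosen label by a random non-zero offset mod n_classes,
    # guaranteeing the new label differs from the original
    offsets = rng.integers(1, n_classes, size=n_poison)
    y[idx] = (y[idx] + offsets) % n_classes
    return X, y
```

A UDA method trained on `(X, y)` returned by this function sees a source labeling function that disagrees with the clean one on 10% of samples, which is the kind of induced labeling-function mismatch the lower bound makes precise.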
Related papers
- Imbalanced Open Set Domain Adaptation via Moving-threshold Estimation and Gradual Alignment [58.56087979262192]
Open Set Domain Adaptation (OSDA) aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain.
The performance of OSDA methods degrades drastically under intra-domain class imbalance and inter-domain label shift.
We propose Open-set Moving-threshold Estimation and Gradual Alignment (OMEGA) to alleviate the negative effects caused by label shift.
arXiv Detail & Related papers (2023-03-08T05:55:02Z)
- IT-RUDA: Information Theory Assisted Robust Unsupervised Domain Adaptation [7.225445443960775]
Distribution shift between train (source) and test (target) datasets is a common problem encountered in machine learning applications.
UDA techniques carry out knowledge transfer from a label-rich source domain to an unlabeled target domain.
Outliers that exist in either source or target datasets can introduce additional challenges when using UDA in practice.
arXiv Detail & Related papers (2022-10-24T04:33:52Z)
- Source-Free Unsupervised Domain Adaptation with Norm and Shape Constraints for Medical Image Segmentation [0.12183405753834559]
We propose a source-free unsupervised domain adaptation (SFUDA) method for medical image segmentation.
In addition to entropy minimization, we introduce a loss function that prevents feature norms in the target domain from becoming small.
Our method outperforms the state-of-the-art in all datasets.
arXiv Detail & Related papers (2022-09-03T00:16:39Z)
- Learning Feature Decomposition for Domain Adaptive Monocular Depth Estimation [51.15061013818216]
Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
arXiv Detail & Related papers (2022-07-30T08:05:35Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance between the amount of annotated data in the source and target domains, only the target distribution is aligned to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- E-ADDA: Unsupervised Adversarial Domain Adaptation Enhanced by a New Mahalanobis Distance Loss for Smart Computing [25.510639595356597]
In smart computing, the labels of training samples for a specific task are not always abundant.
We propose a novel UDA algorithm, E-ADDA, which uses both a novel variation of the Mahalanobis distance loss and an out-of-distribution detection subroutine.
In the acoustic modality, E-ADDA outperforms several state-of-the-art UDA algorithms by up to 29.8%, measured in F1 score.
In the computer vision modality, the evaluation results suggest that we achieve new state-of-the-art performance on popular UDA benchmarks.
arXiv Detail & Related papers (2022-01-24T23:20:55Z)
- Robustified Domain Adaptation [13.14535125302501]
Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain.
The inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain.
We propose a novel Class-consistent Unsupervised Domain Adaptation (CURDA) framework for training robust UDA models.
arXiv Detail & Related papers (2020-11-18T22:21:54Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes coinciding with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
- Unsupervised Domain Adaptation with Progressive Adaptation of Subspaces [26.080102941802107]
Unsupervised Domain Adaptation (UDA) aims to classify unlabeled target domain by transferring knowledge from labeled source domain with domain shift.
We propose a novel UDA method named Progressive Adaptation of Subspaces approach (PAS) in which we utilize such an intuition to gradually obtain reliable pseudo labels.
Our thorough evaluation demonstrates that PAS is not only effective for common UDA, but also outperforms the state of the art in the more challenging Partial Domain Adaptation (PDA) setting.
arXiv Detail & Related papers (2020-09-01T15:40:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.