Multi-step domain adaptation by adversarial attack to $\mathcal{H} \Delta \mathcal{H}$-divergence
- URL: http://arxiv.org/abs/2207.08948v1
- Date: Mon, 18 Jul 2022 21:24:05 GMT
- Title: Multi-step domain adaptation by adversarial attack to $\mathcal{H} \Delta \mathcal{H}$-divergence
- Authors: Arip Asadulaev, Alexander Panfilov, Andrey Filchenkov
- Abstract summary: In unsupervised domain adaptation settings, we demonstrate that replacing the source domain with adversarial examples can improve source classifier accuracy on the target domain.
We conducted a range of experiments and achieved improvement in accuracy on Digits and Office-Home datasets.
- Score: 73.89838982331453
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples are transferable between different models. In our paper,
we propose to use this property for multi-step domain adaptation. In
unsupervised domain adaptation settings, we demonstrate that replacing the
source domain with adversarial examples to $\mathcal{H} \Delta
\mathcal{H}$-divergence can improve source classifier accuracy on the target
domain. Our method can be connected to most domain adaptation techniques. We
conducted a range of experiments and achieved improvement in accuracy on Digits
and Office-Home datasets.
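The abstract does not spell out the attack itself. As a rough illustration of the core idea — perturbing source samples so they move toward the target domain — here is a hypothetical FGSM-style sketch in NumPy. A linear domain discriminator stands in as a crude proxy for the $\mathcal{H} \Delta \mathcal{H}$-divergence objective; all names and the closed-form discriminator are assumptions for illustration, not the paper's method.

```python
import numpy as np

def fgsm_toward_target(x_src, w, b, eps=0.1):
    """One FGSM-style step pushing source features toward the 'target' side
    of a linear domain discriminator p(target|x) = sigmoid(w.x + b).
    The gradient of log p(target|x) w.r.t. x is (1 - p) * w; we follow its sign."""
    p = 1.0 / (1.0 + np.exp(-(x_src @ w + b)))   # p(target | x)
    grad = (1.0 - p)[:, None] * w[None, :]       # d log p / dx
    return x_src + eps * np.sign(grad)

rng = np.random.default_rng(0)
x_src = rng.normal(0.0, 1.0, size=(100, 2))      # source cluster around 0
x_tgt = rng.normal(2.0, 1.0, size=(100, 2))      # target cluster around 2

# Crude closed-form "discriminator": the direction between domain means.
w = x_tgt.mean(axis=0) - x_src.mean(axis=0)
b = -w @ (x_src.mean(axis=0) + x_tgt.mean(axis=0)) / 2.0

x_adv = x_src
for _ in range(10):                              # multi-step attack
    x_adv = fgsm_toward_target(x_adv, w, b, eps=0.05)

# The attacked source cloud has moved toward the target mean.
gap_before = np.linalg.norm(x_src.mean(axis=0) - x_tgt.mean(axis=0))
gap_after = np.linalg.norm(x_adv.mean(axis=0) - x_tgt.mean(axis=0))
print(gap_before > gap_after)  # True: adversarial source is closer to target
```

After the attack, a classifier trained on the perturbed source set would see inputs whose distribution is closer to the target domain — the intuition the abstract describes.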
Related papers
- Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer [69.82229895838577]
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate.
This setting neglects the more practical scenario where training data are collected from multiple sources.
This motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains.
arXiv Detail & Related papers (2023-11-21T13:12:21Z)
- Improving Domain Generalization with Domain Relations [77.63345406973097]
This paper focuses on domain shifts, which occur when the model is applied to new domains that are different from the ones it was trained on.
We propose a new approach called D$^3$G to learn domain-specific models.
Our results show that D$3$G consistently outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T08:11:16Z)
- Domain Adaptation for Time-Series Classification to Mitigate Covariate Shift [3.071136270246468]
This paper proposes a novel supervised domain adaptation based on two steps.
First, we search for an optimal class-dependent transformation from the source to the target domain from a few samples.
Second, we use embedding similarity techniques to select the corresponding transformation at inference.
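The two steps above can be sketched with a toy NumPy example. The entry does not specify the transformation family, so this sketch assumes the simplest case — a per-class translation estimated from a few paired samples — and selects among the class transformations at inference by embedding similarity to the target class centroids. All data and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Few-shot source/target embeddings per class (toy data).
classes = [0, 1]
src = {c: rng.normal(c * 3.0, 0.3, size=(5, 2)) for c in classes}
# Simulated class-dependent covariate shift: a different translation per class.
tgt = {c: src[c] + np.array([1.0, -0.5]) * (c + 1) for c in classes}

# Step 1: estimate a class-dependent transformation (here a translation)
# from the few available samples.
shift = {c: tgt[c].mean(axis=0) - src[c].mean(axis=0) for c in classes}

def adapt(x_sample):
    """Step 2: apply each candidate transformation and keep the one whose
    result is most similar to that class's target centroid."""
    best, best_d = None, np.inf
    for c in classes:
        moved = x_sample + shift[c]
        d = np.linalg.norm(moved - tgt[c].mean(axis=0))
        if d < best_d:
            best, best_d = moved, d
    return best

x = src[1][0]          # a source sample from class 1
x_moved = adapt(x)     # lands near the class-1 target centroid
```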
arXiv Detail & Related papers (2022-04-07T10:27:14Z)
- Multiple-Source Domain Adaptation via Coordinated Domain Encoders and Paired Classifiers [1.52292571922932]
We present a novel model for text classification under domain shift.
It exploits the updated representations to dynamically integrate domain encoders.
It also employs a probabilistic model to infer the error rate in the target domain.
arXiv Detail & Related papers (2022-01-28T00:50:01Z)
- Domain-shift adaptation via linear transformations [11.541238742226199]
A predictor, $f_A$, learned with data from a source domain (A) might not be accurate on a target domain (B) when their distributions are different.
We propose an approach to project the source and target domains into a lower-dimensional, common space.
We show the effectiveness of our approach on simulated data and on binary digit classification tasks, obtaining accuracy improvements of up to 48% when correcting for the domain shift in the data.
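The entry does not give the exact linear mapping; one classic instance of projecting both domains into a lower-dimensional common space is subspace alignment, sketched below in NumPy. The PCA bases and the alignment matrix `M` are that technique's ingredients, not necessarily this paper's.

```python
import numpy as np

def pca_basis(X, k):
    """Top-k principal directions (columns) of centered X."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return vt[:k].T                              # shape (d, k)

def subspace_align(Xs, Xt, k=2):
    """Learn PCA bases Ps, Pt for each domain, then map the source
    subspace onto the target subspace via the alignment matrix Ps.T @ Pt."""
    Ps, Pt = pca_basis(Xs, k), pca_basis(Xt, k)
    M = Ps.T @ Pt                                # (k, k) alignment matrix
    Zs = (Xs - Xs.mean(axis=0)) @ Ps @ M         # source in target coordinates
    Zt = (Xt - Xt.mean(axis=0)) @ Pt             # target projection
    return Zs, Zt

rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 5))
Q = np.linalg.qr(rng.normal(size=(5, 5)))[0]     # random rotation = domain shift
Xt = Xs @ Q + 1.0

Zs, Zt = subspace_align(Xs, Xt, k=2)             # both now in a common 2-D space
```

A classifier trained on `Zs` can then be evaluated directly on `Zt`, since both live in the same low-dimensional coordinates.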
arXiv Detail & Related papers (2022-01-14T02:49:03Z)
- T-SVDNet: Exploring High-Order Prototypical Correlations for Multi-Source Domain Adaptation [41.356774580308986]
We propose a novel approach named T-SVDNet to address the task of Multi-source Domain Adaptation.
High-order correlations among multiple domains and categories are fully explored so as to better bridge the domain gap.
To avoid negative transfer brought by noisy source data, we propose a novel uncertainty-aware weighting strategy.
arXiv Detail & Related papers (2021-07-30T06:33:05Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
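The exact ILA-DA loss is not reproduced here; the sketch below shows a generic InfoNCE-style multi-sample contrastive loss of the kind the summary describes, where each source feature is pulled toward its assigned similar target sample and pushed away from the rest. The affinity assignment `pos` is a hypothetical input standing in for the paper's similar/dissimilar extraction step.

```python
import numpy as np

def contrastive_alignment_loss(f_src, f_tgt, pos, tau=0.1):
    """InfoNCE-style multi-sample contrastive loss: for source feature i,
    treat target feature pos[i] as the positive and all other targets as
    negatives, on cosine similarities scaled by temperature tau."""
    f_src = f_src / np.linalg.norm(f_src, axis=1, keepdims=True)
    f_tgt = f_tgt / np.linalg.norm(f_tgt, axis=1, keepdims=True)
    logits = f_src @ f_tgt.T / tau                       # (n_src, n_tgt)
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_p[np.arange(len(f_src)), pos].mean()

rng = np.random.default_rng(0)
f_tgt = rng.normal(size=(8, 4))
pos = np.arange(8)                                       # hypothetical affinities

# Well-aligned source features (small perturbation of their positives)
# should incur a much lower loss than random, unaligned features.
loss_aligned = contrastive_alignment_loss(f_tgt + 0.01 * rng.normal(size=(8, 4)), f_tgt, pos)
loss_random = contrastive_alignment_loss(rng.normal(size=(8, 4)), f_tgt, pos)
print(loss_aligned < loss_random)  # aligned features give lower loss
```

Minimizing such a loss drives source and target features of affine pairs together, which is the domain-alignment effect the entry claims.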
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Re-energizing Domain Discriminator with Sample Relabeling for Adversarial Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align the features to reduce domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during the training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z)
- FixBi: Bridging Domain Spaces for Unsupervised Domain Adaptation [26.929772844572213]
We introduce a fixed ratio-based mixup to augment multiple intermediate domains between the source and target domain.
We train the source-dominant model and the target-dominant model that have complementary characteristics.
Through our proposed methods, the models gradually transfer domain knowledge from the source to the target domain.
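The fixed-ratio mixup itself is simple to sketch. Below is a NumPy toy version in which a pair of complementary ratios builds a source-dominant and a target-dominant intermediate domain, as the summary describes; the ratio values and toy tensors are illustrative assumptions.

```python
import numpy as np

def fixed_ratio_mixup(x_src, x_tgt, lam):
    """Fixed-ratio mixup: a synthetic intermediate domain at mixing ratio lam.
    Using a complementary pair (lam, 1 - lam) yields a source-dominant and a
    target-dominant training stream."""
    return lam * x_src + (1.0 - lam) * x_tgt

rng = np.random.default_rng(0)
x_src = rng.normal(0.0, 1.0, size=(4, 3, 8, 8))   # toy source image batch
x_tgt = rng.normal(1.0, 1.0, size=(4, 3, 8, 8))   # toy target image batch

x_src_dom = fixed_ratio_mixup(x_src, x_tgt, lam=0.7)  # source-dominant domain
x_tgt_dom = fixed_ratio_mixup(x_src, x_tgt, lam=0.3)  # target-dominant domain

# The two intermediate domains sit between the source and target statistics,
# giving the models a gradual path from source to target.
print(x_src.mean() < x_src_dom.mean() < x_tgt_dom.mean() < x_tgt.mean())
```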
arXiv Detail & Related papers (2020-11-18T11:58:19Z)
- Multi-Source Domain Adaptation for Text Classification via DistanceNet-Bandits [101.68525259222164]
We present a study of various distance-based measures in the context of NLP tasks, that characterize the dissimilarity between domains based on sample estimates.
We develop a DistanceNet model which uses these distance measures as an additional loss function to be minimized jointly with the task's loss function.
We extend this model to a novel DistanceNet-Bandit model, which employs a multi-armed bandit controller to dynamically switch between multiple source domains.
arXiv Detail & Related papers (2020-01-13T15:53:41Z)
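The DistanceNet entry studies several distance measures as auxiliary losses; one widely used sample-estimate example is the (squared) Maximum Mean Discrepancy. The NumPy sketch below is illustrative — the kernel, bandwidth, and the `task_loss + alpha * distance` combination are generic assumptions, not the paper's specific configuration.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel: a sample-based
    domain-dissimilarity measure that can be minimized jointly with the
    task loss, e.g. total = task_loss + alpha * rbf_mmd2(src_feats, tgt_feats)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(50, 3))        # source-domain features
Y_near = rng.normal(0.2, 1.0, size=(50, 3))   # mildly shifted domain
Y_far = rng.normal(3.0, 1.0, size=(50, 3))    # strongly shifted domain

print(rbf_mmd2(X, Y_near) < rbf_mmd2(X, Y_far))  # closer domains, smaller distance
```

A bandit controller, as in the DistanceNet-Bandit extension, would then pick among source domains using such distance estimates as part of its reward signal.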
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.