E-ADDA: Unsupervised Adversarial Domain Adaptation Enhanced by a New
Mahalanobis Distance Loss for Smart Computing
- URL: http://arxiv.org/abs/2201.10001v5
- Date: Fri, 21 Apr 2023 15:53:46 GMT
- Title: E-ADDA: Unsupervised Adversarial Domain Adaptation Enhanced by a New
Mahalanobis Distance Loss for Smart Computing
- Authors: Ye Gao, Brian Baucom, Karen Rose, Kristina Gordon, Hongning Wang, John
Stankovic
- Abstract summary: In smart computing, the labels of training samples for a specific task are not always abundant.
We propose a novel UDA algorithm, E-ADDA, which uses both a novel variation of the Mahalanobis distance loss and an out-of-distribution detection subroutine.
In the acoustic modality, E-ADDA outperforms several state-of-the-art UDA algorithms by up to 29.8%, measured in the F1 score.
In the computer vision modality, the evaluation results suggest that we achieve new state-of-the-art performance on popular UDA benchmarks such as Office-31 and Office-Home.
- Score: 25.510639595356597
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In smart computing, the labels of training samples for a specific task are
not always abundant. However, the labels of samples in a relevant but different
dataset are available. As a result, researchers have relied on unsupervised
domain adaptation (UDA) to leverage the labels in a dataset (the source domain) to
perform better classification in a different, unlabeled dataset (the target
domain). Existing non-generative adversarial solutions for UDA aim at achieving
domain confusion through adversarial training. The ideal scenario is that
perfect domain confusion is achieved, but this is not guaranteed in practice. To
further enforce domain confusion on top of the adversarial training, we propose
a novel UDA algorithm, E-ADDA, which uses both a novel variation of
the Mahalanobis distance loss and an out-of-distribution (OOD) detection subroutine.
The Mahalanobis distance loss minimizes the distribution-wise distance between
the encoded target samples and the distribution of the source domain, thus
enforcing additional domain confusion on top of adversarial training. Then, the
OOD subroutine further eliminates samples on which the domain confusion is
unsuccessful. We have performed extensive and comprehensive evaluations of
E-ADDA in the acoustic and computer vision modalities. In the acoustic
modality, E-ADDA outperforms several state-of-the-art UDA algorithms by up to
29.8%, measured in the F1 score. In the computer vision modality, the
evaluation results suggest that we achieve new state-of-the-art performance on
popular UDA benchmarks such as Office-31 and Office-Home, outperforming the
second best-performing algorithms by up to 17.9%.
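The abstract describes the Mahalanobis distance loss and the OOD subroutine only at a high level. The sketch below illustrates one way such a loss and filter could look in PyTorch; the encoder interface, the covariance regularization `eps`, and the distance `threshold` are illustrative assumptions on our part, not the paper's exact formulation.

```python
# Minimal sketch (not the authors' code) of a Mahalanobis distance loss and an
# OOD-style filtering step, assuming an encoder that maps samples to
# d-dimensional feature vectors.
import torch

def fit_source_gaussian(source_feats: torch.Tensor, eps: float = 1e-4):
    """Estimate mean and regularized inverse covariance of encoded source features.

    source_feats: (n, d) encoder outputs for labeled source samples.
    """
    mu = source_feats.mean(dim=0)                               # (d,)
    centered = source_feats - mu                                # (n, d)
    cov = centered.T @ centered / (source_feats.shape[0] - 1)   # (d, d)
    cov = cov + eps * torch.eye(cov.shape[0], device=cov.device)  # keep invertible
    return mu, torch.linalg.inv(cov)

def mahalanobis_loss(target_feats, mu, cov_inv):
    """Mean Mahalanobis distance d_M(z) = sqrt((z - mu)^T Sigma^{-1} (z - mu))
    between encoded target samples and the source feature distribution.
    Minimizing it pulls target encodings toward the source distribution,
    enforcing additional domain confusion on top of adversarial training."""
    diff = target_feats - mu                                    # (m, d)
    sq = torch.einsum("nd,de,ne->n", diff, cov_inv, diff)       # (m,)
    return sq.clamp_min(0.0).sqrt().mean()

def ood_filter(target_feats, mu, cov_inv, threshold: float):
    """Discard target samples whose distance to the source distribution exceeds
    a threshold, i.e. samples on which domain confusion was unsuccessful."""
    diff = target_feats - mu
    dist = torch.einsum("nd,de,ne->n", diff, cov_inv, diff).clamp_min(0.0).sqrt()
    keep = dist < threshold
    return target_feats[keep], keep

# Toy usage with random 8-dimensional features in place of real encoder outputs.
src = torch.randn(100, 8)
tgt = torch.randn(20, 8) + 1.0
mu, cov_inv = fit_source_gaussian(src)
loss = mahalanobis_loss(tgt, mu, cov_inv)  # differentiable w.r.t. the encoder
```

In an ADDA-style pipeline, a term like `mahalanobis_loss` would be added to the adversarial objective when training the target encoder, and a filter like `ood_filter` would eliminate target samples on which confusion failed; the loss weight and threshold are hyperparameters we have assumed.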
Related papers
- Improving Domain Adaptation Through Class Aware Frequency Transformation [15.70058524548143]
Most of the Unsupervised Domain Adaptation (UDA) algorithms focus on reducing the global domain shift between labelled source and unlabelled target domains.
We propose a novel approach based on the traditional image processing technique Class Aware Frequency Transformation (CAFT).
CAFT utilizes pseudo-label-based, class-consistent low-frequency swapping to improve the overall performance of existing UDA algorithms.
arXiv Detail & Related papers (2024-07-28T18:16:41Z)
- Make the U in UDA Matter: Invariant Consistency Learning for Unsupervised Domain Adaptation [86.61336696914447]
We propose to make the U in Unsupervised DA matter by giving equal status to the two domains.
We dub our approach "Invariant CONsistency learning" (ICON).
ICON achieves the state-of-the-art performance on the classic UDA benchmarks: Office-Home and VisDA-2017, and outperforms all the conventional methods on the challenging WILDS 2.0 benchmark.
arXiv Detail & Related papers (2023-09-22T09:43:32Z)
- AVATAR: Adversarial self-superVised domain Adaptation network for TARget domain [11.764601181046496]
This paper presents an unsupervised domain adaptation (UDA) method for predicting unlabeled target domain data.
We propose the Adversarial self-superVised domain Adaptation network for the TARget domain (AVATAR) algorithm.
Our proposed model significantly outperforms state-of-the-art methods on three UDA benchmarks.
arXiv Detail & Related papers (2023-04-28T20:31:56Z)
- Learning Feature Decomposition for Domain Adaptive Monocular Depth Estimation [51.15061013818216]
Supervised approaches have led to great success with the advance of deep learning, but they rely on large quantities of ground-truth depth annotations.
Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, so as to relax the constraint of supervised learning.
We propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components.
arXiv Detail & Related papers (2022-07-30T08:05:35Z)
- UMAD: Universal Model Adaptation under Domain and Category Shift [138.12678159620248]
The Universal Model ADaptation (UMAD) framework handles both open-set and open-partial-set UDA scenarios without access to source data.
We develop an informative consistency score to help distinguish unknown samples from known samples.
Experiments on open-set and open-partial-set UDA scenarios demonstrate that UMAD exhibits comparable, if not superior, performance to state-of-the-art data-dependent methods.
arXiv Detail & Related papers (2021-12-16T01:22:59Z)
- Unsupervised domain adaptation with non-stochastic missing data [0.6608945629704323]
We consider unsupervised domain adaptation (UDA) for classification problems in the presence of missing data in the unlabelled target domain.
Imputation is performed in a domain-invariant latent space and leverages indirect supervision from a complete source domain.
We show the benefits of jointly performing adaptation, classification and imputation on datasets.
arXiv Detail & Related papers (2021-09-16T06:37:07Z)
- Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning [66.80663779176979]
Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels.
We show the insufficiency of minimizing source domain error and marginal distribution mismatch for a guaranteed reduction in the target domain error.
Motivated from this, we propose novel data poisoning attacks to fool UDA methods into learning representations that produce large target domain errors.
arXiv Detail & Related papers (2021-07-08T15:51:14Z)
- CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain invariant predictive models.
We propose Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on all the above datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z)
- UDALM: Unsupervised Domain Adaptation through Language Modeling [79.73916345178415]
We introduce UDALM, a fine-tuning procedure, using a mixed classification and Masked Language Model loss.
Our experiments show that the performance of models trained with the mixed loss scales with the amount of available target data, and that the mixed loss can be effectively used as a stopping criterion.
Our method is evaluated on twelve domain pairs of the Amazon Reviews Sentiment dataset, yielding 91.74% accuracy, which is a 1.11% absolute improvement over the state-of-the-art.
arXiv Detail & Related papers (2021-04-14T19:05:01Z)
- Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z)
- Partially-Shared Variational Auto-encoders for Unsupervised Domain Adaptation with Target Shift [11.873435088539459]
This paper proposes a novel approach for unsupervised domain adaptation (UDA) with target shift.
The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching.
PS-VAEs inter-convert domain of each sample by a CycleGAN-based architecture while preserving its label-related content.
arXiv Detail & Related papers (2020-01-22T06:41:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.