Data Augmentation with norm-VAE for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2012.00848v1
- Date: Tue, 1 Dec 2020 21:41:08 GMT
- Title: Data Augmentation with norm-VAE for Unsupervised Domain Adaptation
- Authors: Qian Wang, Fanlin Meng, Toby P. Breckon
- Abstract summary: We learn a unified classifier for both domains within a high-dimensional homogeneous feature space without explicit domain adaptation.
We employ the effective Selective Pseudo-Labelling (SPL) techniques to take advantage of the unlabelled samples in the target domain.
We propose a novel generative model norm-VAE to generate synthetic features for the target domain as a data augmentation strategy.
- Score: 26.889303784575805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We address the Unsupervised Domain Adaptation (UDA) problem in image
classification from a new perspective. In contrast to most existing works which
either align the data distributions or learn domain-invariant features, we
directly learn a unified classifier for both domains within a high-dimensional
homogeneous feature space without explicit domain adaptation. To this end, we
employ the effective Selective Pseudo-Labelling (SPL) techniques to take
advantage of the unlabelled samples in the target domain. Surprisingly, data
distribution discrepancy across the source and target domains can be well
handled by a computationally simple classifier (e.g., a shallow Multi-Layer
Perceptron) trained in the original feature space. In addition, we propose a novel
generative model, norm-VAE, to generate synthetic features for the target domain
as a data augmentation strategy to enhance classifier training. Experimental
results on several benchmark datasets demonstrate that the pseudo-labelling
strategy alone achieves performance comparable to many state-of-the-art methods,
while the use of norm-VAE for feature augmentation further improves performance
in most cases. As a result, our proposed methods (i.e., naive-SPL and
norm-VAE-SPL) achieve new state-of-the-art performance with average accuracies
of 93.4% and 90.4% on the Office-Caltech and ImageCLEF-DA datasets, and
comparable performance on the Digits, Office31 and Office-Home datasets with
average accuracies of 97.2%, 87.6% and 67.9% respectively.
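The abstract describes two components: selective pseudo-labelling (SPL) with a computationally simple classifier, and norm-VAE-based feature augmentation. The following is a minimal sketch of the first component only, written against pre-extracted feature vectors; the round count, per-class selection schedule, MLP size and the helper name `spl_train` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of selective pseudo-labelling (SPL) with a shallow MLP,
# following the high-level description in the abstract. The selection
# schedule, network size and round count are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def spl_train(Xs, ys, Xt, n_rounds=5, hidden=(256,), seed=0):
    """Iteratively train a shallow MLP on labelled source features plus a
    growing set of confidently pseudo-labelled target features."""
    clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500,
                        random_state=seed).fit(Xs, ys)
    for r in range(1, n_rounds + 1):
        proba = clf.predict_proba(Xt)                 # class probabilities on target
        pseudo = clf.classes_[proba.argmax(axis=1)]   # pseudo-labels
        conf = proba.max(axis=1)                      # confidence of each pseudo-label
        frac = r / n_rounds                           # select a larger fraction each round
        keep = np.zeros(len(Xt), dtype=bool)
        for c in np.unique(pseudo):                   # class-wise selection keeps classes balanced
            idx = np.where(pseudo == c)[0]
            k = max(1, int(frac * len(idx)))
            keep[idx[np.argsort(-conf[idx])[:k]]] = True
        Xaug = np.vstack([Xs, Xt[keep]])              # source + selected pseudo-labelled target
        yaug = np.concatenate([ys, pseudo[keep]])
        # Synthetic target-domain features (e.g. from a norm-VAE-style generator)
        # could be appended here before retraining, as in norm-VAE-SPL.
        clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=500,
                            random_state=seed).fit(Xaug, yaug)
    return clf
```

For the norm-VAE-SPL variant, synthetic target-domain features are added at the retraining step. The abstract gives no architectural details for norm-VAE, so the sketch below uses a plain conditional VAE over feature vectors purely to illustrate how such an augmentation module could plug in; the class name, layer sizes and `synthesise` helper are assumptions.

```python
# Illustrative conditional VAE over feature vectors, standing in for the
# paper's norm-VAE (whose exact formulation is not given in this abstract).
# Trained with the usual reconstruction + KL objective, it can synthesise
# extra features for target-domain classes.
import torch
import torch.nn as nn

class FeatureCVAE(nn.Module):
    def __init__(self, feat_dim, n_classes, latent_dim=64, hidden=256):
        super().__init__()
        self.n_classes = n_classes
        self.enc = nn.Sequential(nn.Linear(feat_dim + n_classes, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim + n_classes, hidden), nn.ReLU(),
                                 nn.Linear(hidden, feat_dim))

    def forward(self, x, y_onehot):
        h = self.enc(torch.cat([x, y_onehot], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
        return self.dec(torch.cat([z, y_onehot], dim=1)), mu, logvar

    @torch.no_grad()
    def synthesise(self, labels):
        """Sample synthetic feature vectors conditioned on (pseudo-)labels."""
        y = torch.eye(self.n_classes)[labels]
        z = torch.randn(len(labels), self.mu.out_features)
        return self.dec(torch.cat([z, y], dim=1))
```

Features returned by `synthesise` would be concatenated with the augmented training set in the SPL loop above before the classifier is retrained.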
Related papers
- Style Adaptation for Domain-adaptive Semantic Segmentation [2.1365683052370046]
Domain discrepancy causes a significant drop in performance when general network models trained on source-domain data are applied to the target domain.
We introduce a straightforward approach to mitigating the domain discrepancy that requires no additional parameter computation and integrates seamlessly with self-training-based UDA methods.
Our proposed method attains a UDA performance of 76.93 mIoU on the GTA->Cityscapes benchmark, an improvement of +1.03 percentage points over the previous state-of-the-art result.
arXiv Detail & Related papers (2024-04-25T02:51:55Z)
- MADAv2: Advanced Multi-Anchor Based Active Domain Adaptation Segmentation [98.09845149258972]
We introduce active sample selection to assist domain adaptation regarding the semantic segmentation task.
With only a small amount of manual annotation effort for these samples, the distortion of the target-domain distribution can be effectively alleviated.
A powerful semi-supervised domain adaptation strategy is proposed to alleviate the long-tail distribution problem.
arXiv Detail & Related papers (2023-01-18T07:55:22Z)
- One-Shot Domain Adaptive and Generalizable Semantic Segmentation with Class-Aware Cross-Domain Transformers [96.51828911883456]
Unsupervised sim-to-real domain adaptation (UDA) for semantic segmentation aims to improve the real-world test performance of a model trained on simulated data.
Traditional UDA often assumes that there are abundant unlabeled real-world data samples available during training for the adaptation.
We explore the one-shot unsupervised sim-to-real domain adaptation (OSUDA) and generalization problem, where only one real-world data sample is available.
arXiv Detail & Related papers (2022-12-14T15:54:15Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tackles the domain adaptation problem without access to source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Effective Label Propagation for Discriminative Semi-Supervised Domain Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different data distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.