Pre-Training Transformers for Domain Adaptation
- URL: http://arxiv.org/abs/2112.09965v1
- Date: Sat, 18 Dec 2021 16:52:48 GMT
- Title: Pre-Training Transformers for Domain Adaptation
- Authors: Burhan Ul Tayyab and Nicholas Chua
- Abstract summary: In this paper, we utilize BeiT and demonstrate its capability of capturing key attributes from source datasets and applying them to target datasets in a semi-supervised manner.
Our method outperforms current state-of-the-art (SoTA) techniques and achieved 1st place on the VisDA Domain Adaptation Challenge with an ACC of 56.29% and an AUROC of 69.79%.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Visual Domain Adaptation Challenge 2021 called for unsupervised domain
adaptation methods that could improve the performance of models by transferring
the knowledge obtained from source datasets to out-of-distribution target
datasets. In this paper, we utilize BeiT [1] and demonstrate its capability of
capturing key attributes from source datasets and applying them to target
datasets in a semi-supervised manner. Our method outperforms current
state-of-the-art (SoTA) techniques and achieved 1st place on the
VisDA Domain Adaptation Challenge with an ACC of 56.29% and an AUROC of 69.79%.
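The semi-supervised transfer described in the abstract — a source-trained model assigning labels to unlabeled target data — can be illustrated with a minimal confidence-thresholded pseudo-labelling sketch. The threshold, function name, and toy probabilities below are illustrative assumptions, not the authors' implementation.

```python
# Confidence-thresholded pseudo-labelling: keep only the target
# predictions the source-trained classifier is sure about.

def pseudo_label(probs, threshold=0.9):
    """probs: list of per-sample class-probability lists produced by a
    source-trained classifier on target data. Returns (index, label)
    pairs for samples whose top probability clears the threshold."""
    selected = []
    for i, p in enumerate(probs):
        top = max(p)
        if top >= threshold:
            selected.append((i, p.index(top)))
    return selected

# Toy target-domain predictions over 3 classes.
target_probs = [
    [0.95, 0.03, 0.02],   # confident -> kept as pseudo-label 0
    [0.40, 0.35, 0.25],   # uncertain -> discarded
    [0.05, 0.02, 0.93],   # confident -> kept as pseudo-label 2
]
print(pseudo_label(target_probs))  # -> [(0, 0), (2, 2)]
```

The selected pairs would then be mixed into training as if they were labelled data, which is the sense in which the adaptation is semi-supervised.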
Related papers
- Style Adaptation for Domain-adaptive Semantic Segmentation [2.1365683052370046]
Domain discrepancy significantly degrades the performance of network models that are trained on source-domain data and then applied to the target domain.
We introduce a straightforward approach to mitigate the domain discrepancy, which requires no additional parameter calculations and integrates seamlessly with self-training-based UDA methods.
Our proposed method attains a UDA performance of 76.93 mIoU on the GTA->Cityscapes benchmark, an improvement of +1.03 percentage points over the previous state-of-the-art.
arXiv Detail & Related papers (2024-04-25T02:51:55Z) - Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify these classes during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z) - MADAv2: Advanced Multi-Anchor Based Active Domain Adaptation Segmentation [98.09845149258972]
We introduce active sample selection to assist domain adaptation regarding the semantic segmentation task.
With only a little workload to manually annotate these samples, the distortion of the target-domain distribution can be effectively alleviated.
A powerful semi-supervised domain adaptation strategy is proposed to alleviate the long-tail distribution problem.
arXiv Detail & Related papers (2023-01-18T07:55:22Z) - Source Domain Subset Sampling for Semi-Supervised Domain Adaptation in Semantic Segmentation [8.588352155493453]
We propose source domain subset sampling (SDSS) as a new perspective of semi-supervised domain adaptation.
Our key assumption is that the entire source domain data may contain samples that are unhelpful for the adaptation.
The proposed method effectively subsamples full source data to generate a small-scale meaningful subset.
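The subset-sampling idea above — discarding source samples that are unhelpful for adaptation — might look roughly like keeping the source features nearest the target-domain centroid. The distance criterion, features, and `keep` parameter here are hypothetical stand-ins, not the paper's SDSS procedure.

```python
# Source subset sampling sketch: retain the source samples whose
# features lie closest to the target feature centroid.

def centroid(feats):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(feats)
    return [sum(f[d] for f in feats) / n for d in range(len(feats[0]))]

def subset_by_target_proximity(source_feats, target_feats, keep=2):
    """Return the indices of the `keep` source samples nearest
    (squared Euclidean distance) to the target centroid."""
    c = centroid(target_feats)
    dist = lambda f: sum((a - b) ** 2 for a, b in zip(f, c))
    order = sorted(range(len(source_feats)),
                   key=lambda i: dist(source_feats[i]))
    return sorted(order[:keep])

# Toy 2-D features: source samples 0 and 2 sit near the target cluster.
source = [[0.0, 0.0], [5.0, 5.0], [0.9, 1.1], [4.0, 0.0]]
target = [[1.0, 1.0], [1.2, 0.8]]
print(subset_by_target_proximity(source, target))  # -> [0, 2]
```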
arXiv Detail & Related papers (2022-04-30T17:29:56Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model to the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z) - VisDA-2021 Competition: Universal Domain Adaptation to Improve Performance on Out-of-Distribution Data [64.91713686654805]
The Visual Domain Adaptation (VisDA) 2021 competition tests models' ability to adapt to novel test distributions.
We will evaluate adaptation to novel viewpoints, backgrounds, modalities and degradation in quality.
Performance will be measured using a rigorous protocol, comparing to state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-07-23T03:21:51Z) - Transformer-Based Source-Free Domain Adaptation [134.67078085569017]
We study the task of source-free domain adaptation (SFDA), where the source data are not available during target adaptation.
We propose a generic and effective framework based on Transformer, named TransDA, for learning a generalized model for SFDA.
arXiv Detail & Related papers (2021-05-28T23:06:26Z) - Source-Free Domain Adaptation for Semantic Segmentation [11.722728148523366]
Unsupervised Domain Adaptation (UDA) can tackle the challenge that convolutional neural network-based approaches for semantic segmentation heavily rely on the pixel-level annotated data.
We propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target domain dataset are available for adaptation.
arXiv Detail & Related papers (2021-03-30T14:14:29Z) - Data Augmentation with norm-VAE for Unsupervised Domain Adaptation [26.889303784575805]
We learn a unified classifier for both domains within a high-dimensional homogeneous feature space without explicit domain adaptation.
We employ the effective Selective Pseudo-Labelling (SPL) techniques to take advantage of the unlabelled samples in the target domain.
We propose a novel generative model norm-VAE to generate synthetic features for the target domain as a data augmentation strategy.
arXiv Detail & Related papers (2020-12-01T21:41:08Z)
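Feature-level augmentation of the kind norm-VAE performs can be caricatured as perturbing per-class mean features with noise to create synthetic target-domain samples. Unlike this fixed Gaussian sketch, the actual norm-VAE learns the generative model; every name and value below is illustrative.

```python
import random

# Feature augmentation sketch: synthesize extra labelled features by
# adding Gaussian noise to class-mean feature vectors.

def synthesize(class_means, per_class=3, sigma=0.1, seed=0):
    """class_means: {label: mean feature vector}. Returns a list of
    (feature, label) pairs with `per_class` noisy samples per class."""
    rng = random.Random(seed)  # seeded for reproducibility
    fake = []
    for label, mean in class_means.items():
        for _ in range(per_class):
            feat = [m + rng.gauss(0.0, sigma) for m in mean]
            fake.append((feat, label))
    return fake

# Two toy classes in a 2-D feature space.
means = {0: [1.0, 0.0], 1: [0.0, 1.0]}
augmented = synthesize(means)
print(len(augmented))  # -> 6
```

The synthetic pairs would be appended to the training set as an augmentation strategy; a learned generator replaces the hand-fixed Gaussian in the actual method.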
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.