Safe Self-Refinement for Transformer-based Domain Adaptation
- URL: http://arxiv.org/abs/2204.07683v1
- Date: Sat, 16 Apr 2022 00:15:46 GMT
- Title: Safe Self-Refinement for Transformer-based Domain Adaptation
- Authors: Tao Sun, Cheng Lu, Tianshuo Zhang, Haibin Ling
- Abstract summary: Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain.
It is a challenging problem especially when a large domain gap lies between the source and target domains.
We propose a novel solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation), which brings improvements from two aspects.
- Score: 73.8480218879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source
domain to solve tasks on a related unlabeled target domain. It is a challenging
problem especially when a large domain gap lies between the source and target
domains. In this paper we propose a novel solution named SSRT (Safe
Self-Refinement for Transformer-based domain adaptation), which brings
improvements from two aspects. First, encouraged by the success of vision
transformers in various vision tasks, we arm SSRT with a transformer backbone.
We find that the combination of a vision transformer with simple adversarial
adaptation surpasses the best reported Convolutional Neural Network (CNN)-based
results on the challenging DomainNet benchmark, showing its strong transferable
feature representation. Second, to reduce the risk of model collapse and
improve the effectiveness of knowledge transfer between domains with large
gaps, we propose a Safe Self-Refinement strategy. Specifically, SSRT utilizes
predictions of perturbed target domain data to refine the model. Since the
model capacity of the vision transformer is large and predictions in such
challenging tasks can be noisy, a safe training mechanism is designed to
adaptively adjust the learning configuration. Extensive evaluations are
conducted on several widely tested UDA benchmarks, and SSRT consistently
achieves the best performance, including 85.43% on Office-Home, 88.76% on
VisDA-2017, and 45.2% on DomainNet.
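
The abstract describes the refinement step only at a high level. Below is a minimal PyTorch sketch of the idea as stated there: predictions on clean target data supervise predictions on perturbed copies, with a simple confidence gate standing in for the safe training mechanism. The function name, the Gaussian input perturbation, and the threshold are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def self_refinement_loss(model, target_x, noise_std=0.1, conf_threshold=0.5):
    # Clean-pass prediction acts as a soft pseudo-label (no gradient).
    with torch.no_grad():
        p_clean = F.softmax(model(target_x), dim=-1)
    # A simple Gaussian input perturbation stands in for the paper's
    # perturbation of target domain data (assumption).
    x_pert = target_x + noise_std * torch.randn_like(target_x)
    log_p_pert = F.log_softmax(model(x_pert), dim=-1)
    # KL(p_clean || p_pert): refine the model so predictions on perturbed
    # data agree with predictions on clean data.
    kl = F.kl_div(log_p_pert, p_clean, reduction="none").sum(dim=-1)
    # "Safe" gate (assumption): keep only confidently predicted samples,
    # since noisy pseudo-labels can collapse a high-capacity ViT backbone.
    mask = (p_clean.max(dim=-1).values > conf_threshold).float()
    return (kl * mask).mean()
```

In training, such a term would be added to the source classification loss and the adversarial domain loss; the paper's safe training mechanism goes further and adaptively adjusts the learning configuration rather than just masking samples.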
Related papers
- Contrastive Adversarial Training for Unsupervised Domain Adaptation [2.432037584128226]
Domain adversarial training has been successfully adopted for various domain adaptation tasks.
Large models make adversarial training easily biased towards the source domain and hardly adapted to the target domain.
We propose a contrastive adversarial training (CAT) approach that leverages labeled source domain samples to reinforce and regulate feature generation for the target domain; a hedged sketch of one plausible form of such a loss follows this entry.
arXiv Detail & Related papers (2024-07-17T17:59:21Z) - Cross-Domain Few-Shot Learning via Adaptive Transformer Networks [16.289485655725013]
- Cross-Domain Few-Shot Learning via Adaptive Transformer Networks [16.289485655725013]
This paper proposes an adaptive transformer network (ADAPTER) for cross-domain few-shot learning.
ADAPTER is built upon the idea of bidirectional cross-attention to learn transferable features between the two domains; a minimal sketch of this block follows this entry.
arXiv Detail & Related papers (2024-01-25T07:05:42Z) - Strong-Weak Integrated Semi-supervision for Unsupervised Single and
- Strong-Weak Integrated Semi-supervision for Unsupervised Single and Multi Target Domain Adaptation [6.472434306724611]
Unsupervised domain adaptation (UDA) focuses on transferring knowledge learned in the labeled source domain to the unlabeled target domain.
In this paper, we propose a novel strong-weak integrated semi-supervision (SWISS) learning strategy for image classification.
arXiv Detail & Related papers (2023-09-12T19:08:54Z) - Adversarial Bi-Regressor Network for Domain Adaptive Regression [52.5168835502987]
It is essential to learn a cross-domain regressor to mitigate the domain shift.
This paper proposes a novel method, Adversarial Bi-Regressor Network (ABRNet), to seek a more effective cross-domain regression model.
arXiv Detail & Related papers (2022-09-20T18:38:28Z) - Domain Adaptation for Object Detection using SE Adaptors and Center Loss [0.0]
We introduce an unsupervised domain adaptation method built on Faster R-CNN to prevent drops in performance due to domain shift.
We also introduce a family of adaptation layers, called SE Adaptors, that leverage the squeeze-and-excitation mechanism to improve domain attention.
Finally, we incorporate a center loss in the instance- and image-level representations to reduce intra-class variance; sketches of both components follow this entry.
arXiv Detail & Related papers (2022-05-25T17:18:31Z) - Dispensed Transformer Network for Unsupervised Domain Adaptation [21.256375606219073]
- Dispensed Transformer Network for Unsupervised Domain Adaptation [21.256375606219073]
A novel unsupervised domain adaptation (UDA) method named dispensed Transformer network (DTNet) is introduced in this paper.
Our proposed network achieves the best performance in comparison with several state-of-the-art techniques.
arXiv Detail & Related papers (2021-10-28T08:27:44Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation [54.61786380919243]
Unsupervised domain adaptation (UDA) aims to transfer the knowledge learnt from a labeled source domain to an unlabeled target domain.
Previous work is mainly built upon convolutional neural networks (CNNs) to learn domain-invariant representations.
Despite the recent surge in applying the Vision Transformer (ViT) to vision tasks, the capability of ViT to adapt cross-domain knowledge remains unexplored in the literature.
arXiv Detail & Related papers (2021-08-12T22:37:43Z) - Transformer-Based Source-Free Domain Adaptation [134.67078085569017]
We study the task of source-free domain adaptation (SFDA), where the source data are not available during target adaptation.
We propose a generic and effective framework based on Transformer, named TransDA, for learning a generalized model for SFDA.
arXiv Detail & Related papers (2021-05-28T23:06:26Z) - Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)