Feature Fusion Transferability Aware Transformer for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2411.07794v1
- Date: Sun, 10 Nov 2024 22:23:12 GMT
- Title: Feature Fusion Transferability Aware Transformer for Unsupervised Domain Adaptation
- Authors: Xiaowei Yu, Zhe Huang, Zao Zhang,
- Abstract summary: Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from labeled source domains to improve performance on unlabeled target domains.
Recent research has shown promise in applying Vision Transformers (ViTs) to this task.
We propose a novel Feature Fusion Transferability Aware Transformer (FFTAT) to enhance ViT performance in UDA tasks.
- Score: 1.9035011984138845
- License:
- Abstract: Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from labeled source domains to improve performance on the unlabeled target domains. While Convolutional Neural Networks (CNNs) have been dominant in previous UDA methods, recent research has shown promise in applying Vision Transformers (ViTs) to this task. In this study, we propose a novel Feature Fusion Transferability Aware Transformer (FFTAT) to enhance ViT performance in UDA tasks. Our method introduces two key innovations: First, we introduce a patch discriminator to evaluate the transferability of patches, generating a transferability matrix. We integrate this matrix into self-attention, directing the model to focus on transferable patches. Second, we propose a feature fusion technique to fuse embeddings in the latent space, enabling each embedding to incorporate information from all others, thereby improving generalization. These two components work in synergy to enhance feature representation learning. Extensive experiments on widely used benchmarks demonstrate that our method significantly improves UDA performance, achieving state-of-the-art (SOTA) results.
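The abstract names two mechanisms: a patch discriminator whose per-patch transferability scores bias self-attention, and a feature-fusion step that mixes embeddings in the latent space. Below is a minimal PyTorch sketch of how such a pipeline could look; the class names, the mapping from discriminator output to transferability scores, and the mean-based fusion rule are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): a patch discriminator scores how
# domain-transferable each patch token is, the resulting scores bias the
# self-attention logits, and a simple fusion step mixes embeddings across the batch.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Predicts, per patch token, the probability of belonging to the source domain."""
    def __init__(self, dim):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, dim // 2), nn.GELU(), nn.Linear(dim // 2, 1))

    def forward(self, tokens):                      # tokens: (B, N, D)
        return torch.sigmoid(self.head(tokens))     # (B, N, 1) domain probabilities

def transferability_scores(domain_prob):
    # Assumption: patches the discriminator cannot classify (prob ~ 0.5) are treated
    # as domain-invariant, i.e. highly transferable; confident patches get low weight.
    return 1.0 - 2.0 * (domain_prob - 0.5).abs()    # in [0, 1], peak at prob = 0.5

class TransferabilityAwareAttention(nn.Module):
    """Multi-head self-attention whose logits are biased toward transferable key patches."""
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads, self.scale = num_heads, (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, transfer):                 # x: (B, N, D), transfer: (B, N, 1)
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, D // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)        # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        # Bias attention toward transferable key patches (broadcast over heads and queries).
        attn = attn + torch.log(transfer.transpose(1, 2).unsqueeze(1) + 1e-6)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, D)
        return self.proj(out)

def feature_fusion(cls_embeddings, alpha=0.5):
    # One simple fusion choice: mix each sample's embedding with the batch mean so
    # every embedding incorporates information from all others.
    batch_mean = cls_embeddings.mean(dim=0, keepdim=True)
    return alpha * cls_embeddings + (1 - alpha) * batch_mean

# Usage (illustrative):
# probs = PatchDiscriminator(dim)(tokens)                                       # (B, N, 1)
# attn_out = TransferabilityAwareAttention(dim)(tokens, transferability_scores(probs))
# fused = feature_fusion(attn_out[:, 0])                                        # fuse [CLS] embeddings
```

In practice the patch discriminator would be trained adversarially against the encoder so that domain-ambiguous patches receive the highest weights; the abstract does not specify the exact attention weighting or fusion rule, so the versions above are placeholders.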
Related papers
- Vision Transformer-based Adversarial Domain Adaptation [5.611768906855499]
Vision transformer (ViT) has attracted tremendous attention since its emergence and has been widely used in various computer vision tasks.
In this paper, we fill this gap by employing the ViT as the feature extractor in adversarial domain adaptation.
We empirically demonstrate that ViT can be a plug-and-play component in adversarial domain adaptation.
arXiv Detail & Related papers (2024-04-24T11:41:28Z) - Improving Source-Free Target Adaptation with Vision Transformers Leveraging Domain Representation Images [8.626222763097335]
Unsupervised Domain Adaptation (UDA) methods facilitate knowledge transfer from a labeled source domain to an unlabeled target domain.
This paper presents an innovative method to bolster ViT performance in source-free target adaptation, beginning with an evaluation of how key, query, and value elements affect ViT outcomes.
Domain Representation Images (DRIs) act as domain-specific markers, effortlessly merging with the training regimen.
arXiv Detail & Related papers (2023-11-21T13:26:13Z) - ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation [48.039156140237615]
A Continual Test-Time Adaptation task is proposed to adapt the pre-trained model to continually changing target domains.
We design a Visual Domain Adapter (ViDA) for CTTA, explicitly handling both domain-specific and domain-shared knowledge.
Our proposed method achieves state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-06-07T11:18:53Z) - Robust Representation Learning with Self-Distillation for Domain Generalization [2.0817769887373245]
We propose a novel domain generalization technique called Robust Representation Learning with Self-Distillation.
We observe an average accuracy improvement in the range of 1.2% to 2.3% over the state-of-the-art on three datasets.
arXiv Detail & Related papers (2023-02-14T07:39:37Z) - Unsupervised Domain Adaptation for Video Transformers in Action Recognition [76.31442702219461]
We propose a simple and novel UDA approach for video action recognition.
Our approach builds a robust source model that better generalises to target domain.
We report results on two video action recognition benchmarks for UDA.
arXiv Detail & Related papers (2022-07-26T12:17:39Z) - Activating More Pixels in Image Super-Resolution Transformer [53.87533738125943]
Transformer-based methods have shown impressive performance in low-level vision tasks, such as image super-resolution.
We propose a novel Hybrid Attention Transformer (HAT) to activate more input pixels for better reconstruction.
Our overall method significantly outperforms the state-of-the-art methods by more than 1dB.
arXiv Detail & Related papers (2022-05-09T17:36:58Z) - Safe Self-Refinement for Transformer-based Domain Adaptation [73.8480218879]
Unsupervised Domain Adaptation (UDA) aims to leverage a label-rich source domain to solve tasks on a related unlabeled target domain.
It is a challenging problem especially when a large domain gap lies between the source and target domains.
We propose a novel solution named SSRT (Safe Self-Refinement for Transformer-based domain adaptation), which brings improvement from two aspects.
arXiv Detail & Related papers (2022-04-16T00:15:46Z) - Domain Adaptation via Bidirectional Cross-Attention Transformer [4.643871270374136]
We propose a Bidirectional Cross-Attention Transformer (BCAT) for Domain Adaptation (DA); a minimal cross-attention sketch is given after this list.
In BCAT, the attention mechanism can extract implicit source and target mix-up feature representations to narrow the domain discrepancy.
Experiments demonstrate that the proposed BCAT model achieves superior performance on four benchmark datasets.
arXiv Detail & Related papers (2022-01-15T16:49:56Z) - TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation [54.61786380919243]
Unsupervised domain adaptation (UDA) aims to transfer the knowledge learnt from a labeled source domain to an unlabeled target domain.
Previous work is mainly built upon convolutional neural networks (CNNs) to learn domain-invariant representations.
Despite the recent surge in applying Vision Transformers (ViT) to vision tasks, the capability of ViT to adapt cross-domain knowledge remains unexplored in the literature.
arXiv Detail & Related papers (2021-08-12T22:37:43Z) - Transformer-Based Source-Free Domain Adaptation [134.67078085569017]
We study the task of source-free domain adaptation (SFDA), where the source data are not available during target adaptation.
We propose a generic and effective framework based on Transformer, named TransDA, for learning a generalized model for SFDA.
arXiv Detail & Related papers (2021-05-28T23:06:26Z)
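For the bidirectional cross-attention used by BCAT above, the following hedged sketch shows one way source and target tokens can attend to each other; the module name and the use of nn.MultiheadAttention are illustrative assumptions, not the BCAT authors' code.

```python
# Hedged sketch in the spirit of BCAT: source tokens attend over target tokens and
# vice versa, yielding mix-up-style representations that blur the domain boundary.
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.src_to_tgt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.tgt_to_src = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, src_tokens, tgt_tokens):      # each: (B, N, D)
        # Source queries attend over target keys/values, and vice versa.
        src_mix, _ = self.src_to_tgt(src_tokens, tgt_tokens, tgt_tokens)
        tgt_mix, _ = self.tgt_to_src(tgt_tokens, src_tokens, src_tokens)
        return src_mix, tgt_mix
```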
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.