On Fine-Tuned Deep Features for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2210.14083v1
- Date: Tue, 25 Oct 2022 15:07:04 GMT
- Title: On Fine-Tuned Deep Features for Unsupervised Domain Adaptation
- Authors: Qian Wang, Toby P. Breckon
- Abstract summary: We study the potential of combining fine-tuned features and feature transformation based UDA methods for improved domain adaptation performance.
Specifically, we integrate the prevalent progressive pseudo-labelling techniques into the fine-tuning framework to extract fine-tuned features.
Experiments with multiple deep models including ResNet-50/101 and DeiT-small/base are conducted to demonstrate that the combination of fine-tuned features and SPL achieves state-of-the-art performance.
- Score: 23.18781318003242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior feature transformation based approaches to Unsupervised Domain
Adaptation (UDA) employ the deep features extracted by pre-trained deep models
without fine-tuning them on the specific source or target domain data for a
particular domain adaptation task. In contrast, end-to-end learning based
approaches optimise the pre-trained backbones and the customised adaptation
modules simultaneously to learn domain-invariant features for UDA. In this
work, we explore the potential of combining fine-tuned features and feature
transformation based UDA methods for improved domain adaptation performance.
Specifically, we integrate the prevalent progressive pseudo-labelling
techniques into the fine-tuning framework to extract fine-tuned features which
are subsequently used in a state-of-the-art feature transformation based domain
adaptation method, SPL (Selective Pseudo-Labeling). Thorough experiments with
multiple deep models including ResNet-50/101 and DeiT-small/base are conducted
to demonstrate that the combination of fine-tuned features and SPL can achieve
state-of-the-art performance on several benchmark datasets.
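To make the two-stage recipe concrete, the following is a minimal PyTorch-style sketch, assuming toy-scale in-memory tensors, a linear keep-ratio schedule, and hypothetical helper names; the SPL stage itself operates on the extracted features and is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

def build_backbone(num_classes):
    # ImageNet pre-trained ResNet-50 with a freshly initialised classifier head.
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

@torch.no_grad()
def confident_pseudo_labels(net, x_tgt, keep_ratio):
    # Predict target labels and keep only the most confident fraction.
    net.eval()
    probs = F.softmax(net(x_tgt), dim=1)
    conf, pred = probs.max(dim=1)
    k = max(1, int(keep_ratio * conf.numel()))
    idx = conf.topk(k).indices
    return idx, pred[idx]

def finetune_progressive(net, x_src, y_src, x_tgt, rounds=5, steps=50, lr=1e-4):
    # Each round enlarges the trusted pseudo-labelled target subset
    # (progressive pseudo-labelling) and fine-tunes on source + subset.
    opt = torch.optim.SGD(net.parameters(), lr=lr, momentum=0.9)
    for r in range(1, rounds + 1):
        idx, y_pl = confident_pseudo_labels(net, x_tgt, keep_ratio=r / rounds)
        x = torch.cat([x_src, x_tgt[idx]])
        y = torch.cat([y_src, y_pl])
        net.train()
        for _ in range(steps):
            opt.zero_grad()
            F.cross_entropy(net(x), y).backward()
            opt.step()
    return net

def extract_features(net, x):
    # Drop the head and extract penultimate-layer features; these
    # fine-tuned features are the input to the subsequent SPL stage.
    encoder = nn.Sequential(*list(net.children())[:-1], nn.Flatten())
    encoder.eval()
    with torch.no_grad():
        return encoder(x)
```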
Related papers
- Enhancing Domain Adaptation through Prompt Gradient Alignment [16.618313165111793]
We develop a line of prompt learning based methods that learn both domain-invariant and domain-specific features.
We cast UDA as a multiple-objective optimization problem in which each objective is represented by a domain loss.
Our method consistently surpasses other prompt-based baselines by a large margin on different UDA benchmarks.
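As a rough illustration of the multi-objective view, the sketch below treats each domain loss as one objective and rewards agreement between their gradients; the cosine-similarity penalty and function names are illustrative assumptions, not the paper's exact alignment rule.

```python
import torch
import torch.nn.functional as F

def aligned_step(params, loss_src, loss_tgt, opt, align_weight=0.1):
    # params: list of shared (e.g. prompt) parameters.
    # Per-objective gradients w.r.t. the shared parameters.
    g_src = torch.autograd.grad(loss_src, params, retain_graph=True, create_graph=True)
    g_tgt = torch.autograd.grad(loss_tgt, params, retain_graph=True, create_graph=True)
    flat = lambda grads: torch.cat([g.reshape(-1) for g in grads])
    # Cosine similarity between the two gradient directions.
    cos = F.cosine_similarity(flat(g_src), flat(g_tgt), dim=0)
    # Optimise both domain losses while rewarding agreeing gradients.
    total = loss_src + loss_tgt - align_weight * cos
    opt.zero_grad()
    total.backward()
    opt.step()
```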
arXiv Detail & Related papers (2024-06-13T17:40:15Z) - Progressive Classifier and Feature Extractor Adaptation for Unsupervised Domain Adaptation on Point Clouds [36.596096617244655]
Unsupervised domain adaptation (UDA) is a critical challenge in the field of point cloud analysis.
We propose a novel framework that deeply couples the classifier and feature extractor for 3D UDA.
Our PCFEA conducts 3D UDA from two distinct perspectives: macro and micro levels.
arXiv Detail & Related papers (2023-11-27T07:33:15Z) - Domain-Aware Fine-Tuning: Enhancing Neural Network Adaptability [4.671615537573023]
Domain-Aware Fine-Tuning (DAFT) is a novel approach that combines batch normalization conversion with an integrated linear probing and fine-tuning procedure.
Our method significantly mitigates feature distortion and achieves improved model performance on both in-distribution and out-of-distribution datasets.
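The linear-probing-then-fine-tuning half of that recipe can be sketched generically as below; the `train_one_epoch` callback and epoch counts are placeholders, and DAFT's batch normalization conversion step is not reproduced.

```python
import torch.nn as nn

def probe_then_finetune(backbone: nn.Module, head: nn.Module,
                        train_one_epoch, probe_epochs=5, ft_epochs=20):
    # Phase 1: linear probing. Freeze the backbone so the randomly
    # initialised head cannot distort the pre-trained features.
    for p in backbone.parameters():
        p.requires_grad_(False)
    for _ in range(probe_epochs):
        train_one_epoch(backbone, head, lr=1e-3)
    # Phase 2: fine-tuning. Unfreeze everything and continue end to end
    # from the now well-initialised head, typically at a lower rate.
    for p in backbone.parameters():
        p.requires_grad_(True)
    for _ in range(ft_epochs):
        train_one_epoch(backbone, head, lr=1e-4)
```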
arXiv Detail & Related papers (2023-08-15T12:08:43Z) - Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
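A generic form of such a consistency regulariser, assuming weakly and strongly augmented views of the same unlabelled target batch (the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong):
    # Predictions on the weakly augmented view act as soft targets
    # for the strongly augmented view of the same target images.
    with torch.no_grad():
        p_weak = F.softmax(model(x_weak), dim=1)
    log_p_strong = F.log_softmax(model(x_strong), dim=1)
    # KL divergence pulls the two views' predictions together.
    return F.kl_div(log_p_strong, p_weak, reduction="batchmean")
```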
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying target-domain classes absent from the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z) - NormAUG: Normalization-guided Augmentation for Domain Generalization [60.159546669021346]
We propose a simple yet effective method called NormAUG (Normalization-guided Augmentation) for domain generalization.
Our method introduces diverse information at the feature level and improves the generalization of the main path.
In the test stage, we leverage an ensemble strategy to combine the predictions from the auxiliary path of our model, further boosting performance.
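One plausible reading of the mechanism, sketched under the assumption that the auxiliary paths differ only in their batch normalization branches (module name and routing are hypothetical):

```python
import random
import torch.nn as nn

class NormAugBlock(nn.Module):
    def __init__(self, channels, num_aux=3):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.main_bn = nn.BatchNorm2d(channels)
        self.aux_bns = nn.ModuleList(nn.BatchNorm2d(channels) for _ in range(num_aux))

    def forward(self, x, aux_index=None):
        # During training, randomly route through the main or one auxiliary
        # batch-norm branch; each branch accumulates different statistics,
        # injecting diversity at the feature level.
        x = self.conv(x)
        if self.training and aux_index is None:
            aux_index = random.randrange(len(self.aux_bns) + 1)  # 0 = main
        if aux_index:
            return self.aux_bns[aux_index - 1](x)
        return self.main_bn(x)
```

At test time, averaging the logits obtained under each `aux_index` setting would realise the ensemble strategy mentioned above.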
arXiv Detail & Related papers (2023-07-25T13:35:45Z) - IDA: Informed Domain Adaptive Semantic Segmentation [51.12107564372869]
We propose an Informed Domain Adaptation (IDA) model, a self-training framework that mixes the data based on class-level segmentation performance.
In our IDA model, the class-level performance is tracked by an expected confidence score (ECS) and we then use a dynamic schedule to determine the mixing ratio for data in different domains.
Our proposed method outperforms the state-of-the-art UDA-SS method by a margin of 1.1 mIoU in the adaptation of GTA-V to Cityscapes and of 0.9 mIoU in the adaptation of SYNTHIA to Cityscapes.
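The confidence-tracking idea can be sketched as follows; the EMA update and the inverse-confidence mixing rule are illustrative assumptions rather than the paper's exact schedule.

```python
import torch

class ExpectedConfidence:
    # Tracks a per-class expected confidence score (ECS) as an exponential
    # moving average of the model's mean confidence on each class.
    def __init__(self, num_classes, momentum=0.9):
        self.ecs = torch.ones(num_classes)
        self.momentum = momentum

    def update(self, probs):  # probs: (N, C) softmax outputs
        conf, pred = probs.max(dim=1)
        for c in pred.unique():
            mean_conf = conf[pred == c].mean()
            self.ecs[c] = self.momentum * self.ecs[c] + (1 - self.momentum) * mean_conf
        return self.ecs

    def mixing_ratio(self, c):
        # Poorly performing (low-ECS) classes are mixed into target
        # images more aggressively.
        return float(1.0 - self.ecs[c])
```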
arXiv Detail & Related papers (2023-03-05T18:16:34Z) - TDACNN: Target-domain-free Domain Adaptation Convolutional Neural
Network for Drift Compensation in Gas Sensors [6.451060076703026]
In this paper, a deep learning approach based on a target-domain-free domain adaptation convolutional neural network (TDACNN) is proposed.
The main concept is that CNNs extract not only the domain-specific features of samples but also the domain-invariant features underlying both the source and target domains.
Experiments on two drift datasets under different settings demonstrate the superiority of TDACNN over several state-of-the-art methods.
arXiv Detail & Related papers (2021-10-14T16:30:17Z) - T-SVDNet: Exploring High-Order Prototypical Correlations for
Multi-Source Domain Adaptation [41.356774580308986]
We propose a novel approach named T-SVDNet to address the task of Multi-source Domain Adaptation.
High-order correlations among multiple domains and categories are fully explored so as to better bridge the domain gap.
To avoid negative transfer brought by noisy source data, we propose a novel uncertainty-aware weighting strategy.
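One generic instantiation of uncertainty-aware weighting, assuming prediction entropy as the uncertainty measure (the paper's exact strategy may differ):

```python
import math
import torch

def uncertainty_weights(probs, eps=1e-8):
    # probs: (N, C) softmax outputs for source samples.
    # Normalised entropy in [0, 1]; high entropy = uncertain = likely noisy.
    entropy = -(probs * (probs + eps).log()).sum(dim=1)
    entropy = entropy / math.log(probs.size(1))
    weights = 1.0 - entropy            # confident samples keep weight near 1
    return weights / weights.sum().clamp_min(eps)

def weighted_source_loss(per_sample_losses, probs):
    # Re-weight per-sample losses so noisy source samples contribute less.
    return (uncertainty_weights(probs) * per_sample_losses).sum()
```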
arXiv Detail & Related papers (2021-07-30T06:33:05Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
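A minimal multi-sample contrastive loss over already-mined similar and dissimilar samples might look like the following; ILA-DA's affinity-based pair mining itself is not reproduced.

```python
import torch
import torch.nn.functional as F

def multi_sample_contrastive(anchor, positives, negatives, temperature=0.1):
    # anchor: (D,), positives: (P, D), negatives: (N, D) feature vectors.
    # Pull all similar samples toward the anchor; push dissimilar ones away.
    a = F.normalize(anchor, dim=0)
    pos = F.normalize(positives, dim=1) @ a / temperature  # (P,) similarities
    neg = F.normalize(negatives, dim=1) @ a / temperature  # (N,) similarities
    logits = torch.cat([pos, neg])
    # Cross-entropy of each positive against all candidates, averaged.
    log_prob = logits - torch.logsumexp(logits, dim=0)
    return -log_prob[: pos.numel()].mean()
```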
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
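A hedged sketch of what domain conditioned channel attention could look like: a squeeze-and-excitation style gate with a separate excitation branch per domain (layer sizes and routing are assumptions):

```python
import torch.nn as nn

class DomainConditionedChannelAttention(nn.Module):
    # Separate excitation branches let source and target data
    # excite different convolutional channels.
    def __init__(self, channels, reduction=16):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )
        self.source_gate = branch()
        self.target_gate = branch()

    def forward(self, x, domain):
        # x: (N, C, H, W); domain: "source" or "target".
        squeezed = x.mean(dim=(2, 3))  # global average pooling ("squeeze")
        gate = self.source_gate if domain == "source" else self.target_gate
        w = gate(squeezed)             # per-channel attention weights
        return x * w[:, :, None, None]
```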
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.