Revisiting Deep Subspace Alignment for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2201.01806v1
- Date: Wed, 5 Jan 2022 20:16:38 GMT
- Title: Revisiting Deep Subspace Alignment for Unsupervised Domain Adaptation
- Authors: Kowshik Thopalli, Jayaraman J. Thiagarajan, Rushil Anirudh, and Pavan K. Turaga
- Abstract summary: Unsupervised domain adaptation (UDA) aims to transfer and adapt knowledge from a labeled source domain to an unlabeled target domain.
Traditionally, subspace-based methods form an important class of solutions to this problem.
This paper revisits the use of subspace alignment for UDA and proposes a novel adaptation algorithm that consistently leads to improved generalization.
- Score: 42.16718847243166
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Unsupervised domain adaptation (UDA) aims to transfer and adapt knowledge
from a labeled source domain to an unlabeled target domain. Traditionally,
subspace-based methods form an important class of solutions to this problem.
Despite their mathematical elegance and tractability, these methods are often
found to be ineffective at producing domain-invariant features with complex,
real-world datasets. Motivated by the recent advances in representation
learning with deep networks, this paper revisits the use of subspace alignment
for UDA and proposes a novel adaptation algorithm that consistently leads to
improved generalization. In contrast to existing adversarial training-based DA
methods, our approach isolates feature learning and distribution alignment
steps, and utilizes a primary-auxiliary optimization strategy to effectively
balance the objectives of domain invariance and model fidelity. While providing
a significant reduction in target data and computational requirements, our
subspace-based DA performs competitively and sometimes even outperforms
state-of-the-art approaches on several standard UDA benchmarks. Furthermore,
subspace alignment leads to intrinsically well-regularized models that
demonstrate strong generalization even in the challenging partial DA setting.
Finally, the design of our UDA framework inherently supports progressive
adaptation to new target domains at test-time, without requiring retraining of
the model from scratch. In summary, by combining powerful feature learners with an
effective optimization strategy, we establish subspace-based DA as a highly
effective approach for visual recognition.
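The deep variant proposed in the paper builds on classical (shallow) subspace alignment, in which the top principal directions of the source domain are rotated onto those of the target via a closed-form linear map. As a minimal sketch of that underlying idea, assuming the standard PCA-based formulation of Fernando et al. (the paper's own primary-auxiliary deep optimization is not reproduced here), the alignment can be written in a few lines of NumPy:

```python
import numpy as np

def subspace_alignment(source, target, d=10):
    """Classical subspace alignment: rotate the source PCA basis onto
    the target PCA basis, then project both domains into the aligned
    d-dimensional space."""
    # Top-d principal directions (as columns) of each centered domain.
    Xs = np.linalg.svd(source - source.mean(0), full_matrices=False)[2][:d].T
    Xt = np.linalg.svd(target - target.mean(0), full_matrices=False)[2][:d].T
    # Closed-form alignment matrix M = Xs^T Xt minimizes ||Xs M - Xt||_F.
    M = Xs.T @ Xt
    source_aligned = (source - source.mean(0)) @ (Xs @ M)
    target_projected = (target - target.mean(0)) @ Xt
    return source_aligned, target_projected

# Toy usage: 20-dim features from a shifted target domain.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 20))
tgt = rng.normal(size=(120, 20)) + 1.0
sa, tp = subspace_alignment(src, tgt, d=5)
```

A classifier trained on `source_aligned` can then be applied directly to `target_projected`; the paper's contribution is to perform this alignment on features from a deep network while jointly balancing domain invariance against model fidelity.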
Related papers
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda).
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- DSD-DA: Distillation-based Source Debiasing for Domain Adaptive Object Detection [37.01880023537362]
We propose a novel Distillation-based Source Debiasing (DSD) framework for Domain Adaptive Object Detection (DAOD).
This framework distills domain-agnostic knowledge from a pre-trained teacher model, improving the detector's performance on both domains.
We also present a Domain-aware Consistency Enhancing (DCE) strategy, in which this information is formulated into a new localization representation.
arXiv Detail & Related papers (2023-11-17T10:26:26Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying unknown classes during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- AVATAR: Adversarial self-superVised domain Adaptation network for TARget domain [11.764601181046496]
This paper presents an unsupervised domain adaptation (UDA) method for predicting unlabeled target domain data.
We propose the Adversarial self-superVised domain Adaptation network for the TARget domain (AVATAR) algorithm.
Our proposed model significantly outperforms state-of-the-art methods on three UDA benchmarks.
arXiv Detail & Related papers (2023-04-28T20:31:56Z)
- Self-training through Classifier Disagreement for Cross-Domain Opinion Target Extraction [62.41511766918932]
Opinion target extraction (OTE) or aspect extraction (AE) is a fundamental task in opinion mining.
Recent work focuses on cross-domain OTE, which is typically encountered in real-world scenarios.
We propose a new SSL approach that selects unlabelled target samples on which the outputs of a domain-specific teacher network and a student network disagree.
arXiv Detail & Related papers (2023-02-28T16:31:17Z)
- Increasing Model Generalizability for Unsupervised Domain Adaptation [12.013345715187285]
We show that increasing the interclass margins in the embedding space can help to develop a UDA algorithm with improved performance.
We demonstrate that using our approach leads to improved model generalizability on four standard benchmark UDA image classification datasets.
arXiv Detail & Related papers (2022-09-29T09:08:04Z)
- Domain Adaptation with Adversarial Training on Penultimate Activations [82.9977759320565]
Enhancing model prediction confidence on unlabeled target data is an important objective in Unsupervised Domain Adaptation (UDA).
We show that this strategy is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features.
arXiv Detail & Related papers (2022-08-26T19:50:46Z)
- Dynamic Domain Adaptation for Efficient Inference [12.713628738434881]
Domain adaptation (DA) enables knowledge transfer from a labeled source domain to an unlabeled target domain.
Most prior DA approaches leverage complicated and powerful deep neural networks to improve the adaptation capacity.
We propose a dynamic domain adaptation (DDA) framework, which can simultaneously achieve efficient target inference in low-resource scenarios.
arXiv Detail & Related papers (2021-03-26T08:53:16Z)
- Robustified Domain Adaptation [13.14535125302501]
Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain.
The inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain.
We propose a novel Class-consistent Unsupervised Domain Adaptation (CURDA) framework for training robust UDA models.
arXiv Detail & Related papers (2020-11-18T22:21:54Z)
- Class-Incremental Domain Adaptation [56.72064953133832]
We introduce a practical Domain Adaptation (DA) paradigm called Class-Incremental Domain Adaptation (CIDA).
Existing DA methods tackle domain-shift but are unsuitable for learning novel target-domain classes.
Our approach yields superior performance as compared to both DA and CI methods in the CIDA paradigm.
arXiv Detail & Related papers (2020-08-04T07:55:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.