Visualizing Transferred Knowledge: An Interpretive Model of Unsupervised
Domain Adaptation
- URL: http://arxiv.org/abs/2303.02302v1
- Date: Sat, 4 Mar 2023 03:02:12 GMT
- Title: Visualizing Transferred Knowledge: An Interpretive Model of Unsupervised
Domain Adaptation
- Authors: Wenxiao Xiao, Zhengming Ding and Hongfu Liu
- Abstract summary: Unsupervised domain adaptation transfers knowledge learned from a labeled source domain to an unlabeled target domain.
We propose an interpretive model of unsupervised domain adaptation, as the first attempt to visually unveil the mystery of transferred knowledge.
Our method provides an intuitive explanation for the base model's predictions and unveils the transferred knowledge by matching image patches with the same semantics across the source and target domains.
- Score: 70.85686267987744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many research efforts have been committed to unsupervised domain adaptation
(DA) problems that transfer knowledge learned from a labeled source domain to
an unlabeled target domain. Various DA methods have recently achieved remarkable
results in terms of prediction ability, which implies that the aforementioned
knowledge transfer is effective. However, state-of-the-art methods rarely
probe deeper into the transfer mechanism, leaving the true essence of such
knowledge obscure. Recognizing its importance in the adaptation process, we
propose an interpretive model of unsupervised domain adaptation, as the first
attempt to visually unveil the mystery of transferred knowledge. Adapting the
existing concept of the prototype from visual image interpretation to the DA
task, our model similarly extracts shared information from the domain-invariant
representations as prototype vectors. Furthermore, we extend the current
prototype method with our novel prediction calibration and knowledge fidelity
preservation modules, to orientate the learned prototypes to the actual
transferred knowledge. By visualizing these prototypes, our method not only
provides an intuitive explanation for the base model's predictions but also
unveils the transferred knowledge by matching image patches with the same
semantics across both source and target domains. Comprehensive experiments and
in-depth explorations demonstrate the efficacy of our method in understanding
the transfer mechanism and its potential in downstream tasks, including model
diagnosis.
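As a rough illustration of how learned prototypes can be matched to image patches for visualization, the minimal PyTorch sketch below finds, for each prototype vector, the most similar spatial patch in a feature map; running it on source and target feature maps pairs up patches with shared semantics across domains. All names (feature_map, prototypes) are hypothetical placeholders under assumed tensor shapes, not the authors' implementation, and the prediction calibration and knowledge fidelity preservation modules are omitted.

import torch
import torch.nn.functional as F

def most_activated_patches(feature_map: torch.Tensor, prototypes: torch.Tensor):
    """For each prototype, locate the patch with the highest cosine similarity.

    feature_map: (B, C, H, W) domain-invariant features from a shared backbone.
    prototypes:  (P, C) learned prototype vectors.
    Returns (best_sim, best_idx), each of shape (B, P); best_idx indexes the
    flattened H*W grid, so (h, w) = divmod(idx, W).
    """
    B, C, H, W = feature_map.shape
    patches = feature_map.permute(0, 2, 3, 1).reshape(B, H * W, C)  # (B, HW, C)
    patches = F.normalize(patches, dim=-1)
    protos = F.normalize(prototypes, dim=-1)                        # (P, C)
    sims = patches @ protos.t()                                     # (B, HW, P)
    best_sim, best_idx = sims.max(dim=1)                            # (B, P)
    return best_sim, best_idx

Applying the same prototypes to a source batch and a target batch, then comparing which patches each prototype activates most strongly, is one simple way to surface cross-domain patch pairs with matching semantics.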
Related papers
- Adapting to Distribution Shift by Visual Domain Prompt Generation [34.19066857066073]
We adapt a model at test time using a few unlabeled samples to address distribution shifts.
We build a knowledge bank to learn the transferable knowledge from source domains.
The proposed method outperforms previous work on 5 large-scale benchmarks including WILDS and DomainNet.
arXiv Detail & Related papers (2024-05-05T02:44:04Z) - Learning Causal Domain-Invariant Temporal Dynamics for Few-Shot Action Recognition [12.522600594024112]
Few-shot action recognition aims at quickly adapting a pre-trained model to novel data.
Key challenges include how to identify and leverage the transferable knowledge learned by the pre-trained model.
We propose CDTD, or Causal Domain-Invariant Temporal Dynamics, for knowledge transfer.
arXiv Detail & Related papers (2024-02-20T04:09:58Z) - A Recent Survey of Heterogeneous Transfer Learning [15.830786437956144]
Heterogeneous transfer learning (HTL) has become a vital strategy in various tasks.
We offer an extensive review of over 60 HTL methods, covering both data-based and model-based approaches.
We explore applications in natural language processing, computer vision, multimodal learning, and biomedicine.
arXiv Detail & Related papers (2023-10-12T16:19:58Z) - Learning Transferable Conceptual Prototypes for Interpretable
Unsupervised Domain Adaptation [79.22678026708134]
In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL).
To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process.
Comprehensive experiments show that the proposed method can not only provide effective and intuitive explanations but also outperform previous state-of-the-art methods.
arXiv Detail & Related papers (2023-10-12T06:36:41Z) - Variational Transfer Learning using Cross-Domain Latent Modulation [1.9662978733004601]
We introduce a novel cross-domain latent modulation mechanism to a variational autoencoder framework so as to achieve effective transfer learning.
Deep representations of the source and target domains are first extracted by a unified inference model and aligned by employing gradient reversal (a generic sketch of a gradient reversal layer appears after this list).
The learned deep representations are then cross-modulated to the latent encoding of the alternative domain, where consistency constraints are also applied.
arXiv Detail & Related papers (2022-05-31T03:47:08Z) - Adaptive Trajectory Prediction via Transferable GNN [74.09424229172781]
We propose a novel Transferable Graph Neural Network (T-GNN) framework, which jointly conducts trajectory prediction as well as domain alignment in a unified framework.
Specifically, a domain-invariant GNN is proposed to explore the structural motion knowledge while the domain-specific knowledge is reduced.
An attention-based adaptive knowledge learning module is further proposed to explore fine-grained individual-level feature representation for knowledge transfer.
arXiv Detail & Related papers (2022-03-09T21:08:47Z) - TraND: Transferable Neighborhood Discovery for Unsupervised Cross-domain
Gait Recognition [77.77786072373942]
This paper proposes a Transferable Neighborhood Discovery (TraND) framework to bridge the domain gap for unsupervised cross-domain gait recognition.
We design an end-to-end trainable approach to automatically discover the confident neighborhoods of unlabeled samples in the latent space.
Our method achieves state-of-the-art results on two public datasets, i.e., CASIA-B and OU-LP.
arXiv Detail & Related papers (2021-02-09T03:07:07Z) - Source Data-absent Unsupervised Domain Adaptation through Hypothesis
Transfer and Labeling Transfer [137.36099660616975]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a related but different well-labeled source domain to a new unlabeled target domain.
Most existing UDA methods require access to the source data, and thus are not applicable when the data are confidential and not shareable due to privacy concerns.
This paper aims to tackle a realistic setting where only a classification model trained on the source domain is available, instead of access to the source data itself.
arXiv Detail & Related papers (2020-12-14T07:28:50Z) - Unsupervised Transfer Learning for Spatiotemporal Predictive Networks [90.67309545798224]
We study how to transfer knowledge from a zoo of models learned without supervision towards another network.
Our motivation is that models are expected to understand complex dynamics from different sources.
Our approach yields significant improvements on three benchmarks for spatiotemporal prediction, and benefits the target task even from less relevant models.
arXiv Detail & Related papers (2020-09-24T15:40:55Z)
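The gradient reversal alignment mentioned in the cross-domain latent modulation entry above is commonly implemented as a small autograd function. The sketch below is a generic, minimal stand-in for such a layer, not the code of any paper listed here.

import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversing the gradient trains the feature extractor to confuse a
        # domain classifier, which pushes source and target features to align.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

Placing grad_reverse between a shared encoder and a domain classifier lets a single backward pass update the classifier to separate domains while updating the encoder to mix them.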