Learning Transferable Conceptual Prototypes for Interpretable
Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2310.08071v1
- Date: Thu, 12 Oct 2023 06:36:41 GMT
- Title: Learning Transferable Conceptual Prototypes for Interpretable
Unsupervised Domain Adaptation
- Authors: Junyu Gao, Xinhong Ma, Changsheng Xu
- Abstract summary: In this paper, we propose an inherently interpretable method, named Transferable Conceptual Prototype Learning (TCPL).
To achieve this goal, we design a hierarchically prototypical module that transfers categorical basic concepts from the source domain to the target domain and learns domain-shared prototypes for explaining the underlying reasoning process.
Comprehensive experiments show that the proposed method can not only provide effective and intuitive explanations but also outperform previous state-of-the-art methods.
- Score: 79.22678026708134
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the great progress of unsupervised domain adaptation (UDA) with
deep neural networks, current UDA models are opaque and cannot provide
convincing explanations, limiting their application in scenarios that
require safe and controllable model decisions. At present, a surge of work
focuses on designing deep interpretable methods with adequate data annotations,
and only a few methods consider the distributional shift problem. Most existing
interpretable UDA methods are post-hoc ones, which cannot facilitate the model
learning process for performance enhancement. In this paper, we propose an
inherently interpretable method, named Transferable Conceptual Prototype
Learning (TCPL), which could simultaneously interpret and improve the processes
of knowledge transfer and decision-making in UDA. To achieve this goal, we
design a hierarchically prototypical module that transfers categorical basic
concepts from the source domain to the target domain and learns domain-shared
prototypes for explaining the underlying reasoning process. With the learned
transferable prototypes, a self-predictive consistent pseudo-label strategy
that fuses confidence, predictions, and prototype information is designed for
selecting suitable target samples for pseudo annotations and gradually
narrowing down the domain gap. Comprehensive experiments show that the proposed
method can not only provide effective and intuitive explanations but also
outperform previous state-of-the-art methods.
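The pseudo-label strategy described above can be sketched in a few lines. This is a hypothetical illustration of the general idea, not the paper's implementation: a target sample receives a pseudo label only when the classifier's prediction agrees with the nearest domain-shared prototype and the classifier's confidence clears a threshold. All function names and the threshold value are assumptions made for illustration.

```python
import numpy as np

def select_pseudo_labels(probs, feats, prototypes, conf_thresh=0.9):
    """Illustrative sketch of a consistency-based pseudo-label selection rule.

    probs: (N, C) classifier softmax outputs for target samples.
    feats: (N, D) L2-normalized target features.
    prototypes: (C, D) L2-normalized domain-shared class prototypes.
    Returns indices of accepted samples and their pseudo labels.
    """
    cls_pred = probs.argmax(axis=1)                      # classifier vote
    cls_conf = probs.max(axis=1)                         # classifier confidence
    proto_pred = (feats @ prototypes.T).argmax(axis=1)   # nearest-prototype vote
    # Accept only samples where both votes agree and confidence is high.
    keep = (cls_pred == proto_pred) & (cls_conf >= conf_thresh)
    return np.nonzero(keep)[0], cls_pred[keep]
```

Accepted samples would then be treated as labeled target data, gradually narrowing the domain gap as training proceeds.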
Related papers
- Enhancing Domain Adaptation through Prompt Gradient Alignment [16.618313165111793]
We develop a line of work based on prompt learning to learn both domain-invariant and domain-specific features.
We cast UDA as a multi-objective optimization problem in which each objective is represented by a domain loss.
Our method consistently surpasses other prompt-based baselines by a large margin on different UDA benchmarks.
arXiv Detail & Related papers (2024-06-13T17:40:15Z) - Understanding the (Extra-)Ordinary: Validating Deep Model Decisions with Prototypical Concept-based Explanations [13.60538902487872]
We present a novel post-hoc concept-based XAI framework that conveys not only instance-wise (local) but also class-wise (global) decision-making strategies via prototypes.
We demonstrate the effectiveness of our approach in identifying out-of-distribution samples, spurious model behavior, and data quality issues across three datasets.
arXiv Detail & Related papers (2023-11-28T10:53:26Z) - Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying target classes absent from the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z) - Towards Source-free Domain Adaptive Semantic Segmentation via Importance-aware and Prototype-contrast Learning [26.544837987747766]
We propose an end-to-end source-free domain adaptation semantic segmentation method via Importance-Aware and Prototype-Contrast learning.
The proposed IAPC framework effectively extracts domain-invariant knowledge from the well-trained source model and learns domain-specific knowledge from the unlabeled target domain.
arXiv Detail & Related papers (2023-06-02T15:09:19Z) - Visualizing Transferred Knowledge: An Interpretive Model of Unsupervised
Domain Adaptation [70.85686267987744]
Unsupervised domain adaptation transfers knowledge from a labeled source domain to an unlabeled target domain.
We propose an interpretive model of unsupervised domain adaptation, as the first attempt to visually unveil the mystery of transferred knowledge.
Our method provides an intuitive explanation for the base model's predictions and unveils transferred knowledge by matching image patches with the same semantics across the source and target domains.
arXiv Detail & Related papers (2023-03-04T03:02:12Z) - ProtoVAE: A Trustworthy Self-Explainable Prototypical Variational Model [18.537838366377915]
ProtoVAE is a variational autoencoder-based framework that learns class-specific prototypes in an end-to-end manner.
It enforces trustworthiness and diversity by regularizing the representation space and introducing an orthonormality constraint.
arXiv Detail & Related papers (2022-10-15T00:42:13Z) - A Curriculum-style Self-training Approach for Source-Free Semantic Segmentation [91.13472029666312]
We propose a curriculum-style self-training approach for source-free domain adaptive semantic segmentation.
Our method yields state-of-the-art performance on source-free semantic segmentation tasks in both synthetic-to-real and adverse-condition settings.
arXiv Detail & Related papers (2021-06-22T10:21:39Z) - Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
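The prototype-based contrastive objective behind PCL can be sketched as a softmax over similarities to cluster prototypes. The snippet below is an illustrative simplification under stated assumptions (prototypes from an external clustering step, a single fixed temperature, hypothetical names), not PCL's exact formulation:

```python
import numpy as np

def proto_nce_loss(feats, prototypes, assignments, temperature=0.1):
    """Illustrative prototype-contrastive loss.

    feats: (N, D) L2-normalized embeddings.
    prototypes: (K, D) L2-normalized cluster centroids (e.g. from k-means).
    assignments: (N,) cluster index assigned to each embedding.
    """
    logits = feats @ prototypes.T / temperature       # (N, K) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    # Log-softmax over prototypes; pull each embedding toward its prototype.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(feats)), assignments].mean()
```

Minimizing this loss pulls each embedding toward its assigned prototype and away from the others, which is how semantic cluster structure ends up encoded in the embedding space.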
arXiv Detail & Related papers (2020-05-11T09:53:36Z) - Universal Source-Free Domain Adaptation [57.37520645827318]
We propose a novel two-stage learning process for domain adaptation.
In the Procurement stage, we aim to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category gap and domain shift.
In the Deployment stage, the goal is to design a unified adaptation algorithm capable of operating across a wide range of category gaps.
arXiv Detail & Related papers (2020-04-09T07:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.