Explaining Cross-Domain Recognition with Interpretable Deep Classifier
- URL: http://arxiv.org/abs/2211.08249v1
- Date: Tue, 15 Nov 2022 15:58:56 GMT
- Title: Explaining Cross-Domain Recognition with Interpretable Deep Classifier
- Authors: Yiheng Zhang and Ting Yao and Zhaofan Qiu and Tao Mei
- Abstract summary: The Interpretable Deep Classifier (IDC) learns the nearest source samples of a target sample as evidence upon which the classifier makes its decision.
Our IDC leads to a more explainable model with almost no accuracy degradation and effectively calibrates classification for optimum reject options.
- Score: 100.63114424262234
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent advances in deep learning predominantly construct models
around opaque internal representations, making it difficult to explain the
rationale behind their decisions to human users. Such explainability is
especially essential for domain adaptation, which calls for models that adapt
across different domains. In this paper, we ask the question: how much does
each sample in the source domain contribute to the network's prediction on
samples from the target domain? To address this, we devise a novel
Interpretable Deep Classifier (IDC) that learns the nearest source samples of a
target sample as evidence upon which the classifier makes its decision.
Technically, IDC maintains a differentiable memory bank for each category, in
which each memory slot takes the form of a key-value pair. The key records the
features of discriminative source samples, and the value stores the
corresponding properties, e.g., representative scores of the features for
describing the category. IDC computes the loss between its output and the
labels of the source samples, and back-propagates it to adjust the
representative scores and update the memory banks. Extensive experiments on
Office-Home and VisDA-2017 datasets
demonstrate that our IDC leads to a more explainable model with almost no
accuracy degradation and effectively calibrates classification for optimum
reject options. More remarkably, when taking IDC as a prior interpreter,
capitalizing on only 0.1% of the source training data selected by IDC still
yields superior results to using the full training set on VisDA-2017 for
unsupervised domain adaptation.
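The paper does not include code here, but the mechanism above lends itself to a
short sketch. Below is a minimal, hypothetical PyTorch rendering of a
per-category key-value memory classifier in the spirit of IDC; all names
(MemoryBankClassifier, n_slots, and so on) are our assumptions, not the
authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryBankClassifier(nn.Module):
    """Hypothetical sketch of a per-category key-value memory classifier.

    Keys store features of discriminative source samples; values store
    learnable representative scores. Not the authors' implementation.
    """

    def __init__(self, n_classes: int, n_slots: int, feat_dim: int):
        super().__init__()
        # Keys: one bank of stored source-sample features per class.
        self.register_buffer("keys", torch.randn(n_classes, n_slots, feat_dim))
        # Values: a representative score per stored sample, adjusted by
        # back-propagating the classification loss.
        self.values = nn.Parameter(torch.zeros(n_classes, n_slots))

    @torch.no_grad()
    def write(self, cls: int, slot: int, feature: torch.Tensor) -> None:
        """Record a source sample's feature in a memory slot."""
        self.keys[cls, slot] = F.normalize(feature, dim=-1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        """Class logits for a batch of source or target features."""
        q = F.normalize(feats, dim=-1)                   # (B, D)
        k = F.normalize(self.keys, dim=-1)               # (C, S, D)
        sim = torch.einsum("bd,csd->bcs", q, k)          # (B, C, S)
        w = self.values.softmax(dim=-1)                  # (C, S)
        return (sim * w.unsqueeze(0)).sum(dim=-1)        # (B, C)

# Training on labeled source samples: the cross-entropy gradient flows
# into `values`, re-weighting which stored samples count as evidence.
clf = MemoryBankClassifier(n_classes=12, n_slots=64, feat_dim=256)
feats, labels = torch.randn(8, 256), torch.randint(0, 12, (8,))
loss = F.cross_entropy(clf(feats), labels)
loss.backward()
```

Because classification is a similarity-weighted vote over stored source
features, the highest-weighted slots for the predicted class directly name the
source samples that served as evidence, which also supports rejecting inputs
whose nearest evidence is weak.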
Related papers
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
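The memory bank-based MMD loss above is specific to DaC, but the plain
RBF-kernel MMD estimate it builds on is standard; a minimal sketch, with an
assumed helper name and bandwidth, could read:

```python
import torch

def rbf_mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between samples x (n, d) and y (m, d)
    under an RBF kernel. Textbook form for illustration, not DaC's variant."""
    def k(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        d2 = torch.cdist(a, b).pow(2)          # pairwise squared distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

# Usage: penalize the gap between source-like and target-specific features.
src_like, tgt_specific = torch.randn(32, 256), torch.randn(32, 256)
mmd_loss = rbf_mmd2(src_like, tgt_specific)
```

In DaC, the memory bank presumably supplies stored features for one side of
this comparison rather than only the current batch.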
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention, which tries to tackle domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Multiple-Source Domain Adaptation via Coordinated Domain Encoders and Paired Classifiers [1.52292571922932]
We present a novel model for text classification under domain shift.
It exploits the updated representations to dynamically integrate domain encoders.
It also employs a probabilistic model to infer the error rate in the target domain.
arXiv Detail & Related papers (2022-01-28T00:50:01Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
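ILA-DA's exact multi-sample contrastive criterion is defined in the paper; as
a rough, generic illustration, an InfoNCE-style loss over precomputed similar
and dissimilar samples could look like this (all names assumed):

```python
import torch
import torch.nn.functional as F

def multi_sample_contrastive(anchor: torch.Tensor,
                             positives: torch.Tensor,
                             negatives: torch.Tensor,
                             tau: float = 0.1) -> torch.Tensor:
    """Pull an anchor toward several similar samples and push it away from
    dissimilar ones. Generic InfoNCE-style form, not ILA-DA's exact loss."""
    a = F.normalize(anchor, dim=-1)        # (D,)
    pos = F.normalize(positives, dim=-1)   # (P, D) similar samples
    neg = F.normalize(negatives, dim=-1)   # (N, D) dissimilar samples
    logits = torch.cat([pos @ a, neg @ a]) / tau   # (P + N,)
    log_prob = logits.log_softmax(dim=-1)
    # Average over all positives treated as "correct" answers.
    return -log_prob[: pos.size(0)].mean()

# Usage: features would come from the adapting backbone.
loss = multi_sample_contrastive(torch.randn(256),
                                torch.randn(4, 256),
                                torch.randn(16, 256))
```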
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised Domain Adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and studies how to effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.