Explainable Deep Classification Models for Domain Generalization
- URL: http://arxiv.org/abs/2003.06498v1
- Date: Fri, 13 Mar 2020 22:22:15 GMT
- Title: Explainable Deep Classification Models for Domain Generalization
- Authors: Andrea Zunino, Sarah Adel Bargal, Riccardo Volpi, Mehrnoosh Sameki,
Jianming Zhang, Stan Sclaroff, Vittorio Murino, Kate Saenko
- Abstract summary: Explanations are defined as regions of visual evidence upon which a deep classification network makes a decision.
Our training strategy enforces a periodic saliency-based feedback to encourage the model to focus on the image regions that directly correspond to the ground-truth object.
- Score: 94.43131722655617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventionally, AI models are thought to trade off explainability for lower
accuracy. We develop a training strategy that not only leads to a more
explainable AI system for object classification, but as a consequence, suffers
no perceptible accuracy degradation. Explanations are defined as regions of
visual evidence upon which a deep classification network makes a decision. This
is represented in the form of a saliency map conveying how much each pixel
contributed to the network's decision. Our training strategy enforces a
periodic saliency-based feedback to encourage the model to focus on the image
regions that directly correspond to the ground-truth object. We quantify
explainability using an automated metric and using human judgement. We propose
explainability as a means of bridging the visual-semantic gap between different
domains, where model explanations are used to disentangle domain-specific
information from otherwise relevant features. We demonstrate
that this leads to improved generalization to new domains without hindering
performance on the original domain.
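A minimal sketch of the kind of saliency-based feedback term the abstract describes: penalize the fraction of saliency mass that falls outside the ground-truth object region, so the model is encouraged to base its decision on the object itself. The function name and the toy arrays are hypothetical, not from the paper; this is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def saliency_feedback_penalty(saliency, object_mask):
    """Fraction of total saliency attributed to background pixels.

    saliency:    (H, W) non-negative saliency map for the predicted class.
    object_mask: (H, W) binary mask, 1 inside the ground-truth object.
    """
    saliency = np.asarray(saliency, dtype=float)
    mask = np.asarray(object_mask, dtype=float)
    total = saliency.sum()
    if total == 0.0:
        return 0.0
    outside = (saliency * (1.0 - mask)).sum()  # saliency mass off the object
    return outside / total

# Toy example: all saliency inside the object -> zero penalty.
sal = np.array([[0.0, 1.0], [0.0, 1.0]])
msk = np.array([[0, 1], [0, 1]])
print(saliency_feedback_penalty(sal, msk))  # 0.0
```

In a training loop, a term like this would be added to the classification loss at periodic intervals, with the saliency map obtained from input gradients or a similar attribution method.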
Related papers
- Explainable Image Recognition via Enhanced Slot-attention Based Classifier [28.259040737540797]
We introduce ESCOUTER, a visually explainable classifier based on the modified slot attention mechanism.
ESCOUTER distinguishes itself by not only delivering high classification accuracy but also offering more transparent insights into the reasoning behind its decisions.
A novel loss function specifically for ESCOUTER is designed to fine-tune the model's behavior, enabling it to toggle between positive and negative explanations.
arXiv Detail & Related papers (2024-07-08T05:05:43Z)
- CNN-based explanation ensembling for dataset, representation and explanations evaluation [1.1060425537315088]
We explore the potential of ensembling explanations generated by deep classification models using convolutional model.
Through experimentation and analysis, we aim to investigate the implications of combining explanations to uncover more coherent and reliable patterns in the model's behavior.
arXiv Detail & Related papers (2024-04-16T08:39:29Z)
- Pulling Target to Source: A New Perspective on Domain Adaptive Semantic Segmentation [80.1412989006262]
Domain adaptive semantic segmentation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We propose T2S-DA, which we interpret as a form of pulling Target to Source for Domain Adaptation.
arXiv Detail & Related papers (2023-05-23T07:09:09Z)
- Towards Generalization on Real Domain for Single Image Dehazing via Meta-Learning [41.99615673136883]
Internal information learned from synthesized images is usually sub-optimal in real domains.
We present a domain generalization framework based on meta-learning to dig out representative internal properties of real hazy domains.
Our proposed method shows superior generalization ability compared to state-of-the-art competitors.
arXiv Detail & Related papers (2022-11-14T07:04:00Z)
- A Style and Semantic Memory Mechanism for Domain Generalization [108.98041306507372]
Intra-domain style invariance is of pivotal importance in improving the efficiency of domain generalization.
We propose a novel "jury" mechanism, which is particularly effective in learning useful semantic feature commonalities among domains.
Our proposed framework surpasses the state-of-the-art methods by clear margins.
arXiv Detail & Related papers (2021-12-14T16:23:24Z)
- Domain-Class Correlation Decomposition for Generalizable Person Re-Identification [34.813965300584776]
In person re-identification, the domain and class are correlated.
We show that domain adversarial learning will lose certain information about class due to this domain-class correlation.
Our model outperforms the state-of-the-art methods on the large-scale domain generalization Re-ID benchmark.
arXiv Detail & Related papers (2021-06-29T09:45:03Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Boundary Attributions Provide Normal (Vector) Explanations [27.20904776964045]
Boundary Attribution (BA) is introduced as a new explanation method.
BA involves computing normal vectors of the local decision boundaries for the target input.
We prove two theorems for ReLU networks: BA of randomized smoothed networks or robustly trained networks is much closer to non-boundary attribution methods than it is in standard networks.
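For intuition on boundary normals as attributions, here is a toy sketch for a plain linear classifier, where the local decision boundary between the two highest-scoring classes is a hyperplane whose normal can be read off the weights. The function name and arrays are hypothetical; the actual BA method operates on ReLU networks and is more involved.

```python
import numpy as np

def boundary_normal_attribution(W, b, x):
    """Attribution as the unit normal of the decision boundary between
    the two highest-scoring classes of a linear model (logits = W @ x + b).
    That boundary is the hyperplane (W[i] - W[j]) . x + (b[i] - b[j]) = 0."""
    logits = W @ x + b
    i, j = np.argsort(logits)[::-1][:2]  # top-2 class indices
    normal = W[i] - W[j]                 # normal of the separating hyperplane
    return normal / np.linalg.norm(normal)

W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([2.0, 1.0])
print(boundary_normal_attribution(W, b, x))
```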
arXiv Detail & Related papers (2021-03-20T22:36:39Z)
- Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
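The key insight above can be sketched as a consistency objective: pose estimates of the same query, each composed through a different reference image, should all coincide, so their disagreement is a self-supervised loss. Representing poses as 4x4 homogeneous matrices and measuring disagreement as squared deviation from the mean are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pose_via_reference(T_map_ref, T_ref_query):
    # Absolute pose of the query in the map frame, composed through one reference.
    return T_map_ref @ T_ref_query

def transform_consistency_loss(pose_estimates):
    """Mean squared disagreement between absolute-pose estimates obtained
    from different reference images; zero when they all coincide."""
    poses = np.stack(pose_estimates)
    mean_pose = poses.mean(axis=0)
    return float(((poses - mean_pose) ** 2).mean())

# Two references that agree perfectly -> zero loss.
T = np.eye(4)
print(transform_consistency_loss([T, T]))  # 0.0
```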
arXiv Detail & Related papers (2020-11-01T19:24:27Z)
- Domain-aware Visual Bias Eliminating for Generalized Zero-Shot Learning [150.42959029611657]
Domain-aware Visual Bias Eliminating (DVBE) network constructs two complementary visual representations.
For unseen images, we automatically search an optimal semantic-visual alignment architecture.
arXiv Detail & Related papers (2020-03-30T08:17:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.