P2ExNet: Patch-based Prototype Explanation Network
- URL: http://arxiv.org/abs/2005.02006v2
- Date: Thu, 19 Nov 2020 13:02:36 GMT
- Title: P2ExNet: Patch-based Prototype Explanation Network
- Authors: Dominique Mercier, Andreas Dengel, Sheraz Ahmed
- Abstract summary: We propose a novel interpretable network scheme, designed to inherently use an explainable reasoning process inspired by human cognition.
P2ExNet reaches performance comparable to its counterparts while inherently providing understandable and traceable decisions.
- Score: 5.557646286040063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning methods have shown great success in several domains, as they
process large amounts of data efficiently and solve complex classification,
forecasting, segmentation, and other tasks. However, they come with the inherent
drawback of inexplicability, which limits their applicability and
trustworthiness. Although there is work addressing this perspective, most
of the existing approaches are limited to the image modality due to its
intuitive and prominent concepts. The concepts in the time-series domain, by
contrast, are more complex and less comprehensible, yet both these concepts and
an explanation of the network decision are pivotal in critical domains such as
medicine, finance, or industry. Addressing the need for an explainable approach,
we propose a novel interpretable network scheme, designed to inherently use an
explainable reasoning process inspired by human cognition, without the need for
additional post-hoc explainability methods. To this end, class-specific patches
are used, as they cover local concepts relevant to the classification and reveal
similarities with samples of the same class. In addition, we introduce a novel
loss addressing both interpretability and accuracy that constrains P2ExNet to
provide viable explanations of the data, including relevant patches, their
positions, class similarities, and comparison methods, without compromising
accuracy. Analysis of the results on eight publicly available time-series
datasets reveals that P2ExNet reaches performance comparable to its counterparts
while inherently providing understandable and traceable decisions.
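To make the patch-prototype idea above concrete, the following PyTorch sketch encodes a series into patch embeddings, scores each patch against learned class-specific prototypes, and trains with cross-entropy plus ProtoPNet-style cluster and separation terms. This is a minimal sketch under our own assumptions (all names, layer sizes, and loss weights are illustrative); the paper's actual loss additionally covers patch positions and comparison methods.

```python
# Minimal sketch of a patch-based prototype classifier (illustrative,
# not the authors' implementation; names and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchPrototypeNet(nn.Module):
    def __init__(self, in_channels=1, n_classes=2, protos_per_class=4, dim=32):
        super().__init__()
        # 1D conv encoder turning a series into a sequence of patch embeddings
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, dim, kernel_size=8, padding=4), nn.ReLU(),
            nn.Conv1d(dim, dim, kernel_size=8, padding=4), nn.ReLU(),
        )
        n_protos = n_classes * protos_per_class
        # learned class-specific patch prototypes
        self.prototypes = nn.Parameter(torch.randn(n_protos, dim))
        self.register_buffer(
            "proto_class", torch.arange(n_classes).repeat_interleave(protos_per_class)
        )
        self.classifier = nn.Linear(n_protos, n_classes, bias=False)

    def forward(self, x):                       # x: (batch, channels, length)
        z = self.encoder(x).transpose(1, 2)     # (batch, patches, dim)
        # squared distance of every patch to every prototype
        d = ((z.unsqueeze(2) - self.prototypes) ** 2).sum(-1)  # (batch, patches, n_protos)
        dmin = d.min(dim=1).values              # best-matching patch per prototype
        sim = torch.log((dmin + 1.0) / (dmin + 1e-4))  # distance -> similarity
        return self.classifier(sim), dmin

def combined_loss(logits, dmin, y, proto_class, lam_clu=0.1, lam_sep=0.1):
    """Cross-entropy plus ProtoPNet-style cluster/separation terms."""
    ce = F.cross_entropy(logits, y)
    own = proto_class.to(y.device).unsqueeze(0) == y.unsqueeze(1)  # same-class mask
    clu = dmin[own].view(len(y), -1).min(dim=1).values.mean()      # pull own class close
    sep = dmin[~own].view(len(y), -1).min(dim=1).values.mean()     # push others away
    return ce + lam_clu * clu - lam_sep * sep

if __name__ == "__main__":
    net = PatchPrototypeNet()
    x, y = torch.randn(8, 1, 128), torch.randint(0, 2, (8,))
    logits, dmin = net(x)
    loss = combined_loss(logits, dmin, y, net.proto_class)
```

In a complete model, the patch index that minimises each prototype distance would also supply the position information mentioned in the abstract.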
Related papers
- Improving Network Interpretability via Explanation Consistency Evaluation [56.14036428778861]
We propose a framework that acquires more explainable activation heatmaps and simultaneously increases model performance.
Specifically, our framework introduces a new metric, i.e., explanation consistency, to adaptively reweight the training samples during model learning.
Our framework then promotes model learning by paying closer attention to training samples whose explanations differ strongly (one possible form of this reweighting is sketched below).
arXiv Detail & Related papers (2024-08-08T17:20:08Z)
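One plausible reading of this scheme, sketched in PyTorch below: explanation consistency is approximated by the similarity between saliency maps of a sample and a perturbed copy, and samples with inconsistent explanations receive larger loss weights. The saliency measure, perturbation, and weighting are our assumptions, not the paper's exact formulation.

```python
# Hypothetical explanation-consistency reweighting (assumed formulation).
import torch
import torch.nn.functional as F

def saliency(model, x, y):
    x = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    return grad.abs()

def reweighted_loss(model, x, y, noise=0.05):
    s_clean = saliency(model, x, y)                               # explanation of the input
    s_pert = saliency(model, x + noise * torch.randn_like(x), y)  # explanation of a perturbed copy
    consistency = F.cosine_similarity(s_clean.flatten(1), s_pert.flatten(1), dim=1)
    weights = (1.0 - consistency).detach()  # high explanation difference -> high weight
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    return (weights * per_sample).mean()
```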
- InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts [31.738009841932374]
Interpretability for neural networks is a trade-off between three key requirements.
We present InterpretCC, a family of interpretable-by-design neural networks that guarantee human-centric interpretability.
arXiv Detail & Related papers (2024-02-05T11:55:50Z)
- DARE: Towards Robust Text Explanations in Biomedical and Healthcare Applications [54.93807822347193]
We show how to adapt attribution robustness estimation methods to a given domain, so as to take into account domain-specific plausibility.
Next, we provide two methods, adversarial training and FAR training, to mitigate the brittleness characterized by DARE (a generic adversarial-training step is sketched below).
Finally, we empirically validate our methods with extensive experiments on three established biomedical benchmarks.
arXiv Detail & Related papers (2023-07-05T08:11:40Z)
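Of the two mitigations, adversarial training is the standard one; a minimal FGSM-style step is sketched below. FAR training is specific to the paper and not reproduced here, and the step size and loss are our assumptions.

```python
# Generic FGSM adversarial-training step (standard technique, not DARE's exact recipe).
import torch
import torch.nn.functional as F

def adversarial_training_loss(model, x, y, eps=0.01):
    x = x.clone().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y), x)
    x_adv = (x + eps * grad.sign()).detach()  # one-step perturbation toward higher loss
    return F.cross_entropy(model(x_adv), y)   # train on the perturbed input
```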
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Evaluation and Improvement of Interpretability for Self-Explainable Part-Prototype Networks [43.821442711496154]
Part-prototype networks have attracted broad research interest for their intrinsic interpretability and comparable accuracy to non-interpretable counterparts.
We make the first attempt to quantitatively and objectively evaluate the interpretability of part-prototype networks.
We propose an elaborated part-prototype network with a shallow-deep feature alignment module and a score aggregation module to improve the interpretability of prototypes.
arXiv Detail & Related papers (2022-12-12T14:59:11Z)
- On Generalizing Beyond Domains in Cross-Domain Continual Learning [91.56748415975683]
Deep neural networks often suffer from catastrophic forgetting of previously learned knowledge after learning a new task.
Our proposed approach learns new tasks under domain shift, with accuracy boosts of up to 10% on challenging datasets such as DomainNet and OfficeHome.
arXiv Detail & Related papers (2022-03-08T09:57:48Z)
- Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods [4.9449660544238085]
The paper focuses on time series analysis and benchmarks several state-of-the-art attribution methods.
The presented experiments involve gradient-based and perturbation-based attribution methods (both families are sketched below).
The findings accentuate that the best-suited attribution method depends strongly on the desired use case.
arXiv Detail & Related papers (2022-02-08T10:06:13Z)
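To illustrate the two families benchmarked in that paper, the sketch below contrasts a plain gradient attribution with an occlusion-based (perturbation) attribution for a single time series; the function names, window size, and zero baseline are our choices.

```python
# Illustrative gradient- vs. perturbation-based attribution for a time series.
import torch

def gradient_attribution(model, x, target):
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()  # gradient of the target class score
    return x.grad.abs().squeeze(0)  # |d score / d input| per time step

def occlusion_attribution(model, x, target, window=8):
    base = model(x)[0, target].item()
    scores = torch.zeros(x.shape[-1])
    for t in range(0, x.shape[-1], window):
        x_occ = x.clone()
        x_occ[..., t:t + window] = 0.0  # zero out one window
        scores[t:t + window] = base - model(x_occ)[0, target].item()  # score drop
    return scores
```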
- Coarse to Fine: Domain Adaptive Crowd Counting via Adversarial Scoring Network [58.05473757538834]
This paper proposes a novel adversarial scoring network (ASNet) to bridge the gap across domains from coarse to fine granularity.
Three sets of migration experiments show that the proposed methods achieve state-of-the-art counting performance.
arXiv Detail & Related papers (2021-07-27T14:47:24Z)
- Temporal graph-based approach for behavioural entity classification [0.0]
In this study, a two-phase approach for exploiting the potential of graph structures in the cybersecurity domain is presented.
The main idea is to convert a network classification problem into a graph-based behavioural one.
We extract these graph structures that can represent the evolution of both normal and attack entities.
Three clustering techniques are applied to the normal entities in order to aggregate similar behaviours, mitigate the imbalance problem, and reduce noisy data (an illustrative setup is sketched below).
arXiv Detail & Related papers (2021-05-11T06:13:58Z)
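The summary does not name the three clustering techniques; the scikit-learn sketch below shows one plausible trio applied to behavioural feature vectors, with placeholder data and parameters.

```python
# Illustrative: three clustering methods over normal-entity behaviour vectors
# (the specific algorithms and parameters are our assumptions).
import numpy as np
from sklearn.cluster import DBSCAN, KMeans, AgglomerativeClustering

features = np.random.rand(500, 16)  # stand-in behavioural feature matrix
labels = {
    "kmeans": KMeans(n_clusters=8, n_init=10).fit_predict(features),
    "dbscan": DBSCAN(eps=0.5, min_samples=5).fit_predict(features),
    "agglomerative": AgglomerativeClustering(n_clusters=8).fit_predict(features),
}
```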
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks, which go beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy (a minimal distance-based outlier score is sketched below).
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
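A minimal form of the prototype-distance outlier score implied here: a sample whose embedding is far from every prototype is flagged as an outlier. The embedding space and any threshold are left abstract; the names are ours.

```python
# Illustrative prototype-distance outlier score (assumed formulation).
import torch

def outlier_score(embeddings, prototypes):
    """embeddings: (batch, dim); prototypes: (n_protos, dim)."""
    d = torch.cdist(embeddings, prototypes)  # pairwise Euclidean distances
    return d.min(dim=1).values               # far from all prototypes => likely outlier
```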
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.