Software for Dataset-wide XAI: From Local Explanations to Global
Insights with Zennit, CoRelAy, and ViRelAy
- URL: http://arxiv.org/abs/2106.13200v1
- Date: Thu, 24 Jun 2021 17:27:22 GMT
- Authors: Christopher J. Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin
- Abstract summary: We introduce Zennit, CoRelAy, and ViRelAy to explore model reasoning using attribution approaches and beyond.
Zennit is a highly customizable and intuitive attribution framework implementing LRP and related approaches in PyTorch.
CoRelAy is a framework to easily and quickly construct quantitative analysis pipelines for dataset-wide analyses of explanations.
ViRelAy is a web-application to interactively explore data, attributions, and analysis results.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep Neural Networks (DNNs) are known to be strong predictors, but their
prediction strategies can rarely be understood. With recent advances in
Explainable Artificial Intelligence, approaches are available to explore the
reasoning behind those complex models' predictions. One class of approaches are
post-hoc attribution methods, among which Layer-wise Relevance Propagation
(LRP) shows high performance. However, the attempt at understanding a DNN's
reasoning often stops at the attributions obtained for individual samples in
input space, leaving the potential for deeper quantitative analyses untouched.
As a manual analysis without the right tools is often unnecessarily labor-intensive, we introduce three software packages targeted at scientists to
explore model reasoning using attribution approaches and beyond: (1) Zennit - a
highly customizable and intuitive attribution framework implementing LRP and
related approaches in PyTorch, (2) CoRelAy - a framework to easily and quickly
construct quantitative analysis pipelines for dataset-wide analyses of
explanations, and (3) ViRelAy - a web-application to interactively explore
data, attributions, and analysis results.
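The LRP-epsilon rule that underlies attribution frameworks like Zennit can be illustrated concretely. The following is a minimal, self-contained NumPy sketch — not Zennit's actual API — in which the two-layer ReLU network, its random weights, and the function name `lrp_epsilon` are stand-ins chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in network: 4 inputs -> 3 hidden (ReLU) -> 2 outputs, no biases
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(3, 2))

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the layer's inputs (LRP-epsilon)."""
    z = a @ W                                            # pre-activations
    s = R_out / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized ratio
    return a * (s @ W.T)                                 # input contributions

x = rng.normal(size=(4,))
a1 = np.maximum(x @ W1, 0.0)   # hidden ReLU activations
out = a1 @ W2                  # logits

# Initialize relevance with the winning logit, then propagate backwards
R2 = np.zeros_like(out)
k = int(np.argmax(out))
R2[k] = out[k]

R1 = lrp_epsilon(a1, W2, R2)   # relevance of hidden units
R0 = lrp_epsilon(x, W1, R1)    # relevance of input features (the attribution)

print(R0)                      # per-feature relevance scores
```

With biases set aside and a tiny eps, the total relevance is approximately conserved across layers (`R0.sum()` is close to `out[k]`) — the property that makes it meaningful to aggregate and compare such attributions across a whole dataset, as CoRelAy and ViRelAy do.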
Related papers
- Multimodal Behavioral Patterns Analysis with Eye-Tracking and LLM-Based Reasoning [12.054910727620154]
Eye-tracking data reveals valuable insights into users' cognitive states but is difficult to analyze due to its structured, non-linguistic nature. This paper presents a multimodal human-AI collaborative framework designed to enhance cognitive pattern extraction from eye-tracking signals.
arXiv Detail & Related papers (2025-07-24T09:49:53Z)
- Learning Efficient and Generalizable Graph Retriever for Knowledge-Graph Question Answering [75.12322966980003]
Large Language Models (LLMs) have shown strong inductive reasoning ability across various domains. Most existing RAG pipelines rely on unstructured text, limiting interpretability and structured reasoning. Recent studies have explored integrating knowledge graphs with LLMs for knowledge graph question answering. We propose RAPL, a novel framework for efficient and effective graph retrieval in KGQA.
arXiv Detail & Related papers (2025-06-11T12:03:52Z)
- B-XAIC Dataset: Benchmarking Explainable AI for Graph Neural Networks Using Chemical Data [4.945980414437814]
B-XAIC is a novel benchmark constructed from real-world molecular data and diverse tasks with known ground-truth rationales for assigned labels. This benchmark provides a valuable resource for gaining deeper insights into the faithfulness of XAI, facilitating the development of more reliable and interpretable models.
arXiv Detail & Related papers (2025-05-28T11:40:48Z)
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z)
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of the data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- SynthTree: Co-supervised Local Model Synthesis for Explainable Prediction [15.832975722301011]
We propose a novel method to enhance explainability with minimal accuracy loss.
We have developed novel methods for estimating nodes by leveraging AI techniques.
Our findings highlight the critical role that statistical methodologies can play in advancing explainable AI.
arXiv Detail & Related papers (2024-06-16T14:43:01Z)
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- DeepSeer: Interactive RNN Explanation and Debugging via State Abstraction [10.110976560799612]
Recurrent Neural Networks (RNNs) have been widely used in Natural Language Processing (NLP) tasks.
DeepSeer is an interactive system that provides both global and local explanations of RNN behavior.
arXiv Detail & Related papers (2023-03-02T21:08:17Z)
- On the Generalization of PINNs outside the training domain and the Hyperparameters influencing it [1.3927943269211593]
PINNs are Neural Network architectures trained to emulate solutions of differential equations without requiring solution data.
We perform an empirical analysis of the behavior of PINN predictions outside their training domain.
We assess whether the algorithmic setup of PINNs can influence their potential for generalization and showcase the respective effect on the prediction.
arXiv Detail & Related papers (2023-02-15T09:51:56Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Multi-Step Deductive Reasoning Over Natural Language: An Empirical Study on Out-of-Distribution Generalisation [13.887376297334258]
We introduce IMA-GloVe-GA, an iterative neural inference network for multi-step reasoning expressed in natural language.
In our model, reasoning is performed using an iterative memory neural network based on RNN with a gated attention mechanism.
arXiv Detail & Related papers (2022-07-28T10:44:46Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) aims to find a small subset of the input graph's features that guides the model's prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- DeepNNK: Explaining deep models and their generalization using polytope interpolation [42.16401154367232]
We take a step towards a better understanding of neural networks by introducing a local polytope interpolation method.
The proposed Deep Non-Negative Kernel regression (NNK) framework is non-parametric, theoretically simple and geometrically intuitive.
arXiv Detail & Related papers (2020-07-20T22:05:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.