Visual Explanations with Attributions and Counterfactuals on Time Series Classification
- URL: http://arxiv.org/abs/2307.08494v1
- Date: Fri, 14 Jul 2023 10:01:30 GMT
- Title: Visual Explanations with Attributions and Counterfactuals on Time Series Classification
- Authors: Udo Schlegel, Daniela Oelke, Daniel A. Keim, Mennatallah El-Assady
- Abstract summary: We propose a visual analytics workflow to support seamless transitions between global and local explanations.
To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset.
To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification.
- Score: 15.51135925107216
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rising necessity of explainable artificial intelligence (XAI), we
see an increase in task-dependent XAI methods on varying abstraction levels.
XAI techniques on a global level explain model behavior and on a local level
explain sample predictions. We propose a visual analytics workflow to support
seamless transitions between global and local explanations, focusing on
attributions and counterfactuals on time series classification. In particular,
we adapt local XAI techniques (attributions) that are developed for traditional
datasets (images, text) to analyze time series classification, a data type that
is typically less intelligible to humans. To generate a global overview, we
apply local attribution methods to the data, creating explanations for the
whole dataset. These explanations are projected onto two dimensions, depicting
model behavior trends, strategies, and decision boundaries. To further inspect
the model decision-making as well as potential data errors, a what-if analysis
facilitates hypothesis generation and verification on both the global and local
levels. We constantly collected and incorporated expert user feedback, as well
as insights based on their domain knowledge, resulting in a tailored analysis
workflow and system that tightly integrates time series transformations into
explanations. Lastly, we present three use cases, verifying that our technique
enables users to (1) explore data transformations and feature relevance,
(2) identify model behavior and decision boundaries, and (3) uncover the
reasons for misclassifications.
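To make the workflow concrete, below is a minimal sketch of its stages under simplifying assumptions: a small untrained 1D-CNN stands in for the classifier, gradient-times-input for the local attribution method, and PCA for the two-dimensional projection (the paper's system supports richer choices for each), with a tiny what-if perturbation at the end.

```python
# Minimal sketch of the workflow (not the authors' implementation):
# 1) compute a local attribution for every sample in the dataset,
# 2) project the attribution vectors to 2D for a global overview,
# 3) run a small what-if perturbation on a single sample.
# Assumptions: 1D-CNN classifier, gradient*input attributions, PCA.
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

torch.manual_seed(0)

# Toy univariate time series: 200 samples of length 64, two classes.
X = torch.randn(200, 1, 64)
y = torch.randint(0, 2, (200,))

# (In practice the classifier would be trained before this step.)
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(8),
    nn.Flatten(),
    nn.Linear(8 * 8, 2),
)

def gradient_x_input(model, x, target):
    """Local attribution: gradient of the target logit w.r.t. the
    input, multiplied elementwise by the input."""
    x = x.clone().requires_grad_(True)
    model(x)[torch.arange(len(x)), target].sum().backward()
    return (x.grad * x).detach()

# Apply the *local* method to the *whole* dataset -> global overview.
attributions = gradient_x_input(model, X, y)      # shape (200, 1, 64)
flat = attributions.reshape(len(X), -1).numpy()

# Project to 2D: nearby points share attribution patterns, hinting
# at model strategies and decision boundaries.
coords = PCA(n_components=2).fit_transform(flat)
print(coords.shape)                               # (200, 2)

# What-if analysis on one sample: zero out a window and re-predict.
sample = X[:1].clone()
sample[:, :, 20:30] = 0.0                         # hypothetical edit
with torch.no_grad():
    before = model(X[:1]).softmax(-1)
    after = model(sample).softmax(-1)
print(before, after)                              # did the decision change?
```

In the actual system the projection is interactive, and the what-if edits build on time series transformations integrated into the explanations rather than simple zeroing.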
Related papers
- GM-DF: Generalized Multi-Scenario Deepfake Detection [49.072106087564144]
Existing face forgery detection methods usually follow the paradigm of training models in a single domain.
In this paper, we elaborately investigate the generalization capacity of deepfake detection models when jointly trained on multiple face forgery detection datasets.
arXiv Detail & Related papers (2024-06-28T17:42:08Z)
- Prospector Heads: Generalized Feature Attribution for Large Models & Data [82.02696069543454]
We introduce prospector heads, an efficient and interpretable alternative to explanation-based attribution methods.
We demonstrate how prospector heads enable improved interpretation and discovery of class-specific patterns in input data.
arXiv Detail & Related papers (2024-02-18T23:01:28Z)
- Towards the Visualization of Aggregated Class Activation Maps to Analyse the Global Contribution of Class Features [0.47248250311484113]
Class Activation Maps (CAMs) visualize how much each feature of a data sample contributes to the classification.
We aggregate CAMs from multiple samples to show a global explanation of the classification for semantically structured data.
Our approach allows an analyst to detect important features of high-dimensional data and derive adjustments to the AI model based on our global explanation visualization.
arXiv Detail & Related papers (2023-07-29T11:13:11Z)
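As an illustration of the aggregation idea in the CAM paper above, a minimal sketch (assumed, not taken from the paper): per-sample CAMs are averaged per predicted class to yield one global, class-level map; the paper's actual aggregation and visualization are more elaborate.

```python
# Minimal sketch of CAM aggregation: elementwise mean of per-sample
# CAMs within each predicted class gives one global map per class.
import numpy as np

rng = np.random.default_rng(0)
n_samples, h, w = 500, 7, 7
cams = rng.random((n_samples, h, w))        # stand-in per-sample CAMs
preds = rng.integers(0, 3, size=n_samples)  # predicted class per sample

global_cams = {c: cams[preds == c].mean(axis=0) for c in np.unique(preds)}
print({c: m.shape for c, m in global_cams.items()})  # {0: (7, 7), ...}
```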
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- Decoupling Local and Global Representations of Time Series [38.73548222141307]
We propose a novel generative approach for learning representations for the global and local factors of variation in time series.
In experiments, we demonstrate successful recovery of the true local and global variability factors on simulated data.
We believe that the proposed way of defining representations is beneficial for data modelling and yields better insights into the complexity of real-world data.
arXiv Detail & Related papers (2022-02-04T17:46:04Z)
- Cross-Domain Generalization and Knowledge Transfer in Transformers Trained on Legal Data [0.0]
We analyze the ability of pre-trained language models to transfer knowledge among datasets annotated with different type systems.
Prediction of the rhetorical role a sentence plays in a case decision is an important and often studied task in AI & Law.
arXiv Detail & Related papers (2021-12-15T04:23:14Z)
- Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis [96.53859361560505]
Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards an aspect.
There always exists a severe domain shift between the pretraining and downstream ABSA datasets.
We introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline.
arXiv Detail & Related papers (2021-10-26T04:03:45Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm based on graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., a feedforward neural net) as a lower model that takes features as input and outputs predicted labels; 2) a graph neural network as an upper model that learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
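A minimal sketch of the two-module idea above, under simplifying assumptions (binary feature incidence, mean aggregation as the message-passing step); the paper's upper model is a trained GNN rather than this parameter-free averaging.

```python
# Minimal sketch of message passing over a feature-data bipartite
# graph: data nodes aggregate the embeddings of their active features,
# and an unseen feature's embedding is then extrapolated from the
# data nodes in which it appears.
import torch

torch.manual_seed(0)
n_data, n_feats, dim = 100, 20, 16
X = (torch.rand(n_data, n_feats) > 0.5).float()   # feature-data incidence

feat_emb = torch.randn(n_feats, dim)              # known-feature embeddings

# Round 1: each data node averages the embeddings of its active features.
deg_data = X.sum(1, keepdim=True).clamp(min=1)
data_emb = (X @ feat_emb) / deg_data              # (n_data, dim)

# A new feature arrives, observed in some subset of the data points.
new_col = (torch.rand(n_data) > 0.7).float()

# Round 2: its embedding is extrapolated by averaging those data nodes.
new_feat_emb = (new_col @ data_emb) / new_col.sum().clamp(min=1)
print(new_feat_emb.shape)                         # torch.Size([16])
```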
- An Information-theoretic Approach to Distribution Shifts [9.475039534437332]
Safely deploying machine learning models to the real world is often a challenging process.
Models trained with data obtained from a specific geographic location tend to fail when queried with data obtained elsewhere.
Similarly, neural networks that are fit to a subset of the population might carry some selection bias into their decision process.
arXiv Detail & Related papers (2021-06-07T16:44:21Z)
- Information-theoretic Evolution of Model Agnostic Global Explanations [10.921146104622972]
We present a novel model-agnostic approach that derives rules to globally explain the behavior of classification models trained on numerical and/or categorical data.
Our approach has been deployed in a leading digital marketing suite of products.
arXiv Detail & Related papers (2021-05-14T16:52:16Z)
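For contrast with the rule-derivation idea in the last entry, a minimal sketch of the most common route to global, rule-based explanations: fitting a shallow, human-readable surrogate tree to a black-box model's predictions. This is a stand-in technique; the paper's information-theoretic rule derivation is not reproduced here.

```python
# Minimal sketch of surrogate-based global rules (a stand-in technique,
# not the paper's information-theoretic method).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# A shallow tree fit to the black box's *predictions* yields readable rules.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(6)]))
```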