On the Consistency and Robustness of Saliency Explanations for Time
Series Classification
- URL: http://arxiv.org/abs/2309.01457v1
- Date: Mon, 4 Sep 2023 09:08:22 GMT
- Title: On the Consistency and Robustness of Saliency Explanations for Time
Series Classification
- Authors: Chiara Balestra, Bin Li, Emmanuel Müller
- Abstract summary: Saliency maps have been applied to interpret time series windows as images.
This paper extensively analyzes the consistency and robustness of saliency maps for time series features and temporal attribution.
- Score: 4.062872727927056
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpretable machine learning and explainable artificial intelligence have
become essential in many applications. The trade-off between interpretability
and model performance remains a persistent obstacle to developing intrinsic and
model-agnostic interpretation methods. Although model explanation approaches have achieved
significant success in vision and natural language domains, explaining time
series remains challenging. The complex pattern in the feature domain, coupled
with the additional temporal dimension, hinders efficient interpretation.
Saliency maps have been applied to interpret time series windows as images.
However, they are not naturally designed for sequential data, thus suffering
various issues.
This paper extensively analyzes the consistency and robustness of saliency
maps for time series features and temporal attribution. Specifically, we
examine saliency explanations from both perturbation-based and gradient-based
explanation models in a time series classification task. Our experimental
results on five real-world datasets show that all of them lack consistent and
robust performance to some extent. By drawing attention to these flawed saliency
explanation models, we motivate the development of consistent and robust
explanations for time series classification.
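The analysis described above contrasts perturbation-based and gradient-based saliency and checks whether the resulting maps stay stable under small input changes. A minimal, hypothetical sketch of that idea, using a toy occlusion-based (perturbation) saliency map on a synthetic univariate series with a stand-in scoring function (not the paper's actual models or datasets):

```python
import numpy as np

def toy_classifier(x):
    """Toy 'class score': mean activation over a fixed window, a hypothetical
    stand-in for a trained time series classifier's class probability."""
    return x[10:20].mean()

def occlusion_saliency(score_fn, x, baseline=0.0):
    """Perturbation-based saliency: occlude each time step with a baseline
    value and record the resulting drop in the class score."""
    ref = score_fn(x)
    sal = np.zeros_like(x)
    for t in range(len(x)):
        xp = x.copy()
        xp[t] = baseline
        sal[t] = ref - score_fn(xp)
    return sal

rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.1, size=40)
x[10:20] += 1.0  # informative segment the toy classifier relies on

sal = occlusion_saliency(toy_classifier, x)

# Robustness probe: saliency on a slightly perturbed input should be similar
# to the original map; a low correlation would signal a non-robust explainer.
x_noisy = x + rng.normal(0.0, 0.01, size=40)
sal_noisy = occlusion_saliency(toy_classifier, x_noisy)
corr = np.corrcoef(sal, sal_noisy)[0, 1]
print(corr)
```

For this toy setup the saliency mass concentrates on the informative window and the correlation between the two maps is close to 1; the paper's point is that real explainers on real models often fail such consistency and robustness checks.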
Related papers
- TimeGraphs: Graph-based Temporal Reasoning [64.18083371645956]
TimeGraphs is a novel approach that characterizes dynamic interactions as a hierarchical temporal graph.
Our approach models the interactions using a compact graph-based representation, enabling adaptive reasoning across diverse time scales.
We evaluate TimeGraphs on multiple datasets with complex, dynamic agent interactions, including a football simulator, the Resistance game, and the MOMA human activity dataset.
arXiv Detail & Related papers (2024-01-06T06:26:49Z) - Encoding Time-Series Explanations through Self-Supervised Model Behavior
Consistency [26.99599329431296]
We present TimeX, a time series consistency model for training explainers.
TimeX trains an interpretable surrogate to mimic the behavior of a pretrained time series model.
We evaluate TimeX on eight synthetic and real-world datasets and compare its performance against state-of-the-art interpretability methods.
arXiv Detail & Related papers (2023-06-03T13:25:26Z) - ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model class, namely "Denoising Diffusion Probabilistic Models" (DDPMs), for chirographic data.
Our model, named "ChiroDiff", being non-autoregressive, learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z) - TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [64.63645677568384]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
arXiv Detail & Related papers (2023-01-05T22:10:16Z) - Ripple: Concept-Based Interpretation for Raw Time Series Models in
Education [5.374524134699487]
Time series is the most prevalent form of input data for educational prediction tasks.
We propose an approach that utilizes irregular multivariate time series modeling with graph neural networks to achieve comparable or better accuracy.
We analyze these advances in the education domain, addressing the task of early student performance prediction.
arXiv Detail & Related papers (2022-12-02T12:26:00Z) - DyTed: Disentangled Representation Learning for Discrete-time Dynamic
Graph [59.583555454424]
We propose a novel disenTangled representation learning framework for discrete-time Dynamic graphs, namely DyTed.
We specially design a temporal-clips contrastive learning task together with a structure contrastive learning to effectively identify the time-invariant and time-varying representations respectively.
arXiv Detail & Related papers (2022-10-19T14:34:12Z) - Learning to Reconstruct Missing Data from Spatiotemporal Graphs with
Sparse Observations [11.486068333583216]
This paper tackles the problem of learning effective models to reconstruct missing data points.
We propose a class of attention-based architectures, that given a set of highly sparse observations, learn a representation for points in time and space.
Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies.
arXiv Detail & Related papers (2022-05-26T16:40:48Z) - Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs [65.18780403244178]
We propose a continuous model to forecast Multivariate Time series with dynamic Graph neural Ordinary Differential Equations (MTGODE).
Specifically, we first abstract multivariate time series into dynamic graphs with time-evolving node features and unknown graph structures.
Then, we design and solve a neural ODE to complement missing graph topologies and unify both spatial and temporal message passing.
arXiv Detail & Related papers (2022-02-17T02:17:31Z) - Interpretable Time-series Representation Learning With Multi-Level
Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z) - TimeSHAP: Explaining Recurrent Models through Sequence Perturbations [3.1498833540989413]
Recurrent neural networks are a standard building block in numerous machine learning domains.
The complex decision-making in these models is seen as a black-box, creating a tension between accuracy and interpretability.
In this work, we contribute to filling these gaps by presenting TimeSHAP, a model-agnostic recurrent explainer.
arXiv Detail & Related papers (2020-11-30T19:48:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.