TSInsight: A local-global attribution framework for interpretability in
time-series data
- URL: http://arxiv.org/abs/2004.02958v1
- Date: Mon, 6 Apr 2020 19:34:25 GMT
- Title: TSInsight: A local-global attribution framework for interpretability in
time-series data
- Authors: Shoaib Ahmed Siddiqui, Dominique Mercier, Andreas Dengel, Sheraz Ahmed
- Abstract summary: We propose attaching an auto-encoder to the classifier with a sparsity-inducing norm on its output, fine-tuning it based on the gradients from the classifier and a reconstruction penalty.
TSInsight learns to preserve features that are important for prediction by the classifier and suppresses those that are irrelevant.
In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations.
- Score: 5.174367472975529
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the rise in the employment of deep learning methods in safety-critical
scenarios, interpretability is more essential than ever before. Although many
different directions regarding interpretability have been explored for visual
modalities, time-series data has been neglected with only a handful of methods
tested due to their poor intelligibility. We approach the problem of
interpretability in a novel way by proposing TSInsight where we attach an
auto-encoder to the classifier with a sparsity-inducing norm on its output and
fine-tune it based on the gradients from the classifier and a reconstruction
penalty. TSInsight learns to preserve features that are important for
prediction by the classifier and to suppress those that are irrelevant, i.e.,
it serves as a feature attribution method that boosts interpretability. In contrast
to most other attribution frameworks, TSInsight is capable of generating both
instance-based and model-based explanations. We evaluated TSInsight along with
9 other commonly used attribution methods on 8 different time-series datasets
to validate its efficacy. Evaluation results show that TSInsight naturally
achieves output-space contraction and is therefore an effective tool for the
interpretability of deep time-series models.
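The training objective sketched in the abstract combines three terms: the classification loss on the auto-encoder's output (providing gradients from the classifier), a reconstruction penalty tying that output to the raw input, and a sparsity-inducing norm that suppresses irrelevant features. A minimal numpy sketch of such a combined loss is shown below; the function name, the L1 choice for the sparsity norm, and the weights `beta` and `lam` are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def tsinsight_loss(x, x_hat, logits, y, beta=1.0, lam=0.1):
    """Illustrative combined objective for fine-tuning an attached auto-encoder.

    x      : (batch, time) raw time-series input
    x_hat  : (batch, time) auto-encoder output (the sparse reconstruction)
    logits : (batch, classes) classifier output computed on x_hat
    y      : (batch,) integer class labels
    """
    # Classification term: softmax cross-entropy, the source of the
    # classifier gradients that fine-tune the auto-encoder.
    z = logits - logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(y)), y].mean()
    # Reconstruction penalty keeps x_hat close to the original input.
    recon = ((x_hat - x) ** 2).mean()
    # Sparsity-inducing norm on the auto-encoder output zeroes out
    # features that do not matter for the prediction.
    sparsity = np.abs(x_hat).mean()
    return ce + beta * recon + lam * sparsity
```

In this framing, the auto-encoder's fine-tuned output itself is the attribution map: features it preserves are the ones the classifier relies on, and features it drives to zero are deemed irrelevant.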
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z) - From Link Prediction to Forecasting: Information Loss in Batch-based Temporal Graph Learning [0.716879432974126]
We show that the suitability of common batch-oriented evaluation depends on the datasets' characteristics.
We reformulate dynamic link prediction as a link forecasting task that better accounts for temporal information present in the data.
arXiv Detail & Related papers (2024-06-07T12:45:12Z) - TimeDRL: Disentangled Representation Learning for Multivariate Time-Series [10.99576829280084]
TimeDRL is a generic time-series representation learning framework with disentangled dual-level embeddings.
TimeDRL consistently surpasses existing representation learning approaches, achieving an average improvement of 58.02% in MSE on forecasting and 1.48% in accuracy on classification.
arXiv Detail & Related papers (2023-12-07T08:56:44Z) - Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z) - ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model class, namely Denoising Diffusion Probabilistic Models (DDPMs), for chirographic data.
Our model, named "ChiroDiff", being non-autoregressive, learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z) - On the Impact of Temporal Concept Drift on Model Explanations [31.390397997989712]
Explanation faithfulness of model predictions in natural language processing is evaluated on held-out data from the same temporal distribution as the training data.
We examine the impact of temporal variation on model explanations extracted by eight feature attribution methods and three select-then-predict models across six text classification tasks.
arXiv Detail & Related papers (2022-10-17T15:53:09Z) - Interpretable Research Replication Prediction via Variational Contextual
Consistency Sentence Masking [14.50690911709558]
Research Replication Prediction (RRP) is the task of predicting whether a published research result can be replicated or not.
In this work, we propose the Variational Contextual Consistency Sentence Masking (VCCSM) method to automatically extract key sentences.
Results of our experiments on RRP along with European Convention on Human Rights (ECHR) datasets demonstrate that VCCSM is able to improve model interpretability for long-document classification tasks.
arXiv Detail & Related papers (2022-03-28T03:27:13Z) - Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z) - Interpretable Time-series Representation Learning With Multi-Level
Disentanglement [56.38489708031278]
Disentangle Time Series (DTS) is a novel disentanglement enhancement framework for sequential data.
DTS generates hierarchical semantic concepts as the interpretable and disentangled representation of time-series.
DTS achieves superior performance in downstream applications, with high interpretability of semantic concepts.
arXiv Detail & Related papers (2021-05-17T22:02:24Z) - Generative Counterfactuals for Neural Networks via Attribute-Informed
Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP)
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z) - Remaining Useful Life Estimation Under Uncertainty with Causal GraphNets [0.0]
A novel approach for the construction and training of time series models is presented.
The proposed method is appropriate for constructing predictive models for non-stationary time series.
arXiv Detail & Related papers (2020-11-23T21:28:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.