Explainable Multivariate Time Series Classification: A Deep Neural
Network Which Learns To Attend To Important Variables As Well As Informative
Time Intervals
- URL: http://arxiv.org/abs/2011.11631v1
- Date: Mon, 23 Nov 2020 19:16:46 GMT
- Title: Explainable Multivariate Time Series Classification: A Deep Neural
Network Which Learns To Attend To Important Variables As Well As Informative
Time Intervals
- Authors: Tsung-Yu Hsieh, Suhang Wang, Yiwei Sun, Vasant Honavar
- Abstract summary: Time series data is prevalent in a wide variety of real-world applications.
A key criterion for understanding such predictive models is elucidating and quantifying the contribution of time-varying input variables to the classification.
We introduce a novel, modular, convolution-based feature extraction and attention mechanism that simultaneously identifies the variables as well as the time intervals that determine the classification output.
- Score: 32.30627405832656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Time series data is prevalent in a wide variety of real-world applications
and it calls for trustworthy and explainable models for people to understand
and fully trust decisions made by AI solutions. We consider the problem of
building explainable classifiers from multi-variate time series data. A key
criterion for understanding such predictive models is elucidating and
quantifying the contribution of time-varying input variables to the
classification. Hence, we introduce a novel, modular, convolution-based feature
extraction and attention mechanism that simultaneously identifies the variables
as well as the time intervals that determine the classifier output. We present
results of extensive experiments with several benchmark data sets that show
that the proposed method outperforms the state-of-the-art baseline methods on
the multi-variate time series classification task. The results of our case studies
demonstrate that the variables and time intervals identified by the proposed
method make sense relative to available domain knowledge.
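The abstract describes the mechanism only at a high level. As a purely illustrative aid, here is a minimal PyTorch sketch (not the authors' released code; all module names, dimensions, and design choices are assumptions) of how per-variable convolutional features can be combined with attention over variables and over time steps, so that the learned attention weights can be read out as explanations.

```python
# Hedged sketch only: NOT the paper's architecture, just an illustration of the
# general idea -- per-variable convolutional features plus attention over
# variables and over time steps, whose weights serve as explanations.
import torch
import torch.nn as nn


class VariableTimeAttentionClassifier(nn.Module):
    def __init__(self, n_vars: int, n_classes: int, d: int = 32, k: int = 5):
        super().__init__()
        # One small convolutional extractor, applied to each variable as a
        # single-channel series (parameters shared across variables).
        self.conv = nn.Sequential(nn.Conv1d(1, d, kernel_size=k, padding=k // 2), nn.ReLU())
        self.var_score = nn.Linear(d, 1)   # attention logits over variables
        self.time_score = nn.Linear(d, 1)  # attention logits over time steps
        self.classifier = nn.Linear(d, n_classes)

    def forward(self, x):
        # x: (batch, n_vars, seq_len)
        b, v, t = x.shape
        h = self.conv(x.reshape(b * v, 1, t))           # (b*v, d, t)
        h = h.reshape(b, v, -1, t).permute(0, 1, 3, 2)  # (b, v, t, d)

        # Which variables matter for this sample?
        var_attn = torch.softmax(self.var_score(h.mean(dim=2)).squeeze(-1), dim=1)    # (b, v)
        # Which time steps (intervals) matter?
        time_attn = torch.softmax(self.time_score(h.mean(dim=1)).squeeze(-1), dim=1)  # (b, t)

        # Attention-weighted pooling over variables and time, then classification.
        pooled = torch.einsum("bvtd,bv,bt->bd", h, var_attn, time_attn)
        return self.classifier(pooled), var_attn, time_attn


# Usage: the returned attention maps can be plotted per sample as explanations.
model = VariableTimeAttentionClassifier(n_vars=6, n_classes=3)
logits, var_attn, time_attn = model(torch.randn(4, 6, 128))
```

In the paper's terminology, high entries of var_attn would correspond to important variables and high entries of time_attn to informative time intervals; the actual model proposed in the paper is modular and more elaborate than this sketch.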
Related papers
- Compatible Transformer for Irregularly Sampled Multivariate Time Series [75.79309862085303]
We propose a transformer-based encoder to achieve comprehensive temporal-interaction feature learning for each individual sample.
We conduct extensive experiments on 3 real-world datasets and validate that the proposed CoFormer significantly and consistently outperforms existing methods.
arXiv Detail & Related papers (2023-10-17T06:29:09Z)
- Evaluating Explanation Methods for Multivariate Time Series Classification [4.817429789586127]
The main focus of this paper is on analysing and evaluating explanation methods tailored to Multivariate Time Series Classification (MTSC).
We focus on saliency-based explanation methods that can point out the most relevant channels and time points for the classification decision (a generic input-gradient sketch of this idea appears after this list).
We study these methods on 3 synthetic datasets and 2 real-world datasets and provide a quantitative and qualitative analysis of the explanations provided.
arXiv Detail & Related papers (2023-08-29T11:24:12Z)
- DIVERSIFY: A General Framework for Time Series Out-of-distribution Detection and Generalization [58.704753031608625]
Time series is one of the most challenging modalities in machine learning research.
OOD detection and generalization on time series tend to suffer due to its non-stationary nature.
We propose DIVERSIFY, a framework for OOD detection and generalization on dynamic distributions of time series.
arXiv Detail & Related papers (2023-08-04T12:27:11Z)
- Robust Explainer Recommendation for Time Series Classification [4.817429789586127]
Time series classification is a common task in domains such as human activity recognition, sports analytics and general sensing.
Recently, a great variety of techniques have been proposed and adapted for time series to provide explanations in the form of saliency maps.
This paper provides a novel framework to quantitatively evaluate and rank explanation methods for time series classification.
arXiv Detail & Related papers (2023-06-08T18:49:23Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Mimic: An adaptive algorithm for multivariate time series classification [11.49627617337276]
Time series data are valuable but are often inscrutable.
Gaining trust in time series classifiers for finance, healthcare, and other critical applications may rely on creating interpretable models.
We propose a novel Mimic algorithm that retains the predictive accuracy of the strongest classifiers while introducing interpretability.
arXiv Detail & Related papers (2021-11-08T04:47:31Z)
- Instance-wise Graph-based Framework for Multivariate Time Series Forecasting [69.38716332931986]
We propose a simple yet efficient instance-wise graph-based framework to utilize the inter-dependencies of different variables at different time stamps.
The key idea of our framework is aggregating information from the historical time series of different variables to the current time series that we need to forecast.
arXiv Detail & Related papers (2021-09-14T07:38:35Z)
- On Disentanglement in Gaussian Process Variational Autoencoders [3.403279506246879]
We build on a recently introduced class of models that have been successful in different tasks on time series data.
Our model exploits the temporal structure of the data by modeling each latent channel with a GP prior and employing a structured variational distribution.
We provide evidence that we can learn meaningful disentangled representations on real-world medical time series data.
arXiv Detail & Related papers (2021-02-10T15:49:27Z)
- Learning summary features of time series for likelihood free inference [93.08098361687722]
We present a data-driven strategy for automatically learning summary features from time series data.
Our results indicate that learning summary features from data can compete with and even outperform LFI methods based on hand-crafted values.
arXiv Detail & Related papers (2020-12-04T19:21:37Z)
- Multivariable times series classification through an interpretable representation [0.0]
We propose a time series classification method that considers an alternative representation of time series through a set of descriptive features.
We have applied traditional classification algorithms, obtaining interpretable and competitive results.
arXiv Detail & Related papers (2020-09-08T09:44:03Z)
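For the saliency-based explanations referenced in the "Evaluating Explanation Methods for Multivariate Time Series Classification" entry above, the following is a generic, hedged sketch (plain input-gradient saliency, not a method taken from that paper or any other listed work) of how a channel-by-time relevance map can be computed for a trained PyTorch classifier; the `model` and `sample` objects in the usage note are assumed to exist.

```python
# Hedged illustration: generic input-gradient saliency for a trained multivariate
# time series classifier. `model` is assumed to be any torch.nn.Module mapping
# (batch, channels, length) inputs to class logits; this is not tied to any
# specific explanation method from the papers listed above.
import torch


def gradient_saliency(model: torch.nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return |d logit_target / d x|, one relevance score per channel and time step."""
    model.eval()
    x = x.detach().clone().unsqueeze(0).requires_grad_(True)  # (1, C, T)
    logits = model(x)
    logits[0, target_class].backward()
    return x.grad.detach().abs().squeeze(0)                   # (C, T)


# Usage sketch (assumes a trained `model` and a `sample` of shape (C, T)):
# saliency = gradient_saliency(model, sample, target_class=1)
# saliency.sum(dim=1).argmax()  # most relevant channel (variable)
# saliency.sum(dim=0).argmax()  # most relevant time point
```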
This list is automatically generated from the titles and abstracts of the papers in this site.