TimeREISE: Time-series Randomized Evolving Input Sample Explanation
- URL: http://arxiv.org/abs/2202.07952v1
- Date: Wed, 16 Feb 2022 09:40:13 GMT
- Title: TimeREISE: Time-series Randomized Evolving Input Sample Explanation
- Authors: Dominique Mercier, Andreas Dengel, Sheraz Ahmed
- Abstract summary: TimeREISE is a model-agnostic attribution method designed specifically for time series classification.
The method shows superior performance compared to existing approaches with respect to several well-established metrics.
- Score: 5.557646286040063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are among the most successful classifiers across
different domains. However, due to their limited interpretability, their use is
restricted in safety-critical contexts. The research field of explainable
artificial intelligence addresses this problem, yet most interpretability
methods are designed for the image modality. This paper introduces TimeREISE,
a model-agnostic attribution method designed specifically for time series
classification. The method shows superior performance compared to existing
approaches with respect to several well-established metrics. TimeREISE is
applicable to any time series classification network, its runtime does not
scale linearly with the input shape, and it does not rely on prior data
knowledge.
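The abstract does not spell out the attribution procedure, but the name points to a RISE-style randomized masking scheme adapted to time series. The snippet below is a minimal sketch of such a perturbation-based attribution, assuming a generic `predict` callable that maps a batch of shape (n, T, C) to class probabilities; the mask granularity, interpolation, and keep-probability are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def perturbation_attribution(predict, x, target_class, n_masks=500, grid=8, p_keep=0.5, seed=0):
    """RISE-style saliency sketch for a time series classifier (illustrative, not the paper's code).

    predict      -- assumed callable: batch of shape (n, T, C) -> class probabilities (n, K)
    x            -- single input of shape (T, C)
    target_class -- index of the class to explain
    Returns an importance map of shape (T, C).
    """
    rng = np.random.default_rng(seed)
    T, C = x.shape
    importance = np.zeros((T, C), dtype=np.float64)
    total_weight = 0.0
    for _ in range(n_masks):
        # Draw a coarse random binary mask on a small temporal grid per channel
        coarse = (rng.random((grid, C)) < p_keep).astype(np.float64)
        # Upsample the coarse mask to the full sequence length with linear interpolation
        mask = np.stack(
            [np.interp(np.linspace(0, grid - 1, T), np.arange(grid), coarse[:, c]) for c in range(C)],
            axis=1,
        )
        # Score the masked input and accumulate the mask weighted by the class probability
        score = float(predict((x * mask)[None, ...])[0, target_class])
        importance += score * mask
        total_weight += score
    return importance / max(total_weight, 1e-12)
```

The number of forward passes is governed by `n_masks` rather than by the input length, which is consistent with the runtime-scaling behaviour the abstract highlights.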
Related papers
- TSI-Bench: Benchmarking Time Series Imputation [52.27004336123575]
TSI-Bench is a comprehensive benchmark suite for time series imputation utilizing deep learning techniques.
The TSI-Bench pipeline standardizes experimental settings to enable fair evaluation of imputation algorithms.
TSI-Bench innovatively provides a systematic paradigm to tailor time series forecasting algorithms for imputation purposes.
arXiv Detail & Related papers (2024-06-18T16:07:33Z)
- TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling [67.02157180089573]
Time series pre-training has recently garnered wide attention for its potential to reduce labeling expenses and benefit various downstream tasks.
This paper proposes TimeSiam, a simple but effective self-supervised pre-training framework for time series based on Siamese networks.
arXiv Detail & Related papers (2024-02-04T13:10:51Z)
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z)
- Direct Embedding of Temporal Network Edges via Time-Decayed Line Graphs [51.51417735550026]
Methods for machine learning on temporal networks generally exhibit at least one of two limitations.
We present a simple method that avoids both shortcomings: construct the line graph of the network, which includes a node for each interaction, and weigh the edges of this graph based on the difference in time between interactions.
Empirical results on real-world networks demonstrate our method's efficacy and efficiency on both edge classification and temporal link prediction.
arXiv Detail & Related papers (2022-09-30T18:24:13Z)
- On Principal Curve-Based Classifiers and Similarity-Based Selective Sampling in Time-Series [0.0]
This paper proposes a deterministic selective sampling algorithm with the same computational steps as the classifier, both using the principal curve as the building block of their model definition.
Considering labeling costs and the constraints of online monitoring devices, an algorithm is needed that finds the data points whose labels, once known, lead to better classifier performance.
arXiv Detail & Related papers (2022-04-10T07:28:18Z)
- The FreshPRINCE: A Simple Transformation Based Pipeline Time Series Classifier [0.0]
We examine whether the complexity of the algorithms considered state of the art is really necessary.
Often, the first approach suggested is a simple pipeline of summary statistics or other time series feature extraction approaches.
We test these approaches on the UCR time series dataset archive to see whether the TSC literature has overlooked their effectiveness.
arXiv Detail & Related papers (2022-01-28T11:23:58Z)
- Time Series Analysis via Network Science: Concepts and Algorithms [62.997667081978825]
This review provides a comprehensive overview of existing mapping methods for transforming time series into networks.
We describe the main conceptual approaches, provide authoritative references and give insight into their advantages and limitations in a unified notation and language.
Although still very recent, this research area has much potential and with this survey we intend to pave the way for future research on the topic.
arXiv Detail & Related papers (2021-10-11T13:33:18Z)
- Multivariate Time Series Imputation by Graph Neural Networks [13.308026049048717]
We introduce a graph neural network architecture, named GRIL, which aims at reconstructing missing data in different channels of a multivariate time series.
Preliminary results show that our model outperforms state-of-the-art methods in the imputation task on relevant benchmarks.
arXiv Detail & Related papers (2021-07-31T17:47:10Z)
- Attention to Warp: Deep Metric Learning for Multivariate Time Series [28.540348999309547]
This paper proposes a novel neural network-based approach for robust yet discriminative time series classification and verification.
We experimentally demonstrate the superiority of the proposed approach over previous non-parametric and deep models.
arXiv Detail & Related papers (2021-03-28T07:54:01Z)
- PatchX: Explaining Deep Models by Intelligible Pattern Patches for Time-series Classification [6.820831423843006]
We propose a novel hybrid approach that utilizes deep neural networks and traditional machine learning algorithms.
Our method first performs a fine-grained classification of the patches, followed by a sample-level classification.
arXiv Detail & Related papers (2021-02-11T10:08:09Z)
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.