PatchX: Explaining Deep Models by Intelligible Pattern Patches for Time-series Classification
- URL: http://arxiv.org/abs/2102.05917v1
- Date: Thu, 11 Feb 2021 10:08:09 GMT
- Title: PatchX: Explaining Deep Models by Intelligible Pattern Patches for Time-series Classification
- Authors: Dominique Mercier, Andreas Dengel, Sheraz Ahmed
- Abstract summary: We propose a novel hybrid approach that utilizes deep neural networks and traditional machine learning algorithms.
Our method first performs a fine-grained classification of the patches, followed by sample-level classification.
- Score: 6.820831423843006
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The classification of time-series data is pivotal for streaming data and
comes with many challenges. Although the number of publicly available datasets is
increasing rapidly, deep neural models are exploited in only a few areas, and
traditional methods are still used far more often. These methods are preferred in
safety-critical, financial, and medical fields because of their interpretable
results. However, their performance and scalability are limited, and finding
suitable explanations for time-series classification tasks is challenging due to
the concepts hidden in the numerical time-series data. Visualizing a complete
time series causes cognitive overload and leads to confusion. Therefore, we
believe that patch-wise processing of the data results in a more interpretable
representation. We propose a novel hybrid approach that combines deep neural
networks with traditional machine learning algorithms to obtain an interpretable
and scalable time-series classification method. Our method first performs a
fine-grained classification of the patches, followed by sample-level
classification.
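To make the two-stage idea concrete, below is a minimal sketch of a patch-wise pipeline in the spirit of the abstract. The patch length, the MLP patch classifier, the decision-tree sample classifier, and the synthetic data are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch of a patch-wise, two-stage pipeline (illustrative only).
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.tree import DecisionTreeClassifier

    def make_patches(series, patch_len):
        """Split a 1D series into non-overlapping patches of equal length."""
        n = len(series) // patch_len
        return series[: n * patch_len].reshape(n, patch_len)

    # Synthetic data: 200 univariate series of length 128 with binary labels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 128))
    y = rng.integers(0, 2, size=200)
    X[y == 1, 32:64] += 2.0            # class-specific pattern in one region

    patch_len = 16

    # Stage 1: fine-grained patch classification with a small neural network.
    # Each patch inherits the label of the series it came from.
    patches = np.vstack([make_patches(s, patch_len) for s in X])
    patch_labels = np.repeat(y, 128 // patch_len)
    patch_clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                              random_state=0)
    patch_clf.fit(patches, patch_labels)

    # Stage 2: sample-level classification with an interpretable traditional
    # model, using the per-patch class probabilities as features.
    def sample_features(series):
        proba = patch_clf.predict_proba(make_patches(series, patch_len))
        return proba[:, 1]             # one probability per patch

    F = np.array([sample_features(s) for s in X])
    sample_clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    sample_clf.fit(F, y)
    print("train accuracy:", sample_clf.score(F, y))

Because the sample-level model sees exactly one feature per patch, its decision path points back to specific patches of the input series, which is the kind of interpretable representation the abstract argues for.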
Related papers
- An End-to-End Model for Time Series Classification In the Presence of Missing Values [25.129396459385873]
Time series classification with missing data is a prevalent issue in time series analysis.
This study proposes an end-to-end neural network that unifies data imputation and representation learning within a single framework.
arXiv Detail & Related papers (2024-08-11T19:39:12Z)
- Multivariate Time Series Early Classification Across Channel and Time Dimensions [3.5786621294068373]
We propose a more flexible early classification pipeline that offers a more granular consideration of input channels.
Our method can enhance the early classification paradigm by achieving improved accuracy for equal input utilization.
arXiv Detail & Related papers (2023-06-26T11:30:33Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- TimeREISE: Time-series Randomized Evolving Input Sample Explanation [5.557646286040063]
TimeREISE is a model attribution method designed specifically for time series classification.
It shows superior performance compared to existing approaches across several well-established metrics.
arXiv Detail & Related papers (2022-02-16T09:40:13Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves much higher classification accuracy than previous methods when given only a handful of examples.
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- Visualising Deep Network's Time-Series Representations [93.73198973454944]
Despite the popularisation of machine learning models, more often than not they still operate as black boxes with no insight into what is happening inside the model.
In this paper, a method that addresses that issue is proposed, with a focus on visualising multi-dimensional time-series data.
Experiments on a high-frequency stock market dataset show that the method provides fast and discernible visualisations.
arXiv Detail & Related papers (2021-03-12T09:53:34Z)
- Deep learning for time series classification [2.0305676256390934]
Time series analysis allows us to visualize and understand the evolution of a process over time.
Time series classification consists of constructing algorithms dedicated to automatically label time series data.
Deep learning has emerged as one of the most effective methods for tackling the supervised classification task.
arXiv Detail & Related papers (2020-10-01T17:38:40Z)
- Solving Long-tailed Recognition with Deep Realistic Taxonomic Classifier [68.38233199030908]
Long-tail recognition tackles the naturally non-uniformly distributed data found in real-world scenarios.
While modern classifiers perform well on populated classes, their performance degrades significantly on tail classes.
Deep-RTC is proposed as a new solution to the long-tail problem, combining realism with hierarchical predictions.
arXiv Detail & Related papers (2020-07-20T05:57:42Z)
- Temporal Calibrated Regularization for Robust Noisy Label Learning [60.90967240168525]
Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale well annotated datasets.
However, labeling large-scale data can be very costly and error-prone, making it difficult to guarantee annotation quality.
We propose a Temporal Calibrated Regularization (TCR) in which we utilize the original labels and the predictions in the previous epoch together.
arXiv Detail & Related papers (2020-07-01T04:48:49Z)
- Fine-Grain Few-Shot Vision via Domain Knowledge as Hyperspherical Priors [79.22051549519989]
Prototypical networks have been shown to perform well at few-shot learning tasks in computer vision.
We show how we can achieve few-shot fine-grain classification by maximally separating the classes while incorporating domain knowledge as informative priors.
arXiv Detail & Related papers (2020-05-23T02:10:57Z)
- Conditional Mutual information-based Contrastive Loss for Financial Time Series Forecasting [12.0855096102517]
We present a representation learning framework for financial time series forecasting.
In this paper, we propose to first learn compact representations from time series data, then use the learned representations to train a simpler model for predicting time series movements.
arXiv Detail & Related papers (2020-02-18T15:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.