Multi-Modal Prototype Learning for Interpretable Multivariable Time Series Classification
- URL: http://arxiv.org/abs/2106.09636v1
- Date: Thu, 17 Jun 2021 16:32:47 GMT
- Title: Multi-Modal Prototype Learning for Interpretable Multivariable Time Series Classification
- Authors: Gaurav R. Ghosal and Reza Abbasi-Asl
- Abstract summary: Multivariable time series classification problems are increasing in prevalence and complexity.
Deep learning methods are an effective tool for these problems, but they often lack interpretability.
We propose a novel modular prototype learning framework for multivariable time series classification.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multivariable time series classification problems are increasing in
prevalence and complexity in a variety of domains, such as biology and finance.
While deep learning methods are an effective tool for these problems, they
often lack interpretability. In this work, we propose a novel modular prototype
learning framework for multivariable time series classification. In the first
stage of our framework, encoders extract features from each variable
independently. Prototype layers identify single-variable prototypes in the
resulting feature spaces. The next stage of our framework represents the
multivariable time series sample points in terms of their similarity to these
single-variable prototypes. This results in an inherently interpretable
representation of multivariable patterns, on which prototype learning is
applied to extract representative examples, i.e., multivariable prototypes. Our
framework thus explicitly identifies both the informative patterns within
individual variables and the relationships between them. We
validate our framework on a simulated dataset with embedded patterns, as well
as a real human activity recognition problem. Our framework attains comparable
or superior classification performance to existing time series classification
methods on these tasks. On the simulated dataset, we find that our model
returns interpretations consistent with the embedded patterns. Moreover, the
interpretations learned on the activity recognition dataset align with domain
knowledge.
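For concreteness, the sketch below illustrates the two-stage design described in the abstract: per-variable encoders feed single-variable prototype layers, the concatenated prototype similarities form the interpretable multivariable representation, and a second prototype layer plus a linear classifier act on it. This is a minimal PyTorch sketch; the module names, layer sizes, and the exp(-distance) similarity are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of the two-stage prototype framework; assumed details are noted in comments.
import torch
import torch.nn as nn

class PrototypeLayer(nn.Module):
    """Learnable prototype vectors; outputs similarity of each input to each prototype."""
    def __init__(self, num_prototypes: int, dim: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, dim) -> (batch, num_prototypes); exp(-distance) is an assumed similarity.
        return torch.exp(-torch.cdist(z, self.prototypes))

class MultiModalPrototypeNet(nn.Module):
    def __init__(self, num_vars: int, n_single: int = 8, n_multi: int = 10,
                 feat_dim: int = 32, num_classes: int = 6):
        super().__init__()
        # Stage 1: an independent encoder and single-variable prototype layer per variable.
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(16, feat_dim),
            ) for _ in range(num_vars)
        ])
        self.single_protos = nn.ModuleList(
            [PrototypeLayer(n_single, feat_dim) for _ in range(num_vars)]
        )
        # Stage 2: prototypes over the concatenated similarity representation.
        self.multi_protos = PrototypeLayer(n_multi, num_vars * n_single)
        self.classifier = nn.Linear(n_multi, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_vars, series_len)
        sims = []
        for v, (enc, protos) in enumerate(zip(self.encoders, self.single_protos)):
            z_v = enc(x[:, v:v + 1, :])      # per-variable features
            sims.append(protos(z_v))         # similarity to single-variable prototypes
        multi_repr = torch.cat(sims, dim=1)  # interpretable multivariable representation
        return self.classifier(self.multi_protos(multi_repr))

# Usage: a batch of 4 samples with 3 variables of length 128.
model = MultiModalPrototypeNet(num_vars=3)
logits = model(torch.randn(4, 3, 128))      # -> shape (4, 6)
```

Because the classifier only sees prototype similarities, each prediction can be traced back to the nearest single-variable and multivariable prototypes, which is the source of the interpretability claimed in the abstract.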
Related papers
- VSFormer: Value and Shape-Aware Transformer with Prior-Enhanced Self-Attention for Multivariate Time Series Classification [47.92529531621406]
We propose a novel method, VSFormer, that incorporates both discriminative patterns (shape) and numerical information (value).
In addition, we extract class-specific prior information derived from supervised information to enrich the positional encoding.
Extensive experiments on all 30 UEA archived datasets demonstrate the superior performance of our method compared to SOTA models.
arXiv Detail & Related papers (2024-12-21T07:31:22Z)
- On the Regularization of Learnable Embeddings for Time Series Processing [18.069747511100132]
We investigate methods to regularize the learning of local learnable embeddings for time series processing.
We show that methods preventing the co-adaptation of local and global parameters are particularly effective in this context.
arXiv Detail & Related papers (2024-10-18T17:30:20Z)
- Towards Generalisable Time Series Understanding Across Domains [10.350643783811174]
We introduce a novel pre-training paradigm specifically designed to handle time series heterogeneity.
We propose a tokeniser with learnable domain signatures, a dual masking strategy, and a normalised cross-correlation loss.
Our code and pre-trained weights are available at https://www.oetu.com/oetu/otis.
arXiv Detail & Related papers (2024-10-09T17:09:30Z)
- UniTST: Effectively Modeling Inter-Series and Intra-Series Dependencies for Multivariate Time Series Forecasting [98.12558945781693]
We propose a transformer-based model UniTST containing a unified attention mechanism on the flattened patch tokens.
Although our proposed model employs a simple architecture, it offers compelling performance as shown in our experiments on several datasets for time series forecasting.
arXiv Detail & Related papers (2024-06-07T14:39:28Z)
- Leveraging 2D Information for Long-term Time Series Forecasting with Vanilla Transformers [55.475142494272724]
Time series prediction is crucial for understanding and forecasting complex dynamics in various domains.
We introduce GridTST, a model that combines the benefits of two approaches using innovative multi-directional attentions.
The model consistently delivers state-of-the-art performance across various real-world datasets.
arXiv Detail & Related papers (2024-05-22T16:41:21Z)
- Advancing Time Series Classification with Multimodal Language Modeling [6.624754582682479]
We propose InstructTime, a novel attempt to reshape time series classification as a learning-to-generate paradigm.
The core idea is to formulate the classification of time series as a multimodal understanding task, in which both task-specific instructions and raw time series are treated as multimodal inputs.
Extensive experiments on benchmark datasets demonstrate the superior performance of InstructTime.
arXiv Detail & Related papers (2024-03-19T02:32:24Z)
- ConvTimeNet: A Deep Hierarchical Fully Convolutional Model for Multivariate Time Series Analysis [7.979501926410114]
ConvTimeNet is a hierarchical pure convolutional model designed for time series analysis.
It adaptively perceives local patterns of temporally dependent basic units in a data-driven manner.
A large kernel mechanism is employed to ensure that convolutional blocks can be deeply stacked.
arXiv Detail & Related papers (2024-03-03T12:05:49Z)
- Compatible Transformer for Irregularly Sampled Multivariate Time Series [75.79309862085303]
We propose a transformer-based encoder to achieve comprehensive temporal-interaction feature learning for each individual sample.
We conduct extensive experiments on 3 real-world datasets and validate that the proposed CoFormer significantly and consistently outperforms existing methods.
arXiv Detail & Related papers (2023-10-17T06:29:09Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Mimic: An adaptive algorithm for multivariate time series classification [11.49627617337276]
Time series data are valuable but are often inscrutable.
Gaining trust in time series classifiers for finance, healthcare, and other critical applications may rely on creating interpretable models.
We propose a novel Mimic algorithm that retains the predictive accuracy of the strongest classifiers while introducing interpretability.
arXiv Detail & Related papers (2021-11-08T04:47:31Z)
- Instance-wise Graph-based Framework for Multivariate Time Series Forecasting [69.38716332931986]
We propose a simple yet efficient instance-wise graph-based framework to utilize the inter-dependencies of different variables at different time stamps.
The key idea of our framework is aggregating information from the historical time series of different variables to the current time series that we need to forecast.
arXiv Detail & Related papers (2021-09-14T07:38:35Z)
- Explainable Multivariate Time Series Classification: A Deep Neural Network Which Learns To Attend To Important Variables As Well As Informative Time Intervals [32.30627405832656]
Time series data is prevalent in a wide variety of real-world applications.
A key criterion for understanding such predictive models is elucidating and quantifying the contribution of time-varying input variables to the classification.
We introduce a novel, modular, convolution-based feature extraction and attention mechanism that simultaneously identifies the variables as well as time intervals which determine the classification output.
arXiv Detail & Related papers (2020-11-23T19:16:46Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.