Interpretable time series neural representation for classification purposes
- URL: http://arxiv.org/abs/2310.16696v1
- Date: Wed, 25 Oct 2023 15:06:57 GMT
- Title: Interpretable time series neural representation for classification purposes
- Authors: Etienne Le Naour, Ghislain Agoua, Nicolas Baskiotis, Vincent Guigue
- Abstract summary: The proposed model produces consistent, discrete, interpretable, and visualizable representations.
The experiments show that the proposed model yields, on average, better results than other interpretable approaches on multiple datasets.
- Score: 3.1201323892302444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has made significant advances in creating efficient
representations of time series data by automatically identifying complex
patterns. However, these approaches lack interpretability, as the time series
is transformed into a latent vector that is not easily interpretable. On the
other hand, Symbolic Aggregate approximation (SAX) methods allow the creation
of symbolic representations that can be interpreted but do not capture complex
patterns effectively. In this work, we propose a set of requirements for a
neural representation of univariate time series to be interpretable. We propose
a new unsupervised neural architecture that meets these requirements. The
proposed model produces consistent, discrete, interpretable, and visualizable
representations. The model is learned independently of any downstream tasks in
an unsupervised setting to ensure robustness. As a demonstration of the
effectiveness of the proposed model, we propose experiments on classification
tasks using UCR archive datasets. The obtained results are extensively compared
to other interpretable models and state-of-the-art neural representation
learning models. The experiments show that the proposed model yields, on
average, better results than other interpretable approaches on multiple
datasets. We also present qualitative experiments to assess the
interpretability of the approach.
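For concreteness, the SAX baseline the abstract contrasts with can be sketched in a few lines: z-normalize the series, compress it with Piecewise Aggregate Approximation (PAA), then map each segment mean to a letter using equiprobable breakpoints of the standard normal distribution. The sketch below is the classic SAX algorithm, not the paper's proposed neural model; the function and parameter names are illustrative:

```python
import numpy as np
from statistics import NormalDist

def sax(series, n_segments=8, alphabet_size=4):
    """Classic SAX: z-normalize, PAA-compress, then discretize to letters."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)  # z-normalization
    # Piecewise Aggregate Approximation: mean of each segment
    paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
    # Equiprobable breakpoints under the standard normal distribution
    breakpoints = [NormalDist().inv_cdf(i / alphabet_size)
                   for i in range(1, alphabet_size)]
    letters = "abcdefghijklmnopqrstuvwxyz"[:alphabet_size]
    return "".join(letters[np.searchsorted(breakpoints, v)] for v in paa)
```

A steadily rising series maps to an increasing word such as "abcd": exactly the kind of human-readable summary the abstract contrasts with opaque latent vectors, and also an illustration of SAX's limitation, since any pattern within a segment is collapsed to its mean.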
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z)
- ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model class, "Denoising Diffusion Probabilistic Models" (DDPMs), for chirographic data.
Our model, named "ChiroDiff", is non-autoregressive, learns to capture holistic concepts, and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z)
- Artificial neural networks and time series of counts: A class of nonlinear INGARCH models [0.0]
It is shown how INGARCH models can be combined with artificial neural network (ANN) response functions to obtain a class of nonlinear INGARCH models.
The ANN framework allows for the interpretation of many existing INGARCH models as a degenerate version of a corresponding neural model.
The empirical analysis of time series of bounded and unbounded counts reveals that the neural INGARCH models outperform reasonable degenerate competitor models in terms of information loss.
arXiv Detail & Related papers (2023-04-03T14:26:16Z)
- Learning Sparsity of Representations with Discrete Latent Variables [15.05207849434673]
We propose a sparse deep latent generative model (SDLGM) to explicitly model the degree of sparsity.
The resulting sparsity of a representation is not fixed, but adapts to the observation itself under a pre-defined restriction.
For inference and learning, we develop an amortized variational method based on an MC gradient estimator.
arXiv Detail & Related papers (2023-04-03T12:47:18Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Learning to Reconstruct Missing Data from Spatiotemporal Graphs with Sparse Observations [11.486068333583216]
This paper tackles the problem of learning effective models to reconstruct missing data points.
We propose a class of attention-based architectures, that given a set of highly sparse observations, learn a representation for points in time and space.
Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies.
arXiv Detail & Related papers (2022-05-26T16:40:48Z)
- Fair Interpretable Representation Learning with Correction Vectors [60.0806628713968]
We propose a new framework for fair representation learning that is centered around the learning of "correction vectors".
We show experimentally that several fair representation learning models constrained in such a way do not exhibit losses in ranking or classification performance.
arXiv Detail & Related papers (2022-02-07T11:19:23Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- It's FLAN time! Summing feature-wise latent representations for interpretability [0.0]
We propose a novel class of structurally-constrained neural networks, which we call FLANs (Feature-wise Latent Additive Networks).
FLANs process each input feature separately, computing for each of them a representation in a common latent space.
These feature-wise latent representations are then simply summed, and the aggregated representation is used for prediction.
arXiv Detail & Related papers (2021-06-18T12:19:33Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.