Persistence-Based Discretization for Learning Discrete Event Systems
from Time Series
- URL: http://arxiv.org/abs/2301.05041v2
- Date: Mon, 19 Jun 2023 09:37:25 GMT
- Title: Persistence-Based Discretization for Learning Discrete Event Systems
from Time Series
- Authors: Lénaïg Cornanguer (LACODAM, IRISA), Christine Largouët (LACODAM,
  IRISA), Laurence Rozé (LACODAM, IRISA), Alexandre Termier (LACODAM, IRISA)
- Abstract summary: Persist is a discretization method that aims to create persisting symbols using a score called the persistence score.
We replace the metric used in the persistence score, the Kullback-Leibler divergence, with the Wasserstein distance.
Experiments show that the improved persistence score enhances Persist's ability to capture the information of the original time series.
- Score: 50.591267188664666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To get a good understanding of a dynamical system, it is convenient to have
an interpretable and versatile model of it. Timed discrete event systems are a
kind of model that respond to these requirements. However, such models can be
inferred from timestamped event sequences but not directly from numerical data.
To solve this problem, a discretization step must be done to identify events or
symbols in the time series. Persist is a discretization method that aims to
create persisting symbols using a score called the persistence score. This
mitigates the risk of undesirable symbol changes that would lead to an overly
complex model. After studying the persistence score, we point out that it tends
to favor extreme cases, causing it to miss interesting persisting symbols. To
correct this behavior, we replace the metric used in the
persistence score, the Kullback-Leibler divergence, with the Wasserstein
distance. Experiments show that the improved persistence score enhances
Persist's ability to capture the information of the original time series and
that it makes it better suited for discrete event systems learning.
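The abstract describes a two-step pipeline: discretize a numeric time series into symbols, then score how "persisting" the symbols are, with the Wasserstein distance replacing the Kullback-Leibler divergence inside the persistence score. Below is a minimal sketch of that idea; the equal-frequency binning, the function names, and the mean aggregation over symbols are illustrative assumptions, not the authors' implementation. Since both distributions compared here are Bernoulli on {0, 1}, the Wasserstein-1 distance reduces to |p - q|, so no optimal-transport library is needed.

```python
import numpy as np

def discretize(series, n_symbols):
    """Equal-frequency binning: map each value to a symbol in 0..n_symbols-1
    (an illustrative stand-in for the discretization step)."""
    edges = np.quantile(series, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(series, edges)

def _symmetric_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between Bernoulli(p) and Bernoulli(q)."""
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    kl = lambda a, b: a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))
    return kl(p, q) + kl(q, p)

def persistence_score(symbols, n_symbols, metric="wasserstein"):
    """Mean per-symbol persistence: compare each symbol's self-transition
    probability with its marginal probability, signed by whether the symbol
    persists more (+) or less (-) than chance would predict."""
    symbols = np.asarray(symbols)
    scores = []
    for s in range(n_symbols):
        mask = symbols == s
        p_marg = mask.mean()                 # marginal P(symbol = s)
        n_from_s = mask[:-1].sum()           # transitions leaving symbol s
        p_self = (mask[:-1] & mask[1:]).sum() / n_from_s if n_from_s else 0.0
        if metric == "wasserstein":
            d = abs(p_self - p_marg)  # W1 between two Bernoulli distributions
        else:
            d = _symmetric_kl(p_self, p_marg)
        scores.append(np.sign(p_self - p_marg) * d)
    return float(np.mean(scores))
```

On a series made of long constant runs the score comes out clearly positive, while shuffling the same symbols drives it toward zero, which is the behavior a persistence score is meant to reward.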
Related papers
- Motion Code: Robust Time series Classification and Forecasting via Sparse Variational Multi-Stochastic Processes Learning [3.2857981869020327]
We introduce a novel framework that considers each time series as a sample realization of a continuous-time process.
Such a mathematical model explicitly captures the data dependence across several timestamps and separates hidden time-dependent signals from noise.
We then propose the abstract concept of the most informative timestamps to infer a sparse approximation of the individual dynamics based on their assigned vectors.
arXiv Detail & Related papers (2024-02-21T19:10:08Z)
- ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model-class namely "Denoising Diffusion Probabilistic Models" or DDPMs for chirographic data.
Our model, named "ChiroDiff", is non-autoregressive: it learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z)
- Continuous-time convolutions model of event sequences [53.36665135225617]
Huge samples of event sequence data occur in various domains, including e-commerce, healthcare, and finance.
The amount of available data and the length of event sequences per client are typically large, which calls for long-term modelling.
We propose the COTIC method based on a continuous convolution neural network suitable for non-uniform occurrence of events in time.
arXiv Detail & Related papers (2023-02-13T10:34:51Z)
- Learning the Dynamics of Sparsely Observed Interacting Systems [0.6021787236982659]
We address the problem of learning the dynamics of an unknown non-parametric system linking a target and a feature time series.
By leveraging the rich theory of signatures, we are able to cast this non-linear problem as a high-dimensional linear regression.
arXiv Detail & Related papers (2023-01-27T10:48:28Z)
- Learning to Reconstruct Missing Data from Spatiotemporal Graphs with Sparse Observations [11.486068333583216]
This paper tackles the problem of learning effective models to reconstruct missing data points.
We propose a class of attention-based architectures that, given a set of highly sparse observations, learn a representation for points in time and space.
Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies.
arXiv Detail & Related papers (2022-05-26T16:40:48Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Deep Explicit Duration Switching Models for Time Series [84.33678003781908]
We propose a flexible model that is capable of identifying both state- and time-dependent switching dynamics.
State-dependent switching is enabled by a recurrent state-to-switch connection.
An explicit duration count variable is used to improve the time-dependent switching behavior.
arXiv Detail & Related papers (2021-10-26T17:35:21Z)
- Attention to Warp: Deep Metric Learning for Multivariate Time Series [28.540348999309547]
This paper proposes a novel neural network-based approach for robust yet discriminative time series classification and verification.
We experimentally demonstrate the superiority of the proposed approach over previous non-parametric and deep models.
arXiv Detail & Related papers (2021-03-28T07:54:01Z)
- Contrastive learning of strong-mixing continuous-time stochastic processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
arXiv Detail & Related papers (2021-03-03T23:06:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.