Persistence-Based Discretization for Learning Discrete Event Systems
from Time Series
- URL: http://arxiv.org/abs/2301.05041v2
- Date: Mon, 19 Jun 2023 09:37:25 GMT
- Title: Persistence-Based Discretization for Learning Discrete Event Systems
from Time Series
- Authors: Lénaïg Cornanguer (LACODAM, IRISA), Christine Largouët (LACODAM,
IRISA), Laurence Rozé (LACODAM, IRISA), Alexandre Termier (LACODAM, IRISA)
- Abstract summary: Persist is a discretization method that aims to create persisting symbols using a score called the persistence score.
We replace the metric used in the persistence score, the Kullback-Leibler divergence, with the Wasserstein distance.
Experiments show that the improved persistence score enhances Persist's ability to capture the information of the original time series.
- Score: 50.591267188664666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To get a good understanding of a dynamical system, it is convenient to have
an interpretable and versatile model of it. Timed discrete event systems are a
kind of model that respond to these requirements. However, such models can be
inferred from timestamped event sequences but not directly from numerical data.
To solve this problem, a discretization step must be performed to identify
events or symbols in the time series. Persist is a discretization method that
aims to create persisting symbols using a score called the persistence score.
This mitigates the risk of undesirable symbol changes that would lead to an
overly complex model. After studying the persistence score, we point out that
it tends to favor extreme cases, causing it to miss interesting persisting
symbols. To correct this behavior, we replace the metric used in the
persistence score, the Kullback-Leibler divergence, with the Wasserstein
distance. Experiments show that the improved persistence score enhances
Persist's ability to capture the information of the original time series and
makes it better suited for learning discrete event systems.
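To make the score concrete, below is a minimal Python sketch of one plausible reading of the persistence score. It assumes the score compares, for each symbol s, the self-transition probability A = P(s_t = s | s_{t-1} = s) against the marginal probability B = P(s), treating both as Bernoulli distributions; the original Persist measures their discrepancy with a signed symmetric Kullback-Leibler divergence, and the proposed variant swaps in the 1-Wasserstein distance, which for two Bernoulli distributions reduces to |A - B|. The function name and details are illustrative, not the paper's reference implementation.

```python
import numpy as np

def persistence_score(symbols, metric="wasserstein"):
    """Illustrative persistence score of a symbolic sequence.

    For each symbol s, compare the self-transition probability
    A = P(s_t = s | s_{t-1} = s) with the marginal probability B = P(s);
    a persisting symbol has A > B, an anti-persisting one A < B.
    """
    symbols = np.asarray(symbols)
    eps = 1e-12
    scores = []
    for s in np.unique(symbols):
        B = np.mean(symbols == s)                  # marginal probability of s
        prev_is_s = symbols[:-1] == s
        if not prev_is_s.any():
            continue
        A = np.mean(symbols[1:][prev_is_s] == s)   # self-transition probability
        a, b = np.clip([A, B], eps, 1 - eps)       # avoid log(0)
        if metric == "wasserstein":
            # W1 between Bernoulli(a) and Bernoulli(b) is |a - b|;
            # keeping the sign, the per-symbol score is simply a - b.
            scores.append(a - b)
        else:
            # Signed symmetric KL divergence between Bernoulli(a) and
            # Bernoulli(b), as in the original Persist score.
            skl = (a - b) * (np.log(a / b) - np.log((1 - a) / (1 - b)))
            scores.append(np.sign(a - b) * skl)
    return float(np.mean(scores))

# A sticky sequence should score higher than a rapidly alternating one.
print(persistence_score(list("aaaabbbbaaaabbbb")))   # clearly positive
print(persistence_score(list("abababababababab")))   # clearly negative
```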
Related papers
- Contrastive Learning Is Not Optimal for Quasiperiodic Time Series [4.2807943283312095]
We introduce Distilled Embedding for Almost-Periodic Time Series (DEAPS) in this paper.
DEAPS is a non-contrastive method tailored for quasiperiodic time series, such as electrocardiogram (ECG) data.
We demonstrate a notable improvement of +10% over existing SOTA methods when only a few annotated records are available to fit a machine learning (ML) model.
arXiv Detail & Related papers (2024-07-24T08:02:41Z)
- ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model class, namely "Denoising Diffusion Probabilistic Models" (DDPMs), for chirographic data.
Our model, named "ChiroDiff", being non-autoregressive, learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z)
- Learning the Dynamics of Sparsely Observed Interacting Systems [0.6021787236982659]
We address the problem of learning the dynamics of an unknown non-parametric system linking a target and a feature time series.
By leveraging the rich theory of signatures, we are able to cast this non-linear problem as a high-dimensional linear regression (see the signature-regression sketch after this list).
arXiv Detail & Related papers (2023-01-27T10:48:28Z)
- DynImp: Dynamic Imputation for Wearable Sensing Data Through Sensory and Temporal Relatedness [78.98998551326812]
We argue that traditional methods have rarely made use of both the time-series dynamics of the data and the relatedness of features from different sensors.
We propose a model, termed DynImp, that handles missingness at different time points using nearest neighbors along the feature axis.
We show that the method can exploit multi-modal features from related sensors and learn from historical time-series dynamics to reconstruct data under extreme missingness (see the imputation sketch after this list).
arXiv Detail & Related papers (2022-09-26T21:59:14Z)
- Learning to Reconstruct Missing Data from Spatiotemporal Graphs with Sparse Observations [11.486068333583216]
This paper tackles the problem of learning effective models to reconstruct missing data points.
We propose a class of attention-based architectures that, given a set of highly sparse observations, learn a representation for points in time and space.
Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies.
arXiv Detail & Related papers (2022-05-26T16:40:48Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Deep Explicit Duration Switching Models for Time Series [84.33678003781908]
We propose a flexible model that is capable of identifying both state- and time-dependent switching dynamics.
State-dependent switching is enabled by a recurrent state-to-switch connection.
An explicit duration count variable is used to improve the time-dependent switching behavior.
arXiv Detail & Related papers (2021-10-26T17:35:21Z)
- Attention to Warp: Deep Metric Learning for Multivariate Time Series [28.540348999309547]
This paper proposes a novel neural network-based approach for robust yet discriminative time series classification and verification.
We experimentally demonstrate the superiority of the proposed approach over previous non-parametric and deep models.
arXiv Detail & Related papers (2021-03-28T07:54:01Z)
- Contrastive learning of strong-mixing continuous-time stochastic processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
arXiv Detail & Related papers (2021-03-03T23:06:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.
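For the signature-based entry above ("Learning the Dynamics of Sparsely Observed Interacting Systems"), here is a brief sketch of the general pattern rather than that paper's method: a truncated path signature turns each time series into a fixed-length feature vector in which nonlinear functionals of the path become approximately linear. The toy data, target, `iisignature` library, and ridge regressor are all illustrative assumptions.

```python
import numpy as np
import iisignature                       # pip install iisignature
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, T, d, depth = 200, 50, 2, 3

# Hypothetical data: one 2-channel path per sample, one scalar target.
X = rng.normal(size=(n_samples, T, d)).cumsum(axis=1)  # random-walk paths
y = X[:, -1, 0] * X[:, -1, 1]                          # toy nonlinear target

# Truncated signature (levels 1..depth) of each path: a fixed-length
# vector of iterated integrals, here of size 2 + 4 + 8 = 14.
feats = np.stack([iisignature.sig(x, depth) for x in X])

# The nonlinear regression problem becomes (approximately) linear
# in signature space.
model = Ridge(alpha=1.0).fit(feats, y)
print("training R^2:", model.score(feats, y))
```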
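Similarly, for the DynImp entry, the sketch below shows only a nearest-neighbor imputation baseline in the same spirit, using scikit-learn's KNNImputer; DynImp itself additionally learns from the temporal dynamics. The data layout (rows as time points, columns as sensor channels) and the 20% missingness rate are assumptions for illustration.

```python
import numpy as np
from sklearn.impute import KNNImputer

rng = np.random.default_rng(0)

# Hypothetical multi-sensor recording: rows are time points, columns are
# correlated sensor channels; NaNs mark missing readings.
data = rng.normal(size=(100, 6)).cumsum(axis=0)
mask = rng.random(data.shape) < 0.2            # drop ~20% of the readings
data_missing = np.where(mask, np.nan, data)

# For each time point with gaps, KNNImputer finds the most similar time
# points based on the channels observed in common and averages their
# values for the missing channels.
imputed = KNNImputer(n_neighbors=5).fit_transform(data_missing)
print("mean abs. reconstruction error:", np.abs(imputed - data)[mask].mean())
```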