Markov Modeling of Time-Series Data using Symbolic Analysis
- URL: http://arxiv.org/abs/2103.11238v2
- Date: Tue, 23 Mar 2021 19:38:10 GMT
- Title: Markov Modeling of Time-Series Data using Symbolic Analysis
- Authors: Devesh K. Jha
- Abstract summary: We will review the different techniques for discretization and memory estimation for discrete processes.
We will present some results from the literature on partitioning from dynamical systems theory and order estimation using concepts of information theory and statistical learning.
- Score: 8.522582405896653
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Markov models are often used to capture the temporal patterns of sequential
data for statistical learning applications. While the Hidden Markov
modeling-based learning mechanisms are well studied in the literature, we analyze a
symbolic-dynamics-inspired approach. Under this umbrella, Markov modeling of
time-series data consists of two major steps -- discretization of continuous
attributes followed by estimating the size of temporal memory of the
discretized sequence. These two steps are critical for the accurate and concise
representation of time-series data in the discrete space. Discretization
governs the information content of the resultant discretized sequence. On the
other hand, memory estimation of the symbolic sequence helps to extract the
predictive patterns in the discretized data. Clearly, the effectiveness of
signal representation as a discrete Markov process depends on both these steps.
In this paper, we will review the different techniques for discretization and
memory estimation for discrete stochastic processes. In particular, we will
focus on the individual problems of discretization and order estimation for
discrete stochastic processes. We will present some results from the literature on
partitioning from dynamical systems theory and order estimation using concepts
of information theory and statistical learning. The paper also presents some
related problem formulations that will be useful for machine learning and
statistical learning applications using the symbolic framework of data analysis.
We present some results of statistical analysis of a complex thermoacoustic
instability phenomenon during lean-premixed combustion in jet-turbine engines
using the proposed Markov modeling method.
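As a rough illustration of the two-step pipeline described above, the following Python sketch discretizes a continuous signal with maximum-entropy (quantile) partitioning and then estimates the size of the temporal memory by checking when the conditional entropy of the next symbol stops decreasing as longer histories are used. The partition size, stopping tolerance, and helper names (max_entropy_partition, estimate_order) are illustrative assumptions, not the exact algorithms studied in the paper.

```python
import numpy as np
from collections import Counter

def max_entropy_partition(x, n_symbols=4):
    """Discretize a 1-D signal with quantile (maximum-entropy) partitioning,
    so every symbol occurs with roughly equal probability."""
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)  # symbols in {0, ..., n_symbols - 1}

def conditional_entropy(symbols, order):
    """Empirical entropy (bits) of the next symbol given the previous `order` symbols."""
    if order == 0:
        probs = np.bincount(symbols) / len(symbols)
        probs = probs[probs > 0]
        return float(-np.sum(probs * np.log2(probs)))
    ctx_counts, joint_counts = Counter(), Counter()
    for i in range(order, len(symbols)):
        ctx = tuple(symbols[i - order:i])
        ctx_counts[ctx] += 1
        joint_counts[ctx + (symbols[i],)] += 1
    n = sum(joint_counts.values())
    h = 0.0
    for key, c in joint_counts.items():
        p_joint = c / n
        p_cond = c / ctx_counts[key[:-1]]
        h -= p_joint * np.log2(p_cond)
    return h

def estimate_order(symbols, max_order=5, tol=0.05):
    """Pick the smallest order beyond which adding history barely reduces entropy."""
    h_prev = conditional_entropy(symbols, 0)
    for d in range(1, max_order + 1):
        h = conditional_entropy(symbols, d)
        if h_prev - h < tol:  # no meaningful gain from a longer memory
            return d - 1
        h_prev = h
    return max_order

# Toy usage: an AR(1)-like signal should be captured by a short memory.
rng = np.random.default_rng(0)
x = np.zeros(5000)
for t in range(1, len(x)):
    x[t] = 0.8 * x[t - 1] + rng.normal()
s = max_entropy_partition(x, n_symbols=4)
print("estimated Markov order:", estimate_order(s))
```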
Related papers
- Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis [56.442307356162864]
We study the theoretical aspects of score-based discrete diffusion models under the Continuous Time Markov Chain (CTMC) framework.
We introduce a discrete-time sampling algorithm in the general state space $[S]^d$ that utilizes score estimators at predefined time points.
Our convergence analysis employs a Girsanov-based method and establishes key properties of the discrete score function.
arXiv Detail & Related papers (2024-10-03T09:07:13Z) - ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model class, namely "Denoising Diffusion Probabilistic Models" (DDPMs), for chirographic data.
Our model, named "ChiroDiff", being non-autoregressive, learns to capture holistic concepts and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z) - Information Theory Inspired Pattern Analysis for Time-series Data [60.86880787242563]
We propose a highly generalizable method that uses information theory-based features to identify and learn from patterns in time-series data.
For applications with state transitions, features are developed based on the Shannon entropy, entropy rate, and von Neumann entropy of Markov chains (a hedged illustrative sketch of these quantities appears after this list).
The results show the proposed information theory-based features improve the recall rate, F1 score, and accuracy on average by up to 23.01% compared with the baseline models.
arXiv Detail & Related papers (2023-02-22T21:09:35Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - Wasserstein multivariate auto-regressive models for modeling distributional time series [0.0]
We propose a new auto-regressive model for the statistical analysis of multivariate distributional time series.
Results on the existence, uniqueness and stationarity of the solution of such a model are provided.
To shed some light on the benefits of our approach for real data analysis, we also apply this methodology to a data set of age distributions observed in different countries.
arXiv Detail & Related papers (2022-07-12T10:18:36Z) - Learning to Reconstruct Missing Data from Spatiotemporal Graphs with Sparse Observations [11.486068333583216]
This paper tackles the problem of learning effective models to reconstruct missing data points.
We propose a class of attention-based architectures that, given a set of highly sparse observations, learn a representation for points in time and space.
Compared to the state of the art, our model handles sparse data without propagating prediction errors or requiring a bidirectional model to encode forward and backward time dependencies.
arXiv Detail & Related papers (2022-05-26T16:40:48Z) - Markov Chain Monte Carlo for Continuous-Time Switching Dynamical Systems [26.744964200606784]
We propose a novel inference algorithm utilizing a Markov Chain Monte Carlo approach.
The presented Gibbs sampler allows one to efficiently obtain samples from the exact continuous-time posterior processes.
arXiv Detail & Related papers (2022-05-18T09:03:00Z) - Scalable Intervention Target Estimation in Linear Models [52.60799340056917]
Current approaches to causal structure learning either work with known intervention targets or use hypothesis testing to discover the unknown intervention targets.
This paper proposes a scalable and efficient algorithm that consistently identifies all intervention targets.
The proposed algorithm can also be used to update a given observational Markov equivalence class into the interventional Markov equivalence class.
arXiv Detail & Related papers (2021-11-15T03:16:56Z) - Comparative Analysis of the Hidden Markov Model and LSTM: A Simulative Approach [0.0]
We show that a hidden Markov model can still be an effective method to process sequence data even when the first-order Markov assumption is not satisfied.
Our results indicate that even an unsupervised hidden Markov model can outperform an LSTM when a massive amount of labeled data is not available.
arXiv Detail & Related papers (2020-08-09T22:13:10Z) - Learned Factor Graphs for Inference from Stationary Time Sequences [107.63351413549992]
We propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences.
Neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence.
We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data.
arXiv Detail & Related papers (2020-06-05T07:06:19Z)
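As a companion to the information-theoretic features mentioned in the pattern-analysis entry above, the sketch below computes three entropy quantities for a Markov chain with transition matrix P: the Shannon entropy of its stationary distribution, its entropy rate, and a von Neumann entropy of a density matrix derived from the chain. The density-matrix construction used here (symmetrize P and normalize its trace) is only one illustrative choice and an assumption, not necessarily the definition used in the cited paper.

```python
import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an ergodic transition matrix P (rows sum to 1)."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    pi = np.abs(pi)
    return pi / pi.sum()

def shannon_entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def entropy_rate(P):
    """Entropy rate of the chain: H = -sum_i pi_i sum_j P_ij log2 P_ij."""
    pi = stationary_distribution(P)
    row_entropies = np.array([shannon_entropy(P[i]) for i in range(P.shape[0])])
    return float(np.dot(pi, row_entropies))

def von_neumann_entropy(P):
    """Von Neumann entropy of a density matrix derived from P.
    NOTE: symmetrizing and trace-normalizing P is an illustrative construction,
    not necessarily the one used in the cited paper."""
    rho = (P + P.T) / 2.0
    rho = rho / np.trace(rho)
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]
    return float(-np.sum(eigvals * np.log2(eigvals)))

# Toy two-state chain.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print("stationary entropy :", shannon_entropy(stationary_distribution(P)))
print("entropy rate       :", entropy_rate(P))
print("von Neumann entropy:", von_neumann_entropy(P))
```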
This list is automatically generated from the titles and abstracts of the papers on this site.