Pyramidal Hidden Markov Model For Multivariate Time Series Forecasting
- URL: http://arxiv.org/abs/2310.14341v2
- Date: Tue, 27 Feb 2024 13:10:07 GMT
- Title: Pyramidal Hidden Markov Model For Multivariate Time Series Forecasting
- Authors: YeXin Huang
- Abstract summary: The Hidden Markov Model (HMM) can predict the future value of a time series based on its current and previous values.
We propose a Pyramidal Hidden Markov Model (PHMM) that can capture multiple multistep states.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Hidden Markov Model (HMM) can predict the future value of a time series
based on its current and previous values, making it a powerful algorithm for
handling various types of time series. Numerous studies have explored the
improvement of HMM using advanced techniques, leading to the development of
several variations of HMM. Despite these studies indicating the increased
competitiveness of HMM compared to other advanced algorithms, few have
recognized the significance and impact of incorporating multistep stochastic
states into its performance. In this work, we propose a Pyramidal Hidden Markov
Model (PHMM) that can capture multiple multistep stochastic states. Initially,
a multistep HMM is designed for extracting short multistep stochastic states.
Next, a novel time series forecasting structure is proposed based on PHMM,
which utilizes pyramid-like stacking to adaptively identify long multistep
stochastic states. By employing these two schemes, our model can effectively
handle non-stationary and noisy data, while also establishing long-term
dependencies for more accurate and comprehensive forecasting. The experimental
results on diverse multivariate time series datasets convincingly demonstrate
the superior performance of our proposed PHMM compared to its competitive peers
in time series forecasting.
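As a rough illustration of the pyramid-like stacking described in the abstract, the sketch below fits one HMM per level and feeds pooled posterior state probabilities to the next, coarser level. It uses hmmlearn's GaussianHMM as a stand-in for the paper's multistep HMM; the pooling step and all hyperparameters are assumptions, not the authors' implementation.

```python
# A minimal sketch of PHMM's pyramid-like stacking, NOT the authors' code.
# Assumptions: hmmlearn's GaussianHMM stands in for the paper's multistep HMM,
# and average pooling of posterior state probabilities stands in for the
# aggregation between pyramid levels.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_pyramid(series, n_levels=3, n_states=4, step=2, seed=0):
    """Fit one HMM per pyramid level on progressively coarser features.

    series: (T, D) multivariate time series.
    Returns the fitted HMMs, from finest to coarsest level.
    """
    x, models = series, []
    for _ in range(n_levels):
        hmm = GaussianHMM(n_components=n_states, covariance_type="diag",
                          n_iter=50, random_state=seed)
        hmm.fit(x)
        models.append(hmm)
        # Posterior state probabilities become the next level's observations.
        post = hmm.predict_proba(x)                      # (T, n_states)
        t = (len(post) // step) * step
        # Average over non-overlapping windows: one coarse "multistep"
        # summary per `step` original time steps.
        x = post[:t].reshape(-1, step, n_states).mean(axis=1)
    return models

rng = np.random.default_rng(0)
models = fit_pyramid(rng.standard_normal((200, 3)))
print([m.n_components for m in models])
```

Each level thus sees a series half as long as the one below it, which is how the stack can pick up progressively longer multistep states.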
Related papers
- xLSTM-Mixer: Multivariate Time Series Forecasting by Mixing via Scalar Memories [20.773694998061707]
Time series data is prevalent across numerous fields, necessitating the development of robust and accurate forecasting models.
We introduce xLSTM-Mixer, a model designed to effectively integrate temporal sequences, joint time-variable information, and multiple perspectives for robust forecasting.
Our evaluations demonstrate xLSTM-Mixer's superior long-term forecasting performance compared to recent state-of-the-art methods.
arXiv Detail & Related papers (2024-10-22T11:59:36Z)
- Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts [103.725112190618]
This paper introduces Moirai-MoE, which uses a single input/output projection layer while delegating the modeling of diverse time series patterns to a sparse mixture of experts.
Extensive experiments on 39 datasets demonstrate the superiority of Moirai-MoE over existing foundation models in both in-distribution and zero-shot scenarios.
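The summary above centers on sparse expert routing. The generic top-k mixture-of-experts layer below illustrates that routing idea only; the layer sizes, top-k rule, and dense dispatch loop are assumptions and do not reflect Moirai-MoE's actual architecture.

```python
# Generic top-k sparse mixture-of-experts routing, for illustration only;
# sizes and top-k=2 are assumptions, not Moirai-MoE's actual design.
import torch
import torch.nn as nn

class SparseMoE(nn.Module):
    def __init__(self, dim, n_experts=8, k=2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                           nn.Linear(4 * dim, dim)) for _ in range(n_experts)])
        self.k = k

    def forward(self, x):                     # x: (batch, tokens, dim)
        logits = self.router(x)               # (batch, tokens, n_experts)
        weights, idx = logits.topk(self.k, dim=-1)
        weights = weights.softmax(dim=-1)
        out = torch.zeros_like(x)
        # Dense loop over experts for clarity; real systems dispatch sparsely.
        for e, expert in enumerate(self.experts):
            mask = (idx == e)                               # (B, T, k)
            if mask.any():
                w = (weights * mask).sum(-1, keepdim=True)  # (B, T, 1)
                out = out + w * expert(x)
        return out

moe = SparseMoE(dim=32)
print(moe(torch.randn(2, 16, 32)).shape)      # torch.Size([2, 16, 32])
```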
arXiv Detail & Related papers (2024-10-14T13:01:11Z)
- MGCP: A Multi-Grained Correlation based Prediction Network for Multivariate Time Series [54.91026286579748]
We propose a Multi-Grained Correlations-based Prediction Network.
It simultaneously considers correlations at three levels to enhance prediction performance.
It employs adversarial training with an attention mechanism-based predictor and a conditional discriminator to optimize prediction results at the coarse-grained level.
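To make the adversarial scheme concrete, here is a minimal conditional-adversarial forecasting loop: a discriminator learns to separate real futures from predicted ones given the history, and the predictor adds that signal to its regression loss. The plain MLPs below are placeholders, not MGCP's attention-based predictor or conditional discriminator.

```python
# Minimal conditional-adversarial forecasting loop, for illustration; the
# networks are placeholders, not MGCP's architecture.
import torch
import torch.nn as nn

dim, horizon = 8, 12
predictor = nn.Sequential(nn.Linear(dim * 24, 64), nn.ReLU(),
                          nn.Linear(64, dim * horizon))
# The discriminator is conditioned on the history by concatenating it
# with the (real or predicted) future window.
disc = nn.Sequential(nn.Linear(dim * (24 + horizon), 64), nn.ReLU(),
                     nn.Linear(64, 1))
opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

history = torch.randn(32, dim * 24)          # flattened past window
future = torch.randn(32, dim * horizon)      # flattened true future

for _ in range(100):
    fake = predictor(history)
    # Discriminator: tell real futures from predicted ones, given history.
    d_loss = bce(disc(torch.cat([history, future], -1)), torch.ones(32, 1)) + \
             bce(disc(torch.cat([history, fake.detach()], -1)), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Predictor: MSE to the target plus an adversarial term.
    p_loss = nn.functional.mse_loss(fake, future) + \
             0.1 * bce(disc(torch.cat([history, fake], -1)), torch.ones(32, 1))
    opt_p.zero_grad(); p_loss.backward(); opt_p.step()
```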
arXiv Detail & Related papers (2024-05-30T03:32:44Z)
- TimeMixer: Decomposable Multiscale Mixing for Time Series Forecasting [19.88184356154215]
Time series forecasting is widely used in applications, such as traffic planning and weather forecasting.
TimeMixer is able to achieve consistent state-of-the-art performances in both long-term and short-term forecasting tasks.
arXiv Detail & Related papers (2024-05-23T14:27:07Z)
- PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the perspective of partial differential equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse temporal real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z)
- A Multi-Scale Decomposition MLP-Mixer for Time Series Analysis [14.40202378972828]
We propose MSD-Mixer, a Multi-Scale Decomposition-Mixer, which learns to explicitly decompose and represent the input time series in its different layers.
We demonstrate that MSD-Mixer consistently and significantly outperforms other state-of-the-art algorithms with better efficiency.
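A simple way to picture explicit multi-scale decomposition is a cascade of moving averages, where each scale models what finer scales left unexplained. The sketch below uses fixed pooling windows; MSD-Mixer learns its decomposition, so this is an illustration of the idea only.

```python
# A minimal sketch of multi-scale decomposition (trend/residual splits at
# several window sizes); the fixed pooling scheme is an assumption, not
# MSD-Mixer's learned decomposition.
import numpy as np

def multiscale_decompose(x, windows=(4, 8, 16)):
    """Split a (T, D) series into per-scale components plus a final residual.

    Each component is the moving average at that window size, computed on
    what the previous (finer) scales have not yet explained.
    """
    residual, components = x.astype(float), []
    for w in windows:
        kernel = np.ones(w) / w
        trend = np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 0, residual)
        components.append(trend)
        residual = residual - trend
    return components, residual  # sum(components) + residual == x

x = np.cumsum(np.random.default_rng(0).standard_normal((128, 3)), axis=0)
comps, res = multiscale_decompose(x)
print(len(comps), res.shape)  # 3 (128, 3)
```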
arXiv Detail & Related papers (2023-10-18T13:39:07Z)
- The Capacity and Robustness Trade-off: Revisiting the Channel Independent Strategy for Multivariate Time Series Forecasting [50.48888534815361]
We show that models trained with the Channel Independent (CI) strategy outperform those trained with the Channel Dependent (CD) strategy.
Our results indicate that the CD approach has higher capacity but often lacks the robustness to accurately predict distributionally drifted time series.
We propose a modified CD method called Predict Residuals with Regularization (PRReg) that can surpass the CI strategy.
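The CI/CD distinction reduces to what each model is allowed to see. The sketch below contrasts the two with plain linear forecasters (placeholders chosen for brevity); the PRReg variant is not shown.

```python
# Sketch of Channel Independent (CI) vs. Channel Dependent (CD) training
# with plain linear forecasters; the models are placeholders only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 96, 7))   # (samples, lookback, channels)
Y = rng.standard_normal((256, 24, 7))   # (samples, horizon, channels)

# CD: one model maps all channels jointly to all channels.
cd = LinearRegression().fit(X.reshape(256, -1), Y.reshape(256, -1))

# CI: one model per channel, trained only on that channel's own history.
ci = [LinearRegression().fit(X[:, :, c], Y[:, :, c]) for c in range(7)]
```

The CD model can exploit cross-channel correlations (higher capacity), while each CI model is insulated from drift in the other channels (higher robustness), which is the trade-off the paper revisits.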
arXiv Detail & Related papers (2023-04-11T13:15:33Z)
- Ti-MAE: Self-Supervised Masked Time Series Autoencoders [16.98069693152999]
We propose a novel framework named Ti-MAE, in which the input time series are assumed to follow an integrated distribution.
Ti-MAE randomly masks out embedded time series data and learns an autoencoder to reconstruct them at the point-level.
Experiments on several public real-world datasets demonstrate that our framework of masked autoencoding could learn strong representations directly from the raw data.
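The masked-autoencoding recipe is compact enough to sketch directly: randomly mask time points, reconstruct them, and compute the point-level loss only on masked positions. The mask ratio, layer sizes, and masking of raw rather than embedded points below are simplifications, not Ti-MAE's configuration.

```python
# Minimal masked-autoencoding step for time series; hyperparameters are
# assumptions, not Ti-MAE's configuration.
import torch
import torch.nn as nn

T, D, hidden = 96, 7, 64
encoder = nn.Sequential(nn.Linear(D, hidden), nn.ReLU())
decoder = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                        nn.Linear(hidden, D))

x = torch.randn(32, T, D)                       # (batch, time, channels)
mask = torch.rand(32, T, 1) < 0.75              # mask 75% of time points
x_masked = x.masked_fill(mask, 0.0)             # zero out masked points

recon = decoder(encoder(x_masked))
# Point-level reconstruction loss, computed only on the masked positions.
loss = ((recon - x) ** 2 * mask).sum() / mask.sum()
loss.backward()
```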
arXiv Detail & Related papers (2023-01-21T03:20:23Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series data are often recorded over a short period, leaving a large gap between what deep models require and the limited, noisy series available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Fuzzy Cognitive Maps and Hidden Markov Models: Comparative Analysis of Efficiency within the Confines of the Time Series Classification Task [0.0]
We explore the application of the Hidden Markov Model (HMM) to time series classification.
Four models are identified and studied in a series of experiments: HMM NN (one HMM per series), HMM 1C (one HMM per class), FCM NN, and FCM 1C.
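For the HMM 1C setup (one HMM per class), classification typically works by fitting a model on each class's series and picking the class whose model assigns the highest log-likelihood, as in the sketch below; the hyperparameters are assumptions, not those used in the paper's experiments.

```python
# Sketch of the "HMM 1C" setup: fit one GaussianHMM per class and classify
# by highest log-likelihood. Hyperparameters are assumptions.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_per_class(train, n_states=3):
    """train: dict {label: list of (T_i, D) series}. Returns {label: HMM}."""
    models = {}
    for label, series_list in train.items():
        X = np.concatenate(series_list)          # stack series for hmmlearn
        lengths = [len(s) for s in series_list]  # per-series segment lengths
        m = GaussianHMM(n_components=n_states, n_iter=30, random_state=0)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, series):
    # Pick the class whose HMM scores the series highest.
    return max(models, key=lambda lbl: models[lbl].score(series))

rng = np.random.default_rng(0)
train = {0: [rng.standard_normal((50, 2)) for _ in range(5)],
         1: [rng.standard_normal((50, 2)) + 2.0 for _ in range(5)]}
models = fit_per_class(train)
print(classify(models, rng.standard_normal((50, 2)) + 2.0))  # likely 1
```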
arXiv Detail & Related papers (2022-04-28T12:41:05Z)