The Bayesian Context Trees State Space Model for time series modelling
and forecasting
- URL: http://arxiv.org/abs/2308.00913v1
- Date: Wed, 2 Aug 2023 02:40:42 GMT
- Title: The Bayesian Context Trees State Space Model for time series modelling
and forecasting
- Authors: Ioannis Papageorgiou, Ioannis Kontoyiannis
- Abstract summary: A hierarchical Bayesian framework is introduced for developing rich mixture models for real-valued time series.
At the top level, meaningful discrete states are identified as appropriately quantised values of some of the most recent samples.
At the bottom level, a different, arbitrary model for real-valued time series - a base model - is associated with each state.
- Score: 8.37609145576126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A hierarchical Bayesian framework is introduced for developing rich mixture
models for real-valued time series, along with a collection of effective tools
for learning and inference. At the top level, meaningful discrete states are
identified as appropriately quantised values of some of the most recent
samples. This collection of observable states is described as a discrete
context-tree model. Then, at the bottom level, a different, arbitrary model for
real-valued time series - a base model - is associated with each state. This
defines a very general framework that can be used in conjunction with any
existing model class to build flexible and interpretable mixture models. We
call this the Bayesian Context Trees State Space Model, or the BCT-X framework.
Efficient algorithms are introduced that allow for effective, exact Bayesian
inference; in particular, the maximum a posteriori probability (MAP)
context-tree model can be identified. These algorithms can be updated
sequentially, facilitating efficient online forecasting. The utility of the
general framework is illustrated in two particular instances: When
autoregressive (AR) models are used as base models, resulting in a nonlinear AR
mixture model, and when conditional heteroscedastic (ARCH) models are used,
resulting in a mixture model that offers a powerful and systematic way of
modelling the well-known volatility asymmetries in financial data. In
forecasting, the BCT-X methods are found to outperform state-of-the-art
techniques on simulated and real-world data, both in terms of accuracy and
computational requirements. In modelling, the BCT-X framework identifies natural
structure present in the data. In particular, the BCT-ARCH model reveals a
novel, important feature of stock market index data, in the form of an enhanced
leverage effect.
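The abstract describes the two-level structure in words; the sketch below is a minimal, illustrative Python/NumPy rendering of the BCT-AR instance only, not the authors' implementation. The binary quantisation threshold, context depth, AR order, and per-context least-squares fit are all assumptions made for illustration; the paper instead performs exact Bayesian inference over context trees (including MAP tree identification) with sequential updates.

```python
# Illustrative sketch of the BCT-AR idea (assumptions, not the paper's code):
# the most recent samples are quantised into a discrete context (the "state"),
# and a separate AR base model is attached to each observed context.
import numpy as np
from collections import defaultdict

def quantise(x, thresholds=(0.0,)):
    """Map a real value to a discrete symbol via fixed thresholds (binary here by assumption)."""
    return int(np.searchsorted(thresholds, x))

def contexts(series, depth=2):
    """Discrete context at each time t: quantised values of the `depth` most recent samples."""
    return [
        tuple(quantise(series[t - d]) for d in range(1, depth + 1))
        for t in range(depth, len(series))
    ]

def fit_bct_ar(series, depth=2, ar_order=2):
    """Fit one least-squares AR(ar_order) model per context -- a crude stand-in for the
    exact Bayesian inference over context trees described in the paper."""
    grouped = defaultdict(list)
    for t, ctx in zip(range(depth, len(series)), contexts(series, depth)):
        if t >= ar_order:
            grouped[ctx].append(t)
    models = {}
    for ctx, idx in grouped.items():
        X = np.array([series[t - ar_order:t][::-1] for t in idx])  # most-recent-first lags
        y = np.array([series[t] for t in idx])
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        models[ctx] = coeffs
    return models

def forecast(series, models, depth=2, ar_order=2):
    """One-step-ahead forecast: use the AR model indexed by the current context."""
    ctx = tuple(quantise(series[-d]) for d in range(1, depth + 1))
    coeffs = models.get(ctx)
    if coeffs is None:
        return float(np.mean(series))  # fall back if this context was never observed
    recent = np.array(series[-ar_order:][::-1])
    return float(recent @ coeffs)

# Usage on a toy series.
rng = np.random.default_rng(0)
toy = list(rng.standard_normal(500))
print(forecast(toy, fit_bct_ar(toy)))
```

In the sketch the context "tree" is fixed at full depth; the contribution of the paper is precisely that the tree structure itself is inferred (exactly and sequentially) under a Bayesian prior, and that any base model class, e.g. ARCH for volatility modelling, can be plugged in per state.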
Related papers
- Supervised Score-Based Modeling by Gradient Boosting [49.556736252628745]
We propose a Supervised Score-based Model (SSM), which can be viewed as a gradient boosting algorithm combined with score matching.
We provide a theoretical analysis of learning and sampling for SSM to balance inference time and prediction accuracy.
Our model outperforms existing models in both accuracy and inference time.
arXiv Detail & Related papers (2024-11-02T07:06:53Z) - Leveraging Model-based Trees as Interpretable Surrogate Models for Model
Distillation [3.5437916561263694]
Surrogate models play a crucial role in retrospectively interpreting complex and powerful black box machine learning models.
This paper focuses on using model-based trees as surrogate models which partition the feature space into interpretable regions via decision rules.
Four model-based tree algorithms, namely SLIM, GUIDE, MOB, and CTree, are compared regarding their ability to generate such surrogate models.
arXiv Detail & Related papers (2023-10-04T19:06:52Z) - ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model-class namely "Denoising Diffusion Probabilistic Models" or DDPMs for chirographic data.
Our model, named "ChiroDiff", is non-autoregressive, learns to capture holistic concepts, and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z) - Deep incremental learning models for financial temporal tabular datasets
with distribution shifts [0.9790236766474201]
The framework uses a simple basic building block (decision trees) to build self-similar models of any required complexity.
We demonstrate our scheme using XGBoost models trained on the Numerai dataset and show that a two layer deep ensemble of XGBoost models over different model snapshots delivers high quality predictions.
arXiv Detail & Related papers (2023-03-14T14:10:37Z) - Low-Rank Constraints for Fast Inference in Structured Models [110.38427965904266]
This work demonstrates a simple approach to reduce the computational and memory complexity of a large class of structured models.
Experiments with neural parameterized structured models for language modeling, polyphonic music modeling, unsupervised grammar induction, and video modeling show that our approach matches the accuracy of standard models at large state spaces.
arXiv Detail & Related papers (2022-01-08T00:47:50Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - Improving Label Quality by Jointly Modeling Items and Annotators [68.8204255655161]
We propose a fully Bayesian framework for learning ground truth labels from noisy annotators.
Our framework ensures scalability by factoring a generative, Bayesian soft clustering model over label distributions into the classic Dawid and Skene joint annotator-data model.
arXiv Detail & Related papers (2021-06-20T02:15:20Z) - Context-tree weighting for real-valued time series: Bayesian inference
with hierarchical mixture models [8.37609145576126]
A general, hierarchical Bayesian modelling framework is developed for building mixture models for time series.
This development is based, in part, on the use of context trees, and it includes a collection of effective algorithmic tools for learning and inference.
The utility of the general framework is illustrated in detail when autoregressive (AR) models are used at the bottom level, resulting in a nonlinear AR mixture model.
arXiv Detail & Related papers (2021-06-06T03:46:49Z) - Anomaly Detection of Time Series with Smoothness-Inducing Sequential
Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv Detail & Related papers (2021-02-02T06:15:15Z) - Model Embedding Model-Based Reinforcement Learning [4.566180616886624]
Model-based reinforcement learning (MBRL) has shown its advantages in sample efficiency over model-free reinforcement learning (MFRL).
Despite the impressive results it achieves, it still faces a trade-off between the ease of data generation and model bias.
We propose a simple and elegant model-embedding model-based reinforcement learning (MEMB) algorithm in the framework of probabilistic reinforcement learning.
arXiv Detail & Related papers (2020-06-16T15:10:28Z) - Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.