pTSE: A Multi-model Ensemble Method for Probabilistic Time Series
Forecasting
- URL: http://arxiv.org/abs/2305.11304v2
- Date: Thu, 31 Aug 2023 02:10:40 GMT
- Title: pTSE: A Multi-model Ensemble Method for Probabilistic Time Series
Forecasting
- Authors: Yunyi Zhou, Zhixuan Chu, Yijia Ruan, Ge Jin, Yuchen Huang, Sheng Li
- Abstract summary: pTSE is a multi-model distribution ensemble method for probabilistic forecasting based on a Hidden Markov Model (HMM).
We provide a complete theoretical analysis of pTSE to prove that the empirical distribution of time series subject to an HMM will converge to the stationary distribution almost surely.
Experiments on benchmarks show the superiority of pTSE over all member models and competitive ensemble methods.
- Score: 10.441994923253596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various probabilistic time series forecasting models have sprung up and shown
remarkably good performance. However, the choice of model highly relies on the
characteristics of the input time series and the fixed distribution that the
model is based on. Due to the fact that the probability distributions cannot be
averaged over different models straightforwardly, the current time series model
ensemble methods cannot be directly applied to improve the robustness and
accuracy of forecasting. To address this issue, we propose pTSE, a multi-model
distribution ensemble method for probabilistic forecasting based on Hidden
Markov Model (HMM). pTSE only takes off-the-shelf outputs from member models
without requiring further information about each model. Besides, we provide a
complete theoretical analysis of pTSE to prove that the empirical distribution
of time series subject to an HMM will converge to the stationary distribution
almost surely. Experiments on benchmarks show the superiority of pTSE over
all member models and competitive ensemble methods.
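To make the ensemble idea concrete, here is a loose sketch: an HMM whose hidden state indexes which member model currently "governs" the series, with the filtered state probabilities weighting a mixture of the member predictive distributions. The Gaussian predictive form, the transition matrix, and all variable names are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def forward_filter(lik, A, pi):
    """HMM forward filtering.
    lik: (T, M) likelihood of each observation under each member model,
    A: (M, M) state transition matrix, pi: (M,) initial state weights."""
    T, M = lik.shape
    alpha = np.zeros((T, M))
    a = pi * lik[0]
    alpha[0] = a / a.sum()
    for t in range(1, T):
        a = (alpha[t - 1] @ A) * lik[t]
        alpha[t] = a / a.sum()
    return alpha  # filtered P(state_t = m | y_1..t)

rng = np.random.default_rng(0)
T, M = 50, 3
y = np.sin(np.linspace(0, 6, T)) + 0.1 * rng.standard_normal(T)

# Member models emit Gaussian predictive distributions (mu, sigma) per step
# (synthetic here; in practice these are the members' off-the-shelf outputs).
mus = np.stack([y + 0.05 * rng.standard_normal(T) for _ in range(M)])
sigmas = np.full((M, T), 0.2)

# Likelihood of each realized value under each member's prediction.
lik = np.exp(-0.5 * ((y - mus) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
A = np.full((M, M), 0.1) + 0.7 * np.eye(M)   # sticky transitions (assumed)
w = forward_filter(lik.T, A, np.full(M, 1 / M))

# Ensemble predictive distribution at the final step: a mixture of the
# member Gaussians weighted by the filtered state probabilities.
mix_mean = float(w[-1] @ mus[:, -1])
```

Note that this only uses the members' predictive densities evaluated at observed values, matching the "off-the-shelf outputs, no further model information" property claimed above.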
Related papers
- Deep Non-Parametric Time Series Forecaster [19.800783133682955]
The proposed approach does not assume any parametric form for the predictive distribution and instead generates predictions by sampling from the empirical distribution according to a tunable strategy.
We develop a global version of the proposed method that automatically learns the sampling strategy by exploiting the information across multiple related time series.
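A minimal sketch of the non-parametric idea: forecasts are produced by resampling from the empirical distribution of past values under a tunable strategy. The recency-weighting scheme below is illustrative only, not the learned strategy described in the paper.

```python
import numpy as np

def empirical_forecast(history, horizon, rng, recency=0.05):
    # Sample forecast values from the empirical distribution of the
    # history, with exponentially more weight on recent observations
    # (a hypothetical stand-in for the paper's tunable strategy).
    t = np.arange(len(history))
    w = np.exp(recency * (t - t.max()))
    w /= w.sum()
    return rng.choice(history, size=horizon, p=w)
```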
arXiv Detail & Related papers (2023-12-22T12:46:30Z)
- Attention-Based Ensemble Pooling for Time Series Forecasting [55.2480439325792]
We propose a method for pooling that performs a weighted average over candidate model forecasts.
We test this method on two time-series forecasting problems: multi-step forecasting of the dynamics of the non-stationary Lorenz 63 equation, and one-step forecasting of the weekly incident deaths due to COVID-19.
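The pooling step itself reduces to a softmax-weighted average over candidate forecasts; a minimal sketch, with fixed logits standing in for the learned attention weights:

```python
import numpy as np

def attention_pool(forecasts, scores):
    # forecasts: (M, H) forecasts from M candidate models over an
    # H-step horizon; scores: (M,) attention logits (learned in the
    # paper, fixed here for illustration).
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return w @ forecasts                # (H,) pooled forecast
```

With equal scores this degenerates to the plain ensemble mean, so the attention weights can only help if the scoring mechanism is informative.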
arXiv Detail & Related papers (2023-10-24T22:59:56Z)
- When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting [69.30930115236228]
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting.
Most methods focus on point predictions and do not provide well-calibrated probabilistic forecast distributions.
We propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.
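The "soft consistency" in the title suggests penalizing, rather than hard-enforcing, hierarchy coherence. A hedged sketch of one such penalty, assuming a simple squared gap between a parent forecast mean and the sum of its children (the paper's actual regularizer may differ):

```python
import numpy as np

def soft_consistency_penalty(parent_mu, child_mus, lam=1.0):
    # Soft coherence: penalize the squared gap between the parent
    # forecast mean and the sum of the child forecast means, scaled
    # by a regularization weight lam (illustrative formulation).
    gap = parent_mu - np.sum(child_mus)
    return lam * gap ** 2
```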
arXiv Detail & Related papers (2023-10-17T20:30:16Z)
- Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting [10.491628898499684]
We propose TSDiff, an unconditionally-trained diffusion model for time series.
Our proposed self-guidance mechanism enables conditioning TSDiff for downstream tasks during inference, without requiring auxiliary networks or altering the training procedure.
We demonstrate the effectiveness of our method on three different time series tasks: forecasting, refinement, and synthetic data generation.
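Guidance at inference time typically means nudging each denoised sample toward the observed context by a gradient step on an observation loss. The sketch below is a generic illustration of that pattern, not TSDiff's actual self-guidance mechanism:

```python
import numpy as np

def guide(x, obs, mask, scale=0.1):
    # x: current sample over the full window; obs: observed values;
    # mask: 1 where observations exist, 0 elsewhere. Take one gradient
    # step on the observation loss 0.5 * ||mask * (x - obs)||^2, which
    # pulls the observed positions toward their known values.
    grad = mask * (x - obs)
    return x - scale * grad
```

In a diffusion sampler this correction would be interleaved with the (hypothetical) denoising steps; no auxiliary network is needed, which matches the claim above.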
arXiv Detail & Related papers (2023-07-21T10:56:36Z)
- Scalable Dynamic Mixture Model with Full Covariance for Probabilistic Traffic Forecasting [16.04029885574568]
We propose a dynamic mixture of zero-mean Gaussian distributions for the time-varying error process.
The proposed method can be seamlessly integrated into existing deep-learning frameworks with only a few additional parameters to be learned.
We evaluate the proposed method on a traffic speed forecasting task and find that it not only improves forecasting performance but also provides interpretable temporal correlation structures.
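Drawing one error vector from a dynamic mixture of zero-mean Gaussians is straightforward; a minimal sketch, assuming the time-varying mixture weights and full covariances are given (in the paper they are learned jointly with the forecasting network):

```python
import numpy as np

def mixture_error_sample(weights, covs, rng):
    # weights: (K,) mixture weights at the current time step;
    # covs: (K, D, D) full covariance matrices, one per component.
    # Pick a component, then draw a zero-mean Gaussian error vector.
    k = rng.choice(len(weights), p=weights)
    return rng.multivariate_normal(np.zeros(covs.shape[1]), covs[k])
```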
arXiv Detail & Related papers (2022-12-10T22:50:00Z)
- Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning [85.55727213502402]
We focus on improving the few-shot performance of prompt tuning by transferring knowledge from soft prompts of source tasks.
We propose Sample-specific Ensemble of Source Models (SESoM)
SESoM learns to adjust the contribution of each source model for each target sample separately when ensembling source model outputs.
arXiv Detail & Related papers (2022-10-23T01:33:16Z)
- Out of Distribution Detection, Generalization, and Robustness Triangle with Maximum Probability Theorem [2.0654955576087084]
MPT uses the probability distribution that the models assume over random variables to provide an upper bound on the probability of the model.
We apply MPT to challenging out-of-distribution (OOD) detection problems in computer vision by incorporating MPT as a regularization scheme in training of CNNs and their energy based variants.
arXiv Detail & Related papers (2022-03-23T02:42:08Z)
- Sampling from Arbitrary Functions via PSD Models [55.41644538483948]
We take a two-step approach by first modeling the probability distribution and then sampling from that model.
We show that these models can approximate a large class of densities concisely using few evaluations, and present a simple algorithm to effectively sample from these models.
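The two-step approach (model the density, then sample from the model) can be illustrated generically without the PSD machinery: approximate the target density on a grid, then sample by inverse-CDF lookup. This is a stand-in illustration, not the paper's algorithm:

```python
import numpy as np

def sample_from_density(f, lo, hi, n, rng, grid=1024):
    # Step 1: model the density on a discretized grid.
    x = np.linspace(lo, hi, grid)
    p = f(x)
    p = p / p.sum()
    # Step 2: sample via the inverse of the discretized CDF.
    cdf = np.cumsum(p)
    u = rng.random(n)
    idx = np.minimum(np.searchsorted(cdf, u), grid - 1)
    return x[idx]
```

Note that `f` only needs to be non-negative, not normalized, since the weights are renormalized before sampling.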
arXiv Detail & Related papers (2021-10-20T12:25:22Z)
- PSD Representations for Effective Probability Models [117.35298398434628]
We show that a recently proposed class of positive semi-definite (PSD) models for non-negative functions is particularly suited to this end.
We characterize both approximation and generalization capabilities of PSD models, showing that they enjoy strong theoretical guarantees.
Our results open the way to applications of PSD models to density estimation, decision theory and inference.
arXiv Detail & Related papers (2021-06-30T15:13:39Z)
- A Worrying Analysis of Probabilistic Time-series Models for Sales Forecasting [10.690379201437015]
Probabilistic time-series models have become popular in the forecasting field, as they help make optimal decisions under uncertainty.
We analyze the performance of three prominent probabilistic time-series models for sales forecasting.
arXiv Detail & Related papers (2020-11-21T03:31:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.