Scalable Dynamic Mixture Model with Full Covariance for Probabilistic
Traffic Forecasting
- URL: http://arxiv.org/abs/2212.06653v3
- Date: Sat, 19 Aug 2023 23:12:05 GMT
- Title: Scalable Dynamic Mixture Model with Full Covariance for Probabilistic
Traffic Forecasting
- Authors: Seongjin Choi, Nicolas Saunier, Vincent Zhihao Zheng, Martin
Trepanier, Lijun Sun
- Abstract summary: We propose a dynamic mixture of zero-mean Gaussian distributions for the time-varying error process.
The proposed method can be seamlessly integrated into existing deep-learning frameworks with only a few additional parameters to be learned.
We evaluate the proposed method on a traffic speed forecasting task and find that our method not only improves model performance but also provides interpretable spatiotemporal correlation structures.
- Score: 16.04029885574568
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning-based multivariate and multistep-ahead traffic forecasting
models are typically trained with the mean squared error (MSE) or mean absolute
error (MAE) as the loss function in a sequence-to-sequence setting, simply
assuming that the errors follow an independent and isotropic Gaussian or
Laplacian distribution. However, such assumptions are often unrealistic for
real-world traffic forecasting tasks, where the probabilistic distribution of
spatiotemporal forecasting is very complex with strong concurrent correlations
across both sensors and forecasting horizons in a time-varying manner. In this
paper, we model the time-varying distribution for the matrix-variate error
process as a dynamic mixture of zero-mean Gaussian distributions. To achieve
efficiency, flexibility, and scalability, we parameterize each mixture
component using a matrix normal distribution and allow the mixture weight to
change and be predictable over time. The proposed method can be seamlessly
integrated into existing deep-learning frameworks with only a few additional
parameters to be learned. We evaluate the performance of the proposed method on
a traffic speed forecasting task and find that our method not only improves
model performance but also provides interpretable spatiotemporal correlation
structures.
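The error model described in this abstract lends itself to a compact loss implementation. Below is a minimal sketch, not the authors' released code: K zero-mean matrix normal components over the N x Q residual matrix, with time-varying mixture weights predicted from a context vector. All class, parameter, and argument names are illustrative assumptions.
```python
import math
import torch
import torch.nn as nn

class DynamicMatrixNormalMixtureNLL(nn.Module):
    """Sketch of the loss described above: the forecast error matrix E_t
    (N sensors x Q horizons) is modeled as a dynamic mixture of K zero-mean
    matrix normal components MN(0, U_k, V_k), with mixture weights predicted
    from a context vector h_t. Names are illustrative, not the paper's API."""

    def __init__(self, n_sensors, n_horizons, n_components, hidden_dim):
        super().__init__()
        N, Q, K = n_sensors, n_horizons, n_components
        # Unconstrained Cholesky-like factors of the sensor (row) and
        # horizon (column) covariances of each mixture component.
        self.L_u = nn.Parameter(torch.eye(N).repeat(K, 1, 1) + 0.01 * torch.randn(K, N, N))
        self.L_v = nn.Parameter(torch.eye(Q).repeat(K, 1, 1) + 0.01 * torch.randn(K, Q, Q))
        # Small head producing time-varying mixture logits: these are the
        # "few additional parameters" on top of the base forecasting model.
        self.weight_head = nn.Linear(hidden_dim, K)

    def component_log_prob(self, E):
        """E: (B, N, Q) residual matrices -> (B, K) per-component log densities."""
        B, N, Q = E.shape
        K = self.L_u.shape[0]
        L_u = torch.tril(self.L_u)                               # (K, N, N)
        L_v = torch.tril(self.L_v)                               # (K, Q, Q)
        # log|U| = 2 * sum(log diag(L_u)), and likewise for V.
        logdet_u = 2.0 * torch.diagonal(L_u, dim1=-2, dim2=-1).abs().log().sum(-1)
        logdet_v = 2.0 * torch.diagonal(L_v, dim1=-2, dim2=-1).abs().log().sum(-1)
        # tr(V^{-1} E^T U^{-1} E) = ||L_u^{-1} E L_v^{-T}||_F^2, computed with
        # two triangular solves instead of explicit matrix inverses.
        Eb = E.unsqueeze(1).expand(B, K, N, Q)
        M = torch.linalg.solve_triangular(L_u.expand(B, K, N, N), Eb, upper=False)
        M = torch.linalg.solve_triangular(L_v.expand(B, K, Q, Q), M.transpose(-1, -2), upper=False)
        quad = M.pow(2).sum(dim=(-1, -2))                        # (B, K)
        const = N * Q * math.log(2.0 * math.pi)
        return -0.5 * (const + Q * logdet_u + N * logdet_v + quad)

    def forward(self, E, h):
        """E: (B, N, Q) residuals, h: (B, hidden_dim) context -> scalar NLL."""
        log_w = torch.log_softmax(self.weight_head(h), dim=-1)   # (B, K)
        return -torch.logsumexp(log_w + self.component_log_prob(E), dim=-1).mean()
```
In training, E would be the residual Y - Y_hat from the base sequence-to-sequence model, so this negative log-likelihood takes the place of the usual MSE/MAE loss.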
Related papers
- Marginalization Consistent Mixture of Separable Flows for Probabilistic Irregular Time Series Forecasting [4.714246221974192]
We develop a novel probabilistic irregular time series forecasting model, Marginalization Consistent Mixtures of Separable Flows (moses).
moses outperforms other state-of-the-art marginalization-consistent models and performs on par with ProFITi but, unlike ProFITi, guarantees marginalization consistency.
arXiv Detail & Related papers (2024-06-11T13:28:43Z)
- Multivariate Probabilistic Time Series Forecasting with Correlated Errors [17.212396544233307]
We present a plug-and-play method that learns the covariance structure of errors over multiple steps for autoregressive models with Gaussian-distributed errors.
The learned covariance matrix can be used to calibrate predictions based on observed residuals.
arXiv Detail & Related papers (2024-02-01T20:27:19Z)
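The calibration step in the entry above can be pictured with standard Gaussian conditioning: if the H-step errors are jointly Gaussian with a learned covariance, residuals already observed on the early steps shift the mean and shrink the covariance of the later steps. A minimal sketch under that assumption (names hypothetical, not the paper's code):
```python
import numpy as np

def calibrate_forecast(mean, sigma, observed_residuals):
    """Calibrate the remaining steps of an H-step Gaussian forecast, given
    residuals observed on the first k steps and a learned H x H error
    covariance `sigma` (hypothetical interface)."""
    k = len(observed_residuals)                 # steps already observed
    S11 = sigma[:k, :k]                         # covariance of observed errors
    S21 = sigma[k:, :k]                         # cross-covariance future/observed
    S22 = sigma[k:, k:]                         # covariance of future errors
    # Conditional mean shift: E[e_future | e_obs] = S21 S11^{-1} e_obs
    calibrated_mean = mean[k:] + S21 @ np.linalg.solve(S11, observed_residuals)
    # Conditional covariance: S22 - S21 S11^{-1} S12
    calibrated_cov = S22 - S21 @ np.linalg.solve(S11, S21.T)
    return calibrated_mean, calibrated_cov
```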
- When Rigidity Hurts: Soft Consistency Regularization for Probabilistic Hierarchical Time Series Forecasting [69.30930115236228]
Probabilistic hierarchical time-series forecasting is an important variant of time-series forecasting.
Most methods focus on point predictions and do not provide well-calibrated probabilistic forecast distributions.
We propose PROFHiT, a fully probabilistic hierarchical forecasting model that jointly models the forecast distribution of the entire hierarchy.
arXiv Detail & Related papers (2023-10-17T20:30:16Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Robust scalable initialization for Bayesian variational inference with multi-modal Laplace approximations [0.0]
Variational mixtures with full-covariance structures suffer from quadratic growth in the number of variational parameters as the number of model parameters increases.
We propose a method for constructing an initial Gaussian model approximation that can be used to warm-start variational inference.
arXiv Detail & Related papers (2023-07-12T19:30:04Z)
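The quadratic growth noted in the entry above is simple arithmetic: a d-dimensional Gaussian with a full covariance carries d mean parameters plus d(d+1)/2 covariance parameters, so the count scales as O(d^2). A quick illustration:
```python
def full_cov_gaussian_param_count(d: int) -> int:
    # d mean entries plus d*(d+1)/2 entries of a symmetric covariance
    return d + d * (d + 1) // 2

for d in (10, 100, 1000):
    print(d, full_cov_gaussian_param_count(d))  # 65, 5150, 501500
```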
- Online machine-learning forecast uncertainty estimation for sequential data assimilation [0.0]
Quantifying forecast uncertainty is a key aspect of state-of-the-art numerical weather prediction and data assimilation systems.
In this work a machine learning method is presented based on convolutional neural networks that estimates the state-dependent forecast uncertainty.
The hybrid data assimilation method shows performance similar to the ensemble Kalman filter, outperforming it when the ensembles are relatively small.
arXiv Detail & Related papers (2023-05-12T19:23:21Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
Estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Variational Mixture of Normalizing Flows [0.0]
Deep generative models, such as generative adversarial networks, variational autoencoders, and their variants, have seen wide adoption for the task of modelling complex data distributions.
Normalizing flows have overcome this limitation by leveraging the change-of-variables formula for probability density functions.
The present work overcomes this by using normalizing flows as components in a mixture model and devising an end-to-end training procedure for such a model.
arXiv Detail & Related papers (2020-09-01T17:20:08Z)
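The change-of-variables formula that the last entry relies on, and its use inside a mixture, can be sketched as follows; the flow(x) -> (z, log_det_jacobian) interface is an assumed convention here, not a specific library's API:
```python
import torch

def mixture_of_flows_log_prob(x, flows, base_dist, mixture_logits):
    """Minimal sketch of a mixture of normalizing flows. Each component's
    density comes from the change-of-variables formula
        log p_k(x) = log p_Z(f_k(x)) + log |det J_{f_k}(x)|,
    and the mixture combines components with softmax weights."""
    log_w = torch.log_softmax(mixture_logits, dim=-1)       # (K,)
    comp = []
    for flow in flows:
        z, log_det = flow(x)                                # assumed interface
        comp.append(base_dist.log_prob(z) + log_det)        # (B,)
    comp = torch.stack(comp, dim=-1)                        # (B, K)
    return torch.logsumexp(log_w + comp, dim=-1)            # (B,)
```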
This list is automatically generated from the titles and abstracts of the papers on this site.