BayOTIDE: Bayesian Online Multivariate Time series Imputation with functional decomposition
- URL: http://arxiv.org/abs/2308.14906v3
- Date: Thu, 30 May 2024 18:50:32 GMT
- Title: BayOTIDE: Bayesian Online Multivariate Time series Imputation with functional decomposition
- Authors: Shikai Fang, Qingsong Wen, Yingtao Luo, Shandian Zhe, Liang Sun
- Abstract summary: In real-world scenarios like traffic and energy, massive time-series data with missing values and noise are widely observed, and are often sampled irregularly.
While many imputation methods have been proposed, most of them work with a local horizon.
Almost all methods assume the observations are sampled at regular time stamps, and fail to handle complex, irregularly sampled time series.
- Score: 31.096125530322933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world scenarios like traffic and energy, massive time-series data with missing values and noise are widely observed, and are often sampled irregularly. While many imputation methods have been proposed, most of them work with a local horizon, which means models are trained by splitting the long sequence into batches of fit-sized patches. This local horizon can make models ignore global trends or periodic patterns. More importantly, almost all methods assume the observations are sampled at regular time stamps, and fail to handle complex, irregularly sampled time series arising from different applications. Thirdly, most existing methods are learned in an offline manner, so they are not suitable for many applications with fast-arriving streaming data. To overcome these limitations, we propose BayOTIDE: Bayesian Online Multivariate Time series Imputation with functional decomposition. We treat the multivariate time series as a weighted combination of groups of low-rank temporal factors with different patterns. We apply a group of Gaussian Processes (GPs) with different kernels as functional priors to fit the factors. For computational efficiency, we further convert the GPs into a state-space prior by constructing an equivalent stochastic differential equation (SDE), and develop a scalable algorithm for online inference. The proposed method can not only handle imputation over arbitrary time stamps, but also offer uncertainty quantification and interpretability for downstream applications. We evaluate our method on both synthetic and real-world datasets. We release the code at https://github.com/xuangu-fang/BayOTIDE
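The GP-to-state-space conversion described in the abstract can be sketched for a single temporal factor. The following is a minimal illustration, not the paper's implementation: it assumes a Matérn-3/2 kernel (whose equivalent SDE has a 2-dimensional state), fixed hyperparameters, and a single factor, and runs a Kalman filter over irregular timestamps, imputing missing entries (NaNs) with predictive mean and variance.

```python
import numpy as np
from scipy.linalg import expm

def matern32_ssm(lengthscale, variance):
    """State-space form of a Matern-3/2 GP prior."""
    lam = np.sqrt(3.0) / lengthscale
    F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])   # SDE drift matrix
    Pinf = np.diag([variance, lam**2 * variance])        # stationary covariance
    H = np.array([[1.0, 0.0]])                           # observe the function value
    return F, Pinf, H

def kalman_impute(ts, ys, lengthscale=1.0, variance=1.0, noise=0.1):
    """Online filtering over irregular timestamps; ys may contain np.nan
    at unobserved points, which are imputed with uncertainty."""
    F, Pinf, H = matern32_ssm(lengthscale, variance)
    m, P = np.zeros((2, 1)), Pinf.copy()
    means, vars_ = [], []
    t_prev = ts[0]
    for t, y in zip(ts, ys):
        # Predict: discretize the SDE over the (possibly irregular) gap.
        A = expm(F * (t - t_prev))
        m = A @ m
        P = A @ P @ A.T + (Pinf - A @ Pinf @ A.T)        # process noise Q
        if not np.isnan(y):                              # update only where observed
            S = H @ P @ H.T + noise
            K = P @ H.T / S
            m = m + K * (y - H @ m)
            P = P - K @ H @ P
        means.append((H @ m).item())
        vars_.append((H @ P @ H.T).item())
        t_prev = t
    return np.array(means), np.array(vars_)
```

Because the transition matrix depends only on the gap `t - t_prev`, arbitrary time stamps are handled naturally, and each observation is processed once, which is what makes the online setting tractable.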
Related papers
- Graph Spatiotemporal Process for Multivariate Time Series Anomaly
Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z) - Continuous-time Autoencoders for Regular and Irregular Time Series Imputation [21.25279298572273]
Time series imputation is one of the most fundamental tasks for time series.
Recent self-attention-based methods show the state-of-the-art imputation performance.
Designing an imputation method based on continuous-time recurrent neural networks has long been overlooked.
arXiv Detail & Related papers (2023-12-27T14:13:42Z) - Deep Ensembles Meets Quantile Regression: Uncertainty-aware Imputation
for Time Series [49.992908221544624]
Time series data often exhibit numerous missing values; recovering them is the time series imputation task.
Previous deep learning methods have been shown to be effective for time series imputation.
We propose a non-generative time series imputation method that produces accurate imputations with inherent uncertainty.
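The uncertainty-aware imputation above pairs deep ensembles with quantile regression. A minimal sketch of the quantile (pinball) loss that underlies such methods, in numpy (the loss itself is standard; its use here as a stand-in for the paper's training objective is an assumption):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: the asymmetric penalty steers y_pred
    toward the q-th quantile of the target distribution."""
    diff = y_true - y_pred
    return np.mean(np.maximum(q * diff, (q - 1.0) * diff))
```

Training one predictor per quantile (e.g. q=0.1 and q=0.9) yields a prediction interval around each imputed value, and an ensemble of such models widens the interval where its members disagree.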
arXiv Detail & Related papers (2023-12-03T05:52:30Z) - Streaming Factor Trajectory Learning for Temporal Tensor Decomposition [33.18423605559094]
We propose Streaming Factor Trajectory Learning for temporal tensor decomposition.
We use Gaussian processes (GPs) to model the trajectory of factors so as to flexibly estimate their temporal evolution.
We have shown the advantage of SFTL in both synthetic tasks and real-world applications.
arXiv Detail & Related papers (2023-10-25T21:58:52Z) - Compatible Transformer for Irregularly Sampled Multivariate Time Series [75.79309862085303]
We propose a transformer-based encoder to achieve comprehensive temporal-interaction feature learning for each individual sample.
We conduct extensive experiments on 3 real-world datasets and validate that the proposed CoFormer significantly and consistently outperforms existing methods.
arXiv Detail & Related papers (2023-10-17T06:29:09Z) - Continuous-time convolutions model of event sequences [53.36665135225617]
Huge samples of event sequences data occur in various domains, including e-commerce, healthcare, and finance.
The amount of available data and the length of event sequences per client are typically large, which requires long-term modelling.
We propose the COTIC method based on a continuous convolution neural network suitable for non-uniform occurrence of events in time.
arXiv Detail & Related papers (2023-02-13T10:34:51Z) - Generative Time Series Forecasting with Diffusion, Denoise, and
Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series data are often recorded over a short time period, which leaves a large gap between the capacity of deep models and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, and so can be integrated seamlessly with neural networks.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - Sparse Algorithms for Markovian Gaussian Processes [18.999495374836584]
Sparse Markovian Gaussian processes combine the use of inducing variables with efficient Kalman-filter-like recursions.
We derive a general site-based approach to approximate the non-Gaussian likelihood with local Gaussian terms, called sites.
Our approach results in a suite of novel sparse extensions to algorithms from both the machine learning and signal processing literature, including variational inference, expectation propagation, and the classical nonlinear Kalman smoothers.
The derived methods are suited to spatio-temporal data, where the model has separate inducing points in both time and space.
arXiv Detail & Related papers (2021-03-19T09:50:53Z) - Fast Variational Learning in State-Space Gaussian Process Models [29.630197272150003]
We build upon an existing method called conjugate-computation variational inference.
We provide an efficient JAX implementation which exploits just-in-time compilation.
Our approach leads to fast and stable variational inference in state-space GP models that can be scaled to time series with millions of data points.
arXiv Detail & Related papers (2020-07-09T12:06:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.