Hybrid Variational Autoencoder for Time Series Forecasting
        - URL: http://arxiv.org/abs/2303.07048v1
 - Date: Mon, 13 Mar 2023 12:13:28 GMT
 - Title: Hybrid Variational Autoencoder for Time Series Forecasting
 - Authors: Borui Cai, Shuiqiao Yang, Longxiang Gao and Yong Xiang
 - Abstract summary: Variational autoencoders (VAE) are powerful generative models that learn the latent representations of input data as random variables.
We propose a novel hybrid variational autoencoder (HyVAE) to integrate the learning of local patterns and temporal dynamics by variational inference for time series forecasting.
 - Score: 12.644797358419618
 - License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
 - Abstract:   Variational autoencoders (VAE) are powerful generative models that learn the
latent representations of input data as random variables. Recent studies show
that VAE can flexibly learn the complex temporal dynamics of time series and
achieve more promising forecasting results than deterministic models. However,
a major limitation of existing works is that they fail to jointly learn the
local patterns (e.g., seasonality and trend) and temporal dynamics of time
series for forecasting. Accordingly, we propose a novel hybrid variational
autoencoder (HyVAE) to integrate the learning of local patterns and temporal
dynamics by variational inference for time series forecasting. Experimental
results on four real-world datasets show that the proposed HyVAE achieves
better forecasting results than various counterpart methods, as well as two
HyVAE variants that only learn the local patterns or temporal dynamics of time
series, respectively.
 
       
      
        Related papers
- Breaking Silos: Adaptive Model Fusion Unlocks Better Time Series Forecasting [64.45587649141842]
Time-series forecasting plays a critical role in many real-world applications. No single model consistently outperforms others across different test samples; instead, each model excels in specific cases. We introduce TimeFuse, a framework for collective time-series forecasting with sample-level adaptive fusion of heterogeneous models.
arXiv  Detail & Related papers  (2025-05-24T00:45:07Z)
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a generative Transformer for unified time series forecasting.
Timer-XL achieves state-of-the-art performance across challenging forecasting benchmarks through a unified approach.
arXiv  Detail & Related papers  (2024-10-07T07:27:39Z)
- Stochastic Diffusion: A Diffusion Probabilistic Model for Stochastic Time Series Forecasting [8.232475807691255]
We propose a novel Stochastic Diffusion (StochDiff) model which learns data-driven prior knowledge at each time step.
The learnt prior knowledge helps the model to capture complex temporal dynamics and the inherent uncertainty of the data.
arXiv  Detail & Related papers  (2024-06-05T00:13:38Z)
- PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the Perspective of Partial Differential Equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse real-world temporal LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv  Detail & Related papers  (2024-02-25T17:39:44Z)
- Unified Training of Universal Time Series Forecasting Transformers [104.56318980466742]
We present a Masked Encoder-based Universal Time Series Forecasting Transformer (Moirai).
Moirai is trained on our newly introduced Large-scale Open Time Series Archive (LOTSA) featuring over 27B observations across nine domains.
Moirai achieves competitive or superior performance as a zero-shot forecaster when compared to full-shot models.
arXiv  Detail & Related papers  (2024-02-04T20:00:45Z)
- TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting [24.834846119163885]
We propose a novel framework, TEMPO, that can effectively learn time series representations.
TEMPO expands the capability for dynamically modeling real-world temporal phenomena from data within diverse domains.
arXiv  Detail & Related papers  (2023-10-08T00:02:25Z)
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
arXiv  Detail & Related papers  (2023-10-04T07:14:43Z)
- Time Series Continuous Modeling for Imputation and Forecasting with Implicit Neural Representations [15.797295258800638]
We introduce a novel modeling approach for time series imputation and forecasting, tailored to address the challenges often encountered in real-world data.
Our method relies on a continuous-time-dependent model of the series' evolution dynamics.
A modulation mechanism, driven by a meta-learning algorithm, allows adaptation to unseen samples and extrapolation beyond observed time-windows.
arXiv  Detail & Related papers  (2023-06-09T13:20:04Z)
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series are often recorded over short time periods, which leaves a large gap between the capacity of deep models and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling and propose a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv  Detail & Related papers  (2023-01-08T12:20:46Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv  Detail & Related papers  (2022-05-16T07:53:42Z)
- Anomaly Detection of Time Series with Smoothness-Inducing Sequential Variational Auto-Encoder [59.69303945834122]
We present a Smoothness-Inducing Sequential Variational Auto-Encoder (SISVAE) model for robust estimation and anomaly detection of time series.
Our model parameterizes mean and variance for each time-stamp with flexible neural networks.
We show the effectiveness of our model on both synthetic datasets and public real-world benchmarks.
arXiv  Detail & Related papers  (2021-02-02T06:15:15Z)
- Global Models for Time Series Forecasting: A Simulation Study [2.580765958706854]
We simulate time series from simple data generating processes (DGP), such as Auto Regressive (AR) and Seasonal AR, to complex DGPs, such as Chaotic Logistic Map, Self-Exciting Threshold Auto-Regressive, and Mackey-Glass equations.
The lengths and the number of series in the dataset are varied in different scenarios.
We perform experiments on these datasets using global forecasting models including Recurrent Neural Networks (RNN), Feed-Forward Neural Networks, Pooled Regression (PR) models, and Light Gradient Boosting Models (LGBM).
arXiv  Detail & Related papers  (2020-12-23T04:45:52Z)
- Deep Transformer Models for Time Series Forecasting: The Influenza Prevalence Case [2.997238772148965]
Time series data are prevalent in many scientific and engineering disciplines.
We present a new approach to time series forecasting using Transformer-based machine learning models.
We show that the forecasting results produced by our approach are favorably comparable to the state-of-the-art.
arXiv  Detail & Related papers  (2020-01-23T00:22:22Z) 
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.