PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from
the perspective of partial differential equations
- URL: http://arxiv.org/abs/2402.16913v1
- Date: Sun, 25 Feb 2024 17:39:44 GMT
- Authors: Shiyi Qi, Zenglin Xu, Yiduo Li, Liangjian Wen, Qingsong Wen, Qifan
Wang, Yuan Qi
- Abstract summary: We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic spatiotemporal nature of the data.
- Score: 49.80959046861793
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in deep learning have led to the development of various
models for long-term multivariate time-series forecasting (LMTF), many of which
have shown promising results. Generally, the focus has been on
historical-value-based models, which rely on past observations to predict
future series. Notably, a new trend has emerged with time-index-based models,
offering a more nuanced understanding of the continuous dynamics underlying
time series. Unlike these two types of models that aggregate the information of
spatial domains or temporal domains, in this paper, we consider multivariate
time series as spatiotemporal data regularly sampled from a continuous
dynamical system, which can be represented by partial differential equations
(PDEs), with the spatial domain being fixed. Building on this perspective, we
present PDETime, a novel LMTF model inspired by the principles of Neural PDE
solvers, following the encoding-integration-decoding operations. Our extensive
experimentation across seven diverse real-world LMTF datasets reveals that
PDETime not only adapts effectively to the intrinsic spatiotemporal nature of
the data but also sets new benchmarks, achieving state-of-the-art results.
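The encoding-integration-decoding pipeline described above can be illustrated with a minimal sketch. This is a hypothetical toy, not the authors' architecture: the dimensions, the random weights standing in for trained parameters, and the Euler integrator are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; not the paper's actual hyperparameters).
d_in, d_latent, d_out, n_steps = 3, 8, 3, 16

# Randomly initialised weights stand in for trained parameters.
W_enc = rng.normal(scale=0.1, size=(d_in, d_latent))
W_dyn = rng.normal(scale=0.1, size=(d_latent, d_latent))
W_dec = rng.normal(scale=0.1, size=(d_latent, d_out))

def encode(x):
    # Map the latest multivariate observation (a spatial snapshot of the
    # assumed continuous dynamical system) to a latent state.
    return np.tanh(x @ W_enc)

def integrate(z, horizon, dt=0.1):
    # Euler integration of a learned time derivative dz/dt = f(z),
    # mirroring the "integration" stage of a neural PDE solver.
    for _ in range(horizon):
        z = z + dt * np.tanh(z @ W_dyn)
    return z

def decode(z):
    # Project the evolved latent state back to the observation space.
    return z @ W_dec

x_last = rng.normal(size=d_in)  # most recent multivariate observation
forecast = decode(integrate(encode(x_last), n_steps))
print(forecast.shape)  # (3,)
```

With the spatial domain fixed, forecasting reduces to evolving the latent state forward in time, which is why the three stages compose as encode → integrate → decode.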
Related papers
- Towards Long-Context Time Series Foundation Models [17.224575072056627]
Time series foundation models have shown impressive performance on a variety of tasks, across a wide range of domains, even in zero-shot settings.
This study bridges the gap by systematically comparing various context expansion techniques from both language and time series domains.
arXiv Detail & Related papers (2024-09-20T14:19:59Z)
- Attractor Memory for Long-Term Time Series Forecasting: A Chaos Perspective [63.60312929416228]
Attraos incorporates chaos theory into long-term time series forecasting.
We show that Attraos outperforms various LTSF methods on mainstream datasets and chaotic datasets with only one-twelfth of the parameters compared to PatchTST.
arXiv Detail & Related papers (2024-02-18T05:35:01Z)
- TEMPO: Prompt-based Generative Pre-trained Transformer for Time Series Forecasting [24.834846119163885]
We propose a novel framework, TEMPO, that can effectively learn time series representations.
TEMPO expands the capability for dynamically modeling real-world temporal phenomena from data within diverse domains.
arXiv Detail & Related papers (2023-10-08T00:02:25Z)
- Anamnesic Neural Differential Equations with Orthogonal Polynomial Projections [6.345523830122166]
We propose PolyODE, a formulation that enforces long-range memory and preserves a global representation of the underlying dynamical system.
Our construction is backed by favourable theoretical guarantees and we demonstrate that it outperforms previous works in the reconstruction of past and future data.
arXiv Detail & Related papers (2023-03-03T10:49:09Z)
- EgPDE-Net: Building Continuous Neural Networks for Time Series Prediction with Exogenous Variables [22.145726318053526]
Inter-series correlation and time dependence among variables are rarely considered in the present continuous methods.
We propose a continuous-time model for arbitrary-step prediction to learn an unknown PDE system.
arXiv Detail & Related papers (2022-08-03T08:34:31Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs [65.18780403244178]
We propose a continuous model to forecast Multivariate Time series with dynamic Graph neural Ordinary Differential Equations (MTGODE).
Specifically, we first abstract multivariate time series into dynamic graphs with time-evolving node features and unknown graph structures.
Then, we design and solve a neural ODE to complement missing graph topologies and unify both spatial and temporal message passing.
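The two steps in this summary (spatial message passing over a learned graph, fused with temporal evolution via an ODE) can be sketched minimally. This is a hypothetical illustration, not MTGODE itself: the random row-normalised adjacency stands in for the inferred graph topology, and a plain Euler loop stands in for a proper ODE solver.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, d_feat, horizon, dt = 4, 2, 20, 0.05

# A learnable adjacency stands in for the "missing graph topology";
# row-normalised so propagation is a weighted average over neighbours.
A = rng.random((n_nodes, n_nodes))
A = A / A.sum(axis=1, keepdims=True)
W = rng.normal(scale=0.1, size=(d_feat, d_feat))

def f(H):
    # Spatial message passing (A @ H) fused with a feature transform
    # (@ W): dH/dt = f(H) is the vector field the neural ODE solves,
    # unifying spatial and temporal message passing in one dynamics.
    return np.tanh(A @ H @ W) - H

H = rng.normal(size=(n_nodes, d_feat))  # node features of the latest snapshot
for _ in range(horizon):
    H = H + dt * f(H)  # Euler step of the ODE

print(H.shape)  # (4, 2)
```

Solving one continuous dynamics over the graph avoids stacking discrete spatial and temporal layers separately, which is the design point the abstract highlights.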
arXiv Detail & Related papers (2022-02-17T02:17:31Z)
- Time Series Forecasting with Ensembled Stochastic Differential Equations Driven by Lévy Noise [2.3076895420652965]
We use a collection of SDEs equipped with neural networks to predict long-term trend of noisy time series.
Our contributions are twofold: first, we use the phase space reconstruction method to extract the intrinsic dimension of the time series data.
Second, we explore SDEs driven by $\alpha$-stable Lévy motion to model the time series data and solve the problem through neural network approximation.
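A toy simulation of an SDE driven by $\alpha$-stable noise may make the second contribution concrete. This is an assumed example, not the paper's model: it uses a simple mean-reverting drift, and fixes $\alpha = 1$ (the Cauchy case) because NumPy provides those samples directly; general $\alpha$ would need e.g. the Chambers-Mallows-Stuck algorithm or `scipy.stats.levy_stable`.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy SDE: dX_t = -theta * X_t dt + sigma * dL_t, where L_t is an
# alpha-stable Levy process (here alpha = 1, i.e. Cauchy increments).
theta, sigma, dt, n_steps = 0.5, 0.1, 0.01, 500

x = np.empty(n_steps + 1)
x[0] = 1.0
for t in range(n_steps):
    # Alpha-stable increments scale as dt**(1/alpha); for alpha = 1 that is dt.
    dL = dt * rng.standard_cauchy()
    x[t + 1] = x[t] + (-theta * x[t]) * dt + sigma * dL

print(x.shape)  # (501,)
```

The heavy tails of the stable increments produce occasional large jumps, which is what makes Lévy-driven SDEs a candidate for noisy series that Gaussian diffusions model poorly.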
arXiv Detail & Related papers (2021-11-25T16:49:01Z)
- Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z)
- Connecting the Dots: Multivariate Time Series Forecasting with Graph Neural Networks [91.65637773358347]
We propose a general graph neural network framework designed specifically for multivariate time series data.
Our approach automatically extracts the uni-directed relations among variables through a graph learning module.
Our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets.
arXiv Detail & Related papers (2020-05-24T04:02:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.