EgPDE-Net: Building Continuous Neural Networks for Time Series
Prediction with Exogenous Variables
- URL: http://arxiv.org/abs/2208.01913v2
- Date: Mon, 25 Sep 2023 08:08:04 GMT
- Title: EgPDE-Net: Building Continuous Neural Networks for Time Series
Prediction with Exogenous Variables
- Authors: Penglei Gao, Xi Yang, Rui Zhang, Ping Guo, John Y. Goulermas, and
Kaizhu Huang
- Abstract summary: Inter-series correlation and time dependence among variables are rarely considered by existing continuous-time methods.
We propose a continuous-time model for arbitrary-step prediction to learn an unknown PDE system.
- Score: 22.145726318053526
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: While exogenous variables have a major impact on performance
improvement in time series analysis, inter-series correlation and time
dependence among them are rarely considered by existing continuous-time
methods. The dynamical systems underlying multivariate time series can be
modelled with complex unknown partial differential equations (PDEs), which
play a prominent role in many disciplines of science and engineering. In this
paper, we propose a continuous-time model for arbitrary-step prediction that
learns an unknown PDE system in multivariate time series whose governing
equations are parameterised by self-attention and gated recurrent neural
networks. The proposed model, the \underline{E}xogenous-\underline{g}uided
\underline{P}artial \underline{D}ifferential \underline{E}quation Network
(EgPDE-Net), takes into account the relationships among the exogenous
variables and their effects on the target series. Importantly, the model can
be reduced to a regularised ordinary differential equation (ODE) problem with
specially designed regularisation guidance, which makes the PDE problem
tractable for numerical solution and makes it feasible to predict multiple
future values of the target series at arbitrary time points. Extensive
experiments demonstrate that our proposed model achieves competitive accuracy
against strong baselines: on average, it outperforms the best baseline,
reducing RMSE by $9.85\%$ and MAE by $13.98\%$ for arbitrary-step prediction.
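As a rough illustration of the approach the abstract describes (a minimal sketch, not the authors' released implementation: the class name `ExogenousGuidedDynamics`, the dimensions, and the use of the open-source `torchdiffeq` solver are all assumptions), a latent state can be evolved by dynamics parameterised with self-attention over the exogenous variables and a GRU-style gate, then queried at arbitrary time points:

```python
# Minimal sketch of a continuous-time model guided by exogenous variables.
# Not the paper's code: names, dimensions, and solver choice are assumptions.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq


class ExogenousGuidedDynamics(nn.Module):
    """dh/dt = g(h, attention-pooled exogenous context)."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Self-attention captures inter-series correlation among the
        # exogenous variables (each series value embedded to hidden_dim).
        self.embed = nn.Linear(1, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=2,
                                          batch_first=True)
        self.gate = nn.GRUCell(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, hidden_dim)

    def set_exogenous(self, exog: torch.Tensor):
        # exog: (batch, n_exog) exogenous values; a full model would
        # interpolate them in continuous time rather than fix them at t0.
        self.ctx = self.embed(exog.unsqueeze(-1))  # (batch, n_exog, hidden)

    def forward(self, t, h):
        attended, _ = self.attn(self.ctx, self.ctx, self.ctx)
        pooled = attended.mean(dim=1)              # (batch, hidden)
        # The gated update stands in for the learned governing equation.
        return self.out(self.gate(pooled, h))


batch, hidden, n_exog = 8, 32, 5
func = ExogenousGuidedDynamics(hidden)
func.set_exogenous(torch.randn(batch, n_exog))
h0 = torch.zeros(batch, hidden)

# Arbitrary-step prediction: query the ODE solution at irregular times.
t = torch.tensor([0.0, 0.3, 1.1, 2.0])
h_t = odeint(func, h0, t)  # shape: (len(t), batch, hidden)
print(h_t.shape)
```

The abstract's reduction to a regularised ODE would appear in training as an extra penalty term in the loss; that guidance term is omitted from this sketch.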
Related papers
- Trajectory Flow Matching with Applications to Clinical Time Series Modeling [77.58277281319253]
Trajectory Flow Matching (TFM) trains a Neural SDE in a simulation-free manner, bypassing backpropagation through the dynamics.
We demonstrate improved results on three clinical time series datasets in terms of absolute performance and uncertainty prediction.
arXiv Detail & Related papers (2024-10-28T15:54:50Z) - PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from
the perspective of partial differential equations [49.80959046861793]
We present PDETime, a novel LMTF model inspired by the principles of Neural PDE solvers.
Our experimentation across seven diverse temporal real-world LMTF datasets reveals that PDETime adapts effectively to the intrinsic nature of the data.
arXiv Detail & Related papers (2024-02-25T17:39:44Z) - PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers [40.097474800631]
Time-dependent partial differential equations (PDEs) are ubiquitous in science and engineering.
Deep neural network-based surrogates have gained increased interest.
arXiv Detail & Related papers (2023-08-10T17:53:05Z) - Learning PDE Solution Operator for Continuous Modeling of Time-Series [1.39661494747879]
This work presents a partial differential equation (PDE) based framework which improves the dynamics modeling capability.
We propose a neural operator that can handle time continuously without requiring iterative operations or specific grids of temporal discretization.
Our framework opens up a new way for a continuous representation of neural networks that can be readily adopted for real-world applications.
arXiv Detail & Related papers (2023-02-02T03:47:52Z) - Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z) - Multivariate Time Series Forecasting with Dynamic Graph Neural ODEs [65.18780403244178]
We propose a continuous model to forecast Multivariate Time series with dynamic Graph neural Ordinary Differential Equations (MTGODE).
Specifically, we first abstract multivariate time series into dynamic graphs with time-evolving node features and unknown graph structures.
Then, we design and solve a neural ODE to complement missing graph topologies and unify both spatial and temporal message passing.
arXiv Detail & Related papers (2022-02-17T02:17:31Z) - Time Series Forecasting with Ensembled Stochastic Differential Equations
Driven by Lévy Noise [2.3076895420652965]
We use a collection of SDEs equipped with neural networks to predict the long-term trend of noisy time series.
Our contributions are twofold: first, we use the phase-space reconstruction method to extract the intrinsic dimension of the time series data;
second, we explore SDEs driven by $\alpha$-stable Lévy motion to model the time series data and solve the problem through neural network approximation.
arXiv Detail & Related papers (2021-11-25T16:49:01Z) - Stochastically forced ensemble dynamic mode decomposition for
forecasting and analysis of near-periodic systems [65.44033635330604]
We introduce a novel load forecasting method in which observed dynamics are modeled as a forced linear system.
We show that its use of intrinsic linear dynamics offers a number of desirable properties in terms of interpretability and parsimony.
Results are presented for a test case using load data from an electrical grid.
arXiv Detail & Related papers (2020-10-08T20:25:52Z) - Learning continuous-time PDEs from sparse data with graph neural
networks [10.259254824702555]
We propose a continuous-time differential model for dynamical systems whose governing equations are parameterized by message passing graph neural networks.
We demonstrate the model's ability to work with unstructured grids, arbitrary time steps, and noisy observations.
We compare our method with existing approaches on several well-known physical systems involving first- and higher-order PDEs, achieving state-of-the-art predictive performance.
arXiv Detail & Related papers (2020-06-16T07:15:40Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations; a minimal sketch of such a cell follows this list.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
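The Liquid Time-constant Networks entry above describes cells built from linear first-order dynamical systems whose effective time constants are modulated by a learned nonlinearity. A minimal Euler-step sketch of one such cell (the class name `LTCCell`, the dimensions, and the step size are illustrative assumptions, not the paper's code):

```python
# Hypothetical LTC-style cell: dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A,
# where f is a learned gate; integrated here with explicit Euler steps.
import torch
import torch.nn as nn


class LTCCell(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int, tau: float = 1.0):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(in_dim + hidden_dim, hidden_dim), nn.Sigmoid()
        )
        self.A = nn.Parameter(torch.zeros(hidden_dim))  # learned target state
        self.tau = tau

    def forward(self, x, u, dt: float = 0.1):
        gate = self.f(torch.cat([u, x], dim=-1))  # input-dependent rate
        dxdt = -(1.0 / self.tau + gate) * x + gate * self.A
        return x + dt * dxdt                      # one Euler step


cell = LTCCell(in_dim=3, hidden_dim=16)
x = torch.zeros(4, 16)
for _ in range(20):  # roll the cell over a random input sequence
    x = cell(x, torch.randn(4, 3))
```

The input-dependent gate is what makes the time constant "liquid": the state decays at a rate that varies with the input, which is where the stable, bounded behavior highlighted in the entry comes from.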
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.