Koopman Neural Forecaster for Time Series with Temporal Distribution Shifts
- URL: http://arxiv.org/abs/2210.03675v2
- Date: Mon, 10 Oct 2022 16:43:05 GMT
- Title: Koopman Neural Forecaster for Time Series with Temporal Distribution Shifts
- Authors: Rui Wang, Yihe Dong, Sercan Ö. Arik, Rose Yu
- Abstract summary: We propose a novel deep sequence model based on the Koopman theory for time series forecasting.
Koopman Neural Forecaster (KNF) learns the linear Koopman space and the coefficients of chosen measurement functions.
We demonstrate that KNF achieves superior performance compared to the alternatives on multiple time series datasets.
- Score: 26.95428146824254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal distributional shifts, with underlying dynamics changing over time,
frequently occur in real-world time series, and pose a fundamental challenge
for deep neural networks (DNNs). In this paper, we propose a novel deep
sequence model based on the Koopman theory for time series forecasting: Koopman
Neural Forecaster (KNF) that leverages DNNs to learn the linear Koopman space
and the coefficients of chosen measurement functions. KNF imposes appropriate
inductive biases for improved robustness against distributional shifts,
employing both a global operator to learn shared characteristics, and a local
operator to capture changing dynamics, as well as a specially-designed feedback
loop to continuously update the learnt operators over time for rapidly varying
behaviors. To the best of our knowledge, this is the first time that Koopman
theory is applied to real-world chaotic time series without known governing
laws. We demonstrate that KNF achieves superior performance compared to the
alternatives on multiple time series datasets that are shown to suffer from
distribution shifts.
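The core mechanism described in the abstract, lifting observations through measurement functions and learning a linear operator in the lifted (Koopman) space, can be illustrated with a minimal DMD-style sketch. This is not the paper's actual architecture (KNF uses learned coefficients, a global and a local operator, and a feedback loop); the function name and the fixed measurement dictionary below are hypothetical.

```python
import numpy as np

def koopman_forecast(series, window=8, horizon=4):
    """Minimal sketch: lift a scalar series with a fixed measurement
    dictionary, fit a local linear (Koopman-style) operator by least
    squares on the lookback window, and roll it forward to forecast."""
    # Measurement functions g(x): a fixed dictionary (identity, x^2, sin x).
    def lift(x):
        return np.array([x, x**2, np.sin(x)])

    # Lifted snapshots z_t from the lookback window.
    Z = np.stack([lift(x) for x in series[-window:]])    # (window, 3)
    # Solve Z[:-1] @ A = Z[1:] for the local linear operator A.
    A, *_ = np.linalg.lstsq(Z[:-1], Z[1:], rcond=None)

    # Propagate entirely in the lifted space; the first lifted
    # coordinate (the identity measurement) is the forecast.
    z = Z[-1]
    preds = []
    for _ in range(horizon):
        z = z @ A
        preds.append(z[0])
    return np.array(preds)
```

Fitting the operator on only the most recent window mirrors, in a crude way, the role of KNF's local operator: the linear dynamics are re-estimated as the distribution drifts.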
Related papers
- Adapting to Length Shift: FlexiLength Network for Trajectory Prediction [53.637837706712794]
Trajectory prediction plays an important role in various applications, including autonomous driving, robotics, and scene understanding.
Existing approaches mainly focus on developing compact neural networks to increase prediction precision on public datasets, typically employing a standardized input duration.
We introduce a general and effective framework, the FlexiLength Network (FLN), to enhance the robustness of existing trajectory prediction techniques against varying observation periods.
arXiv Detail & Related papers (2024-03-31T17:18:57Z)
- Generative Modeling of Regular and Irregular Time Series Data via Koopman VAEs [50.25683648762602]
We introduce Koopman VAE, a new generative framework that is based on a novel design for the model prior.
Inspired by Koopman theory, we represent the latent conditional prior dynamics using a linear map.
KoVAE outperforms state-of-the-art GAN and VAE methods across several challenging synthetic and real-world time series generation benchmarks.
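The "linear map as latent conditional prior" idea in the KoVAE summary can be sketched as sampling a latent trajectory whose conditional dynamics are linear with Gaussian noise. The function name and parameterization below are hypothetical, not the paper's implementation.

```python
import numpy as np

def sample_linear_latent_prior(A, z0, steps, noise=0.1, rng=None):
    """Sketch of a Koopman-inspired latent prior: conditionally,
    z_{t+1} = A z_t + eps with Gaussian eps, so the prior dynamics
    are a linear map in the latent space."""
    rng = rng if rng is not None else np.random.default_rng(0)
    zs = [np.asarray(z0, dtype=float)]
    for _ in range(steps):
        zs.append(A @ zs[-1] + noise * rng.standard_normal(zs[-1].shape))
    return np.stack(zs)  # (steps + 1, latent_dim)
```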
arXiv Detail & Related papers (2023-10-04T07:14:43Z)
- Time Regularization in Optimal Time Variable Learning [0.4490343701046724]
Recently, optimal time variable learning in deep neural networks (DNNs) was introduced in arXiv:2204.08528.
We extend the concept by introducing a regularization term that directly relates to the time horizon in discrete dynamical systems.
We propose an adaptive pruning approach for Residual Neural Networks (ResNets)
Results are illustrated by applying the proposed concepts to classification tasks on the well known MNIST and Fashion MNIST data sets.
arXiv Detail & Related papers (2023-06-28T11:27:48Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors [85.22004745984253]
Real-world time series are characterized by intrinsic non-stationarity that poses a principal challenge for deep forecasting models.
We tackle non-stationary time series with modern Koopman theory that fundamentally considers the underlying time-variant dynamics.
We propose Koopa as a novel Koopman forecaster composed of stackable blocks that learn hierarchical dynamics.
arXiv Detail & Related papers (2023-05-30T07:40:27Z)
- Modeling Nonlinear Dynamics in Continuous Time with Inductive Biases on Decay Rates and/or Frequencies [37.795752939016225]
We propose a neural network-based model for nonlinear dynamics in continuous time that can impose inductive biases on decay rates and frequencies.
We use neural networks to find an appropriate Koopman space; they are trained by minimizing multi-step forecasting and backcasting errors on irregularly sampled time-series data.
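The multi-step forecasting and backcasting objective mentioned above can be sketched, assuming the data have already been lifted into a Koopman space Z with linear operator K. The function name is hypothetical, and backcasting is approximated here with the pseudo-inverse of K rather than a separately learned backward operator.

```python
import numpy as np

def multistep_losses(Z, K, steps=3):
    """Sketch: accumulate multi-step forecasting (forward) and
    backcasting (backward) mean-squared errors in a lifted space
    Z of shape (T, d) governed by z_{t+1} = z_t @ K."""
    fwd, bwd = 0.0, 0.0
    K_inv = np.linalg.pinv(K)  # backward dynamics via pseudo-inverse
    for s in range(1, steps + 1):
        # s-step forecast: z_t @ K^s should match z_{t+s}.
        pred_fwd = Z[:-s] @ np.linalg.matrix_power(K, s)
        fwd += np.mean((pred_fwd - Z[s:]) ** 2)
        # s-step backcast: z_{t+s} @ K^{-s} should match z_t.
        pred_bwd = Z[s:] @ np.linalg.matrix_power(K_inv, s)
        bwd += np.mean((pred_bwd - Z[:-s]) ** 2)
    return fwd, bwd
```

Because the loss only compares lifted states at shifted indices, it applies to irregular sampling as well once K is parameterized by the time gap.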
arXiv Detail & Related papers (2022-12-26T08:08:43Z)
- TO-FLOW: Efficient Continuous Normalizing Flows with Temporal Optimization adjoint with Moving Speed [12.168241245313164]
Continuous normalizing flows (CNFs) construct invertible mappings between an arbitrary complex distribution and an isotropic Gaussian distribution.
Training CNFs has not been tractable on large datasets due to the growing complexity of neural ODE training.
In this paper, a temporal optimization is proposed that optimizes the evolutionary time for forward propagation in neural ODE training.
arXiv Detail & Related papers (2022-03-19T14:56:41Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
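The construction described above, a linear first-order system whose effective time constant is modulated by a nonlinear gate, can be sketched as a single explicit-Euler update. This is a loose illustration of the idea, not the paper's exact cell equations; the function name, gate parameterization, and step size are assumptions.

```python
import numpy as np

def ltc_step(x, inp, W, tau, A, dt=0.05):
    """Sketch of a liquid time-constant update: the state x relaxes
    toward A, with an input- and state-dependent sigmoid gate f that
    modulates the effective relaxation rate (one Euler step)."""
    f = 1.0 / (1.0 + np.exp(-(W @ np.concatenate([x, inp]))))  # gate in (0, 1)
    # Linear first-order dynamics with a varying time constant:
    # dx/dt = -(1/tau + f) * x + f * A
    dx = -(1.0 / tau + f) * x + f * A
    return x + dt * dx
```

Because the gate f is bounded in (0, 1), the effective decay rate stays between 1/tau and 1/tau + 1, which is one way to see why such states remain stable and bounded.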
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Forecasting Sequential Data using Consistent Koopman Autoencoders [52.209416711500005]
A new class of physics-based methods related to Koopman theory has been introduced, offering an alternative for processing nonlinear dynamical systems.
We propose a novel Consistent Koopman Autoencoder model which, unlike the majority of existing work, leverages the forward and backward dynamics.
Key to our approach is a new analysis which explores the interplay between consistent dynamics and their associated Koopman operators.
arXiv Detail & Related papers (2020-03-04T18:24:30Z)
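The interplay between forward and backward dynamics in the Consistent Koopman Autoencoder entry above can be sketched as a consistency penalty: if C propagates the latent state forward and D propagates it backward, the two operators should be approximate inverses. The function name and exact penalty form below are illustrative assumptions, not the paper's stated loss.

```python
import numpy as np

def consistency_penalty(C, D):
    """Sketch: penalize deviation of the forward operator C and the
    backward operator D from being mutual inverses, via the Frobenius
    distance of D @ C and C @ D from the identity."""
    I = np.eye(C.shape[0])
    return 0.5 * (np.linalg.norm(D @ C - I, "fro") ** 2
                  + np.linalg.norm(C @ D - I, "fro") ** 2)
```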
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.