Learning dynamical systems from data: A simple cross-validation perspective, part III: Irregularly-Sampled Time Series
- URL: http://arxiv.org/abs/2111.13037v2
- Date: Thu, 03 Oct 2024 21:30:36 GMT
- Title: Learning dynamical systems from data: A simple cross-validation perspective, part III: Irregularly-Sampled Time Series
- Authors: Jonghyeon Lee, Edward De Brouwer, Boumediene Hamzi, Houman Owhadi
- Abstract summary: A simple and interpretable way to learn a dynamical system from data is to interpolate its vector-field with a kernel.
Despite its previous successes, this strategy breaks down when the observed time series is not regularly sampled in time.
We propose to address this problem by directly approximating the vector field of the dynamical system, incorporating time differences between observations into the (KF) data-adapted kernels.
- Score: 8.918419734720613
- License:
- Abstract: A simple and interpretable way to learn a dynamical system from data is to interpolate its vector-field with a kernel. In particular, this strategy is highly efficient (both in terms of accuracy and complexity) when the kernel is data-adapted using Kernel Flows (KF)\cite{Owhadi19} (which uses gradient-based optimization to learn a kernel based on the premise that a kernel is good if there is no significant loss in accuracy if half of the data is used for interpolation). Despite its previous successes, this strategy (based on interpolating the vector field driving the dynamical system) breaks down when the observed time series is not regularly sampled in time. In this work, we propose to address this problem by directly approximating the vector field of the dynamical system by incorporating time differences between observations in the (KF) data-adapted kernels. We compare our approach with the classical one over different benchmark dynamical systems and show that it significantly improves the forecasting accuracy while remaining simple, fast, and robust.
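The core idea of the abstract can be sketched in a few lines: regress the observed increments on (state, time-gap) pairs with a kernel, so irregular sampling enters through the time differences. This is a minimal illustrative sketch using a fixed Gaussian kernel and kernel ridge regression, not the paper's exact Kernel Flows formulation (in which the kernel itself is learned by gradient descent on a cross-validation loss); all function names and parameters here are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Gaussian (RBF) kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def fit_flow_map(X, t, lengthscale=1.0, reg=1e-8):
    """Kernel ridge regression of the increments x_{n+1} - x_n on
    augmented inputs (x_n, dt_n): irregular sampling is handled by
    making the time gap dt_n part of the kernel's input."""
    dt = np.diff(t)[:, None]
    Z = np.hstack([X[:-1], dt])          # augmented inputs (state, time gap)
    Y = X[1:] - X[:-1]                   # observed increments
    K = rbf_kernel(Z, Z, lengthscale)
    alpha = np.linalg.solve(K + reg * np.eye(len(Z)), Y)
    def predict(x, dt_new):
        z = np.hstack([x, [dt_new]])[None, :]
        return x + rbf_kernel(z, Z, lengthscale) @ alpha
    return predict

# toy example: x' = -x observed at irregular times
t = np.sort(np.random.default_rng(0).uniform(0, 4, 60))
X = np.exp(-t)[:, None]
step = fit_flow_map(X, t, lengthscale=0.5)
x_pred = step(X[10], t[11] - t[10])      # one-step forecast from sample 10
```

In the paper's setting the kernel hyperparameters would additionally be trained with the KF loss (accuracy should not degrade much when half the data is held out); here the lengthscale is simply fixed.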
Related papers
- Kernel Sum of Squares for Data Adapted Kernel Learning of Dynamical Systems from Data: A global optimization approach [0.19999259391104385]
This paper examines the application of the Kernel Sum of Squares (KSOS) method for enhancing kernel learning from data.
Traditional kernel-based methods frequently struggle with selecting optimal base kernels and parameter tuning.
KSOS mitigates these issues by leveraging a global optimization framework with kernel-based surrogate functions.
arXiv Detail & Related papers (2024-08-12T19:32:28Z) - Graph Spatiotemporal Process for Multivariate Time Series Anomaly Detection with Missing Values [67.76168547245237]
We introduce a novel framework called GST-Pro, which utilizes a graph spatiotemporal process and an anomaly scorer to detect anomalies.
Our experimental results show that the GST-Pro method can effectively detect anomalies in time series data and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-01-11T10:10:16Z) - Learning Dynamical Systems from Data: A Simple Cross-Validation Perspective, Part V: Sparse Kernel Flows for 132 Chaotic Dynamical Systems [5.124035247669094]
We introduce the method of Sparse Kernel Flows in order to learn the "best" kernel by starting from a large dictionary of kernels.
We apply this approach to a library of 132 chaotic systems.
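The dictionary idea above can be illustrated concretely: the learned kernel is a non-negative combination of base kernels, and sparsity would come from pruning weights during Kernel Flows training. This is a hedged sketch; the base kernels, their parameters, and the weights below are illustrative choices, not the paper's actual dictionary.

```python
import numpy as np

# Base kernels as functions of squared distance r2 (or distance r).
def gaussian(r2, s):      return np.exp(-r2 / (2 * s**2))
def laplacian(r, s):      return np.exp(-r / s)
def rational_quad(r2, s): return 1.0 / (1.0 + r2 / s**2)

def dictionary_kernel(A, B, weights):
    """Weighted sum of base kernels from a small dictionary; a sparse
    weight vector effectively selects a subset of the dictionary."""
    r2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    r = np.sqrt(r2)
    bases = [gaussian(r2, 1.0), laplacian(r, 1.0), rational_quad(r2, 1.0)]
    return sum(w * k for w, k in zip(weights, bases))

X = np.random.default_rng(1).normal(size=(5, 2))
# "Sparse" weights: the Laplacian term has been pruned to zero.
K = dictionary_kernel(X, X, weights=[0.7, 0.0, 0.3])
```

A sum of positive-definite kernels with non-negative weights is again positive definite, which is why this parameterization is convenient to optimize over.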
arXiv Detail & Related papers (2021-10-08T19:35:39Z) - Hankel-structured Tensor Robust PCA for Multivariate Traffic Time Series Anomaly Detection [9.067182100565695]
This study proposes a Hankel-structured tensor version of RPCA for anomaly detection in spatiotemporal traffic data.
We decompose the corrupted matrix into a low-rank Hankel tensor and a sparse matrix.
We evaluate the method by synthetic data and passenger flow time series.
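The Hankel structure mentioned above is just a delay embedding of the series; the embedded matrix has constant anti-diagonals, and its low-rank part captures temporal regularity while a sparse residual captures anomalies. A minimal sketch of the embedding step (the tensor and RPCA machinery of the paper are omitted):

```python
import numpy as np

def hankel_embed(x, window):
    """Delay (Hankel) embedding of a 1-D series: row i holds
    x[i : i + window], so anti-diagonals are constant. RPCA-style
    methods split this matrix into low-rank + sparse parts."""
    n = len(x) - window + 1
    return np.stack([x[i:i + window] for i in range(n)])

x = np.arange(8.0)                 # toy series 0, 1, ..., 7
H = hankel_embed(x, window=3)      # shape (6, 3)
```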
arXiv Detail & Related papers (2021-10-08T19:35:39Z) - Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent developments in approaches based on deep learning have achieved sub-second runtimes for DiffIR.
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-07-21T12:26:46Z) - KalmanNet: Neural Network Aided Kalman Filtering for Partially Known Dynamics [84.18625250574853]
We present KalmanNet, a real-time state estimator that learns from data to carry out Kalman filtering under non-linear dynamics.
We numerically demonstrate that KalmanNet overcomes nonlinearities and model mismatch, outperforming classic filtering methods.
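For context, the classic model-based baseline that KalmanNet augments is the linear Kalman filter, whose predict/update cycle is standard. The sketch below shows that cycle (KalmanNet itself replaces the analytically computed gain with a learned one; that part is not shown):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of the classical linear Kalman filter."""
    # predict: propagate state and covariance through the model
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # update: correct with the measurement z via the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# scalar random-walk example: noisy measurements of a level near 1.0
F = H = np.eye(1)
Q, R = np.array([[1e-4]]), np.array([[1e-2]])
x, P = np.zeros(1), np.eye(1)
for z in [0.9, 1.1, 1.0, 0.95]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
```

Under nonlinear dynamics or a mismatched (F, H, Q, R), this analytic gain degrades, which is the gap the learned estimator targets.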
arXiv Detail & Related papers (2021-03-28T20:13:53Z) - A Temporal Kernel Approach for Deep Learning with Continuous-time Information [18.204325860752768]
Sequential deep learning models such as RNNs, causal CNNs, and attention mechanisms do not readily consume continuous-time information.
Discretizing the temporal data, as we show, causes inconsistency even for simple continuous-time processes.
We provide a principled way to characterize continuous-time systems using deep learning tools.
arXiv Detail & Related papers (2021-03-28T20:13:53Z) - Learning Compositional Sparse Gaussian Processes with a Shrinkage Prior [26.52863547394537]
We present a novel probabilistic algorithm to learn a kernel composition by handling the sparsity in the kernel selection with Horseshoe prior.
Our model can capture characteristics of time series with significant reductions in computational time and have competitive regression performance on real-world data sets.
arXiv Detail & Related papers (2020-08-14T05:46:56Z) - Federated Doubly Stochastic Kernel Learning for Vertically Partitioned Data [93.76907759950608]
We propose FDSKL, a federated doubly stochastic kernel learning algorithm for vertically partitioned data.
We show that FDSKL is significantly faster than state-of-the-art federated learning methods when dealing with kernels.
arXiv Detail & Related papers (2020-06-15T01:10:59Z) - Multiple Video Frame Interpolation via Enhanced Deformable Separable Convolution [67.83074893311218]
Kernel-based methods predict pixels with a single convolution process that convolves source frames with spatially adaptive local kernels.
We propose enhanced deformable separable convolution (EDSC) to estimate not only adaptive kernels, but also offsets, masks and biases.
We show that our method performs favorably against the state-of-the-art methods across a broad range of datasets.
arXiv Detail & Related papers (2020-06-15T01:10:59Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
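The construction described above — networks of linear first-order systems whose effective time constants are modulated by a nonlinear gate — can be sketched with a simple Euler step. This is an illustrative liquid-time-constant-style cell under assumed parameter shapes, not the paper's exact equations or training setup:

```python
import numpy as np

def ltc_like_step(x, u, tau, W, A, dt=0.01):
    """Euler step of a gated leaky integrator: each unit follows
    dx/dt = -x/tau + f * (A - x), i.e. a linear first-order system
    whose effective time constant 1/(1/tau + f) depends on the
    input-driven gate f. Since f is bounded, the state stays bounded."""
    f = np.tanh(W @ np.concatenate([x, u]))   # input-dependent gate
    dx = -x / tau + f * (A - x)
    return x + dt * dx

rng = np.random.default_rng(0)
x = np.zeros(4)                               # 4 hidden units
W = rng.normal(size=(4, 5))                   # gate weights (state + 1 input)
A = np.ones(4)                                # reversal-like targets
tau = np.full(4, 0.5)
for _ in range(100):                          # drive with constant input
    x = ltc_like_step(x, np.array([1.0]), tau, W, A)
```

Because the gate `f` lies in (-1, 1) and 1/tau = 2 here, the decay rate 1/tau + f stays positive, which is the mechanism behind the stability claim.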
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.