Consistent Long-Term Forecasting of Ergodic Dynamical Systems
- URL: http://arxiv.org/abs/2312.13426v1
- Date: Wed, 20 Dec 2023 21:12:19 GMT
- Title: Consistent Long-Term Forecasting of Ergodic Dynamical Systems
- Authors: Prune Inzerilli, Vladimir Kostic, Karim Lounici, Pietro Novelli,
Massimiliano Pontil
- Abstract summary: We study the evolution of distributions under the action of an ergodic dynamical system.
By employing tools from Koopman and transfer operator theory one can evolve any initial distribution of the state forward in time.
We introduce a learning paradigm that neatly combines classical techniques of eigenvalue deflation from operator theory and feature centering from statistics.
- Score: 25.46655692714755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the evolution of distributions under the action of an ergodic
dynamical system, which may be stochastic in nature. By employing tools from
Koopman and transfer operator theory one can evolve any initial distribution of
the state forward in time, and we investigate how estimators of these operators
perform on long-term forecasting. Motivated by the observation that standard
estimators may fail at this task, we introduce a learning paradigm that neatly
combines classical techniques of eigenvalue deflation from operator theory and
feature centering from statistics. This paradigm applies to any operator
estimator based on empirical risk minimization, making such estimators satisfy
learning bounds that hold uniformly over the entire trajectory of future
distributions and abide by the conservation of mass for each forecasted
distribution. Numerical experiments illustrate the advantages of our approach in practice.
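To make the deflation-plus-centering idea concrete, here is a minimal numpy sketch written under simplifying assumptions (a noisy logistic map as the ergodic system, a fixed cosine dictionary as feature map); it is an illustration of the general recipe, not the authors' implementation, and every name in it is hypothetical.

```python
import numpy as np

# Minimal illustration of feature centering + eigenvalue deflation for
# long-term forecasting. Assumptions: 1-D noisy logistic map, fixed cosine
# dictionary; all names here are illustrative, not taken from the paper.

rng = np.random.default_rng(0)

def simulate(n_steps, noise=0.05):
    """Noisy logistic map on [0, 1] as a stand-in ergodic system."""
    x = np.empty(n_steps + 1)
    x[0] = rng.uniform()
    for t in range(n_steps):
        x[t + 1] = np.clip(3.8 * x[t] * (1.0 - x[t]) + noise * rng.normal(), 0.0, 1.0)
    return x

def features(x, d=50):
    """Generic dictionary: cosines at fixed frequencies."""
    freqs = np.arange(1, d + 1)
    return np.cos(np.outer(x, freqs)) / np.sqrt(d)

x = simulate(2000)
Phi_X, Phi_Y = features(x[:-1]), features(x[1:])

# Feature centering: subtract the empirical mean, i.e. project out the
# constant component that carries the eigenvalue-1 (invariant) mode.
mu = Phi_X.mean(axis=0)
A, B = Phi_X - mu, Phi_Y - mu

# Deflated one-step operator, fit by ridge-regularized least squares.
lam = 1e-3
K = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ B)

def forecast(phi0, steps):
    """Evolve only the centered part, then add the invariant mean back,
    so every forecast keeps the total mass fixed."""
    c = phi0 - mu
    for _ in range(steps):
        c = c @ K
    return c + mu

phi0 = features(np.array([0.2]))[0]
# If the deflated spectrum lies inside the unit disk, long-horizon forecasts
# relax toward the invariant mean instead of drifting or blowing up.
print(np.linalg.norm(forecast(phi0, 500) - mu))
```

The design point is that the estimated operator never has to reproduce the eigenvalue at 1: that mode is handled exactly through the empirical mean, so repeated application of the deflated estimator cannot push the forecast away from the invariant component.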
Related papers
- Koopman Ensembles for Probabilistic Time Series Forecasting [6.699751896019971]
We show that ensembles of independently trained models are highly overconfident, and that a training criterion that explicitly encourages the members to produce predictions with high inter-model variance greatly improves the ensembles' uncertainty estimates.
arXiv Detail & Related papers (2024-03-11T14:29:56Z) - Towards Generalizable and Interpretable Motion Prediction: A Deep
Variational Bayes Approach [54.429396802848224]
This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves the target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable.
arXiv Detail & Related papers (2024-03-10T04:16:04Z) - Estimating Koopman operators with sketching to provably learn large
scale dynamical systems [37.18243295790146]
The theory of Koopman operators makes it possible to deploy non-parametric machine learning algorithms to predict and analyze complex dynamical systems.
We boost the efficiency of different kernel-based Koopman operator estimators using random projections (sketching); a generic illustration of this idea appears in the sketch after this list.
We establish non-asymptotic error bounds giving a sharp characterization of the trade-offs between statistical learning rates and computational efficiency.
arXiv Detail & Related papers (2023-06-07T15:30:03Z) - Koopman Kernel Regression [6.116741319526748]
We show that Koopman operator theory offers a beneficial paradigm for characterizing forecasts via linear time-invariant (LTI) ODEs.
We derive a universal Koopman-invariant reproducing kernel Hilbert space (RKHS) that solely spans transformations into LTI dynamical systems.
Our experiments demonstrate superior forecasting performance compared to Koopman operator and sequential data predictors.
arXiv Detail & Related papers (2023-05-25T16:22:22Z) - What Should I Know? Using Meta-gradient Descent for Predictive Feature
Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z) - Learning Dynamical Systems via Koopman Operator Regression in
Reproducing Kernel Hilbert Spaces [52.35063796758121]
We formalize a framework to learn the Koopman operator from finite data trajectories of the dynamical system.
We link the risk with the estimation of the spectral decomposition of the Koopman operator.
Our results suggest that reduced rank regression (RRR) might be beneficial over other widely used estimators.
arXiv Detail & Related papers (2022-05-27T14:57:48Z) - Machine-Learned Prediction Equilibrium for Dynamic Traffic Assignment [3.704832909610284]
We study a dynamic traffic assignment model, where agents base their instantaneous routing decisions on real-time delay predictions.
We formulate a mathematically concise model and derive properties of the predictors that ensure a dynamic prediction equilibrium exists.
arXiv Detail & Related papers (2021-09-14T14:27:09Z) - Test-time Collective Prediction [73.74982509510961]
In a common machine learning setting, multiple parties want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z) - Generalization Properties of Optimal Transport GANs with Latent
Distribution Learning [52.25145141639159]
We study how the interplay between the latent distribution and the complexity of the pushforward map affects performance.
Motivated by our analysis, we advocate learning the latent distribution as well as the pushforward map within the GAN paradigm.
arXiv Detail & Related papers (2020-07-29T07:31:33Z) - Video Prediction via Example Guidance [156.08546987158616]
In video prediction tasks, one major challenge is to capture the multi-modal nature of future contents and dynamics.
In this work, we propose a simple yet effective framework that can efficiently predict plausible future states.
arXiv Detail & Related papers (2020-07-03T14:57:24Z)
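As referenced above for "Estimating Koopman operators with sketching", the random-projection idea can be illustrated with a generic Nyström-style shortcut. The following is a minimal sketch under simplifying assumptions (Gaussian kernel, uniformly sampled landmarks, a toy noisy linear system); it is not that paper's exact estimators, and all names below are illustrative.

```python
import numpy as np

# Generic Nystrom-style sketching for a kernel Koopman estimator.
# Assumptions (not from the paper): Gaussian kernel, uniform landmarks,
# toy 2-D linear dynamics; all names are illustrative.

rng = np.random.default_rng(1)

def gauss_kernel(A, B, sigma=0.5):
    """Gaussian kernel matrix between row-stacked point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Trajectory of a noisy 2-D rotation/contraction as toy data.
T = 3000
M = np.array([[0.9, 0.1], [-0.1, 0.9]])
X = np.empty((T + 1, 2))
X[0] = rng.normal(size=2)
for t in range(T):
    X[t + 1] = M @ X[t] + 0.1 * rng.normal(size=2)
X0, X1 = X[:-1], X[1:]

# Sketching: keep only m << T random landmark states, so we work with T x m
# kernel blocks instead of the full T x T kernel matrices.
m = 100
landmarks = X0[rng.choice(T, size=m, replace=False)]
Phi0 = gauss_kernel(X0, landmarks)   # (T, m)
Phi1 = gauss_kernel(X1, landmarks)   # (T, m)

# Reduced Koopman matrix via ridge least squares in the sketched space,
# at roughly O(T * m**2 + m**3) cost rather than O(T**3).
lam = 1e-4
K = np.linalg.solve(Phi0.T @ Phi0 + lam * np.eye(m), Phi0.T @ Phi1)

# Leading (in modulus) approximate Koopman eigenvalues.
print(np.sort(np.abs(np.linalg.eigvals(K)))[-3:])
```

Working with the T x m landmark blocks rather than the full T x T kernel matrices trades some statistical accuracy for a large drop in compute, which is the kind of statistics-versus-compute trade-off that paper's bounds characterize.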
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.