Sharp Spectral Rates for Koopman Operator Learning
- URL: http://arxiv.org/abs/2302.02004v4
- Date: Wed, 8 Nov 2023 10:04:00 GMT
- Title: Sharp Spectral Rates for Koopman Operator Learning
- Authors: Vladimir Kostic, Karim Lounici, Pietro Novelli, Massimiliano Pontil
- Abstract summary: We present for the first time non-asymptotic learning bounds for the Koopman eigenvalues and eigenfunctions.
Our results shed new light on the emergence of spurious eigenvalues.
- Score: 27.820383937933034
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nonlinear dynamical systems can be handily described by the associated
Koopman operator, whose action evolves every observable of the system forward
in time. Learning the Koopman operator and its spectral decomposition from data
is enabled by a number of algorithms. In this work we present for the first
time non-asymptotic learning bounds for the Koopman eigenvalues and
eigenfunctions. We focus on time-reversal-invariant stochastic dynamical
systems, including the important example of Langevin dynamics. We analyze two
popular estimators: Extended Dynamic Mode Decomposition (EDMD) and Reduced Rank
Regression (RRR). Our results critically hinge on novel minimax estimation bounds for the operator norm error, which may be of independent interest. Our
spectral learning bounds are driven by the simultaneous control of the operator
norm error and a novel metric distortion functional of the estimated
eigenfunctions. The bounds indicate that both EDMD and RRR have similar variance, but EDMD suffers from a larger bias, which might be detrimental to its
learning rate. Our results shed new light on the emergence of spurious
eigenvalues, an issue which is well known empirically. Numerical experiments
illustrate the implications of the bounds in practice.
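For concreteness, below is a minimal sketch of the two estimators over a finite feature dictionary rather than the paper's RKHS formulation; the feature map `phi`, the ridge parameter `reg`, and the toy data are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (not the paper's implementation): EDMD and Reduced Rank
# Regression estimators of the Koopman operator over a finite dictionary
# of features; the paper works in an RKHS with regularized empirical risk.
import numpy as np

def fit_edmd(Phi_X, Phi_Y, reg=1e-6):
    """EDMD: ridge-regularized least squares for Phi_Y ~ Phi_X @ K."""
    n, d = Phi_X.shape
    C_xx = Phi_X.T @ Phi_X / n          # empirical covariance
    C_xy = Phi_X.T @ Phi_Y / n          # empirical cross-covariance
    return np.linalg.solve(C_xx + reg * np.eye(d), C_xy)

def fit_rrr(Phi_X, Phi_Y, rank, reg=1e-6):
    """Classical reduced rank regression: rank-constrained least squares."""
    K_ols = fit_edmd(Phi_X, Phi_Y, reg)
    fitted = Phi_X @ K_ols
    # Project fitted values onto their leading right singular subspace.
    _, _, Vt = np.linalg.svd(fitted, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]
    return K_ols @ P

# Eigenvalues of K estimate Koopman eigenvalues; an eigenvector w gives the
# estimated eigenfunction x -> phi(x) @ w on the span of the dictionary.
rng = np.random.default_rng(0)
x = rng.standard_normal((500, 2))                      # states x_t
y = 0.9 * x + 0.05 * rng.standard_normal((500, 2))     # states x_{t+1}
phi = lambda z: np.hstack([z, z**2])                   # hypothetical dictionary
eigvals, eigvecs = np.linalg.eig(fit_rrr(phi(x), phi(y), rank=2))
```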
Related papers
- Beyond expectations: Residual Dynamic Mode Decomposition and Variance
for Stochastic Dynamical Systems [8.259767785187805]
Dynamic Mode Decomposition (DMD) is the poster child of projection-based methods.
We introduce the concept of variance-pseudospectra to gauge statistical coherency.
Our study concludes with practical applications using both simulated and experimental data.
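Since this entry centers on DMD, here is a generic exact-DMD sketch (rank-truncated SVD of snapshot matrices) for context; it is not the Residual DMD or variance-pseudospectra machinery of the cited paper, and the snapshot layout is an assumption.

```python
# Generic exact-DMD sketch. Assumption: X holds states x_0..x_{m-1} as
# columns and Y the one-step-ahead states x_1..x_m.
import numpy as np

def dmd(X, Y, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].T
    A_tilde = Ur.conj().T @ Y @ Vr / sr     # r x r projected operator
    eigvals, W = np.linalg.eig(A_tilde)     # DMD eigenvalues
    modes = (Y @ Vr / sr) @ W               # exact DMD modes
    return eigvals, modes
```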
arXiv Detail & Related papers (2023-08-21T13:05:12Z)
- Understanding Augmentation-based Self-Supervised Representation Learning
via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z)
- Koopman Kernel Regression [6.116741319526748]
We show that Koopman operator theory offers a beneficial paradigm for characterizing forecasts via linear time-invariant (LTI) ODEs.
We derive a universal Koopman-invariant reproducing kernel Hilbert space (RKHS) that solely spans transformations into LTI dynamical systems.
Our experiments demonstrate superior forecasting performance compared to Koopman operator and sequential data predictors.
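As a point of reference only, here is a generic kernel-based one-step forecaster via kernel ridge regression with an RBF kernel; the Koopman-invariant RKHS derived in the cited paper is not reproduced here.

```python
# Generic kernel ridge regression one-step forecaster with an RBF kernel.
# X, Y are (n_samples, dim) arrays of states and their one-step successors.
import numpy as np

def rbf(A, B, gamma=1.0):
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def fit_forecaster(X, Y, reg=1e-3, gamma=1.0):
    """Learn x_{t+1} ~ f(x_t) in the dual: alpha = (G + reg I)^-1 Y."""
    G = rbf(X, X, gamma)
    alpha = np.linalg.solve(G + reg * np.eye(len(X)), Y)
    return lambda x_new: rbf(np.atleast_2d(x_new), X, gamma) @ alpha
```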
arXiv Detail & Related papers (2023-05-25T16:22:22Z)
- Temporal Difference Learning with Compressed Updates: Error-Feedback meets Reinforcement Learning [47.904127007515925]
We study a variant of the classical temporal difference (TD) learning algorithm with a perturbed update direction.
We prove that compressed TD algorithms, coupled with an error-feedback mechanism used widely in optimization, exhibit the same non-asymptotic approximation guarantees as their counterparts.
Notably, these are the first finite-time results in RL that account for general compression operators and error-feedback in tandem with linear function approximation and Markovian sampling.
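A hedged sketch of the pattern described here: TD(0) with linear function approximation whose update direction is compressed (top-k as a stand-in) while an error-feedback buffer carries over what the compressor dropped; the environment interface `env_step` and feature map `phi` are hypothetical, and this is not the cited paper's exact algorithm or step-size schedule.

```python
# TD(0) with linear function approximation, top-k compression of the update
# direction, and an error-feedback memory for the discarded residual.
import numpy as np

def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_td0(env_step, phi, theta, alpha=0.05, gamma=0.99, k=2, steps=1000):
    e = np.zeros_like(theta)                 # error-feedback memory
    s = env_step(None)                       # assumed: None -> initial state
    for _ in range(steps):
        s_next, r = env_step(s)              # assumed: state -> (next state, reward)
        delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta
        g = delta * phi(s)                   # uncompressed TD(0) direction
        c = top_k(g + e, k)                  # compress direction plus memory
        theta = theta + alpha * c
        e = g + e - c                        # retain the discarded residual
        s = s_next
    return theta
```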
arXiv Detail & Related papers (2023-01-03T04:09:38Z)
- Learning Dynamical Systems via Koopman Operator Regression in
Reproducing Kernel Hilbert Spaces [52.35063796758121]
We formalize a framework to learn the Koopman operator from finite data trajectories of the dynamical system.
We link the risk with the estimation of the spectral decomposition of the Koopman operator.
Our results suggest RRR might be beneficial over other widely used estimators.
arXiv Detail & Related papers (2022-05-27T14:57:48Z)
- Error-in-variables modelling for operator learning [0.35880734696551125]
Failure to account for noisy independent variables can lead to biased parameter estimates.
In this work, we derive an analogue of attenuation bias for linear operator regression with white noise in both the independent and dependent variables.
We propose error-in-variables (EiV) models for two operator regression methods, MOR-Physics and DeepONet, and demonstrate that these new models reduce bias in the presence of noisy independent variables.
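A small numerical illustration of the attenuation bias mentioned above, in plain linear regression: noise on the independent variable shrinks the least-squares slope toward zero; the constants are arbitrary.

```python
# Attenuation bias in ordinary least squares: measurement noise on x shrinks
# the fitted slope by roughly var(x) / (var(x) + var(noise)).
import numpy as np

rng = np.random.default_rng(0)
beta = 2.0
x = rng.standard_normal(100_000)
y = beta * x + 0.1 * rng.standard_normal(x.size)
x_noisy = x + rng.standard_normal(x.size)        # unit-variance measurement noise

slope_clean = np.polyfit(x, y, 1)[0]             # close to 2.0
slope_noisy = np.polyfit(x_noisy, y, 1)[0]       # close to 2.0 * 1/(1+1) = 1.0
print(slope_clean, slope_noisy)
```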
arXiv Detail & Related papers (2022-04-22T19:54:34Z)
- Incorporating NODE with Pre-trained Neural Differential Operator for
Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions, learning the mapping from trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives by properly tuning the complexity of the library.
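A hedged sketch of how such pre-training data could be generated: draw functions from a small symbolic library and pair their sampled values (network input) with analytic derivatives (supervision target); the library, grid, and downstream regressor are illustrative assumptions, not the paper's exact setup.

```python
# Build (trajectory samples, derivatives) pairs from a tiny symbolic library.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 64)

library = [
    lambda a: (np.sin(a * t), a * np.cos(a * t)),        # sin(a t)
    lambda a: (np.exp(-a * t), -a * np.exp(-a * t)),     # exp(-a t)
    lambda a: (t**2 + a * t, 2 * t + a),                 # t^2 + a t
]

def sample_pair():
    f = library[rng.integers(len(library))]
    values, derivatives = f(rng.uniform(0.5, 3.0))
    return values, derivatives

X, Y = map(np.stack, zip(*(sample_pair() for _ in range(1000))))
# X, Y can feed any regressor that maps sampled trajectories to derivatives.
```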
arXiv Detail & Related papers (2021-06-08T08:04:47Z)
- On the Generalization of Stochastic Gradient Descent with Momentum [58.900860437254885]
We first show that there exists a convex loss function for which algorithmic stability fails to establish generalization guarantees.
For smooth Lipschitz loss functions, we analyze a modified momentum-based update rule, and show that it admits an upper-bound on the generalization error.
For the special case of strongly convex loss functions, we find a range of momentum values such that multiple epochs of standard SGDM, as a special case of SGDEM, also generalize.
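For reference, the standard heavy-ball (SGDM) update discussed in this entry, sketched in isolation; the gradient oracle `grad` is a hypothetical placeholder, and the paper's modified SGDEM rule is not reproduced here.

```python
# One step of stochastic gradient descent with (heavy-ball) momentum.
def sgdm_step(theta, velocity, grad, lr=0.01, momentum=0.9):
    velocity = momentum * velocity + grad(theta)   # accumulate the momentum buffer
    theta = theta - lr * velocity                  # step along the buffer
    return theta, velocity
```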
arXiv Detail & Related papers (2021-02-26T18:58:29Z)
- On Learning Rates and Schrödinger Operators [105.32118775014015]
We present a general theoretical analysis of the effect of the learning rate.
We find that the learning rate tends to zero for a broad class of non-neural functions.
arXiv Detail & Related papers (2020-04-15T09:52:37Z)
- Sparsity-promoting algorithms for the discovery of informative Koopman-invariant subspaces [0.0]
We propose a framework based on multi-task feature learning to extract the most informative Koopman-invariant subspace.
We show a relationship between the present algorithm, sparsity-promoting DMD, and an empirical criterion for KDMD.
arXiv Detail & Related papers (2020-02-25T03:02:09Z)