Convergent Methods for Koopman Operators on Reproducing Kernel Hilbert Spaces
- URL: http://arxiv.org/abs/2506.15782v1
- Date: Wed, 18 Jun 2025 18:00:08 GMT
- Title: Convergent Methods for Koopman Operators on Reproducing Kernel Hilbert Spaces
- Authors: Nicolas Boullé, Matthew J. Colbrook, Gustav Conradie,
- Abstract summary: Data-driven spectral analysis of Koopman operators is a powerful tool for understanding numerous real-world dynamical systems. We introduce the first general, provably convergent, data-driven algorithms for computing spectral properties of Koopman and Perron--Frobenius operators on RKHSs.
- Score: 3.58439716487063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven spectral analysis of Koopman operators is a powerful tool for understanding numerous real-world dynamical systems, from neuronal activity to variations in sea surface temperature. The Koopman operator acts on a function space and is most commonly studied on the space of square-integrable functions. However, defining it on a suitable reproducing kernel Hilbert space (RKHS) offers numerous practical advantages, including pointwise predictions with error bounds, improved spectral properties that facilitate computations, and more efficient algorithms, particularly in high dimensions. We introduce the first general, provably convergent, data-driven algorithms for computing spectral properties of Koopman and Perron--Frobenius operators on RKHSs. These methods efficiently compute spectra and pseudospectra with error control, as well as spectral measures, while exploiting the RKHS structure to avoid the large-data limits required in the $L^2$ setting. The function space is determined by a user-specified kernel, eliminating the need for quadrature-based sampling as in $L^2$ and enabling greater flexibility with finite, externally provided datasets. Using the Solvability Complexity Index hierarchy, we construct adversarial dynamical systems for these problems to show that no algorithm can succeed in fewer limits, thereby proving the optimality of our algorithms. Notably, this impossibility extends to randomized algorithms and datasets. We demonstrate the effectiveness of our algorithms on challenging, high-dimensional datasets arising from real-world measurements and high-fidelity numerical simulations, including turbulent channel flow, molecular dynamics of a binding protein, Antarctic sea ice concentration, and Northern Hemisphere sea surface height. The algorithms are publicly available in the software package $\texttt{SpecRKHS}$.
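The abstract's data-driven spectral computations build on kernelized variants of extended dynamic mode decomposition (EDMD). As a minimal sketch of the general idea (an illustration, not the paper's SpecRKHS algorithms), the following estimates Koopman eigenvalues from snapshot pairs via a kernel Gram matrix; the linear kernel and the 0.9-multiplier toy dynamics are assumptions chosen for the demo.

```python
import numpy as np

def kernel_edmd_eigs(X, Y, kernel):
    """Estimate Koopman eigenvalues from snapshot pairs (X[i] -> Y[i]).

    G[i, j] = k(x_i, x_j) is the Gram matrix and A[i, j] = k(y_i, x_j)
    the cross-Gram matrix; pinv(G) @ A is a finite-dimensional
    approximation of the Koopman operator on the data-spanned subspace.
    """
    G = kernel(X, X)
    A = kernel(Y, X)
    K = np.linalg.pinv(G) @ A
    return np.linalg.eigvals(K)

# Demo on 1D linear dynamics x -> 0.9 x with a linear kernel,
# whose dominant Koopman eigenvalue is the multiplier 0.9.
linear_kernel = lambda U, V: U @ V.T
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
Y = 0.9 * X
eigs = kernel_edmd_eigs(X, Y, linear_kernel)
dominant = max(eigs, key=abs)
```

Running this, the dominant eigenvalue recovers the multiplier of the linear map; the remaining eigenvalues are numerically zero because the linear kernel on 1D data spans a one-dimensional subspace.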
Related papers
- From Local Interactions to Global Operators: Scalable Gaussian Process Operator for Physical Systems [7.807210884802377]
We introduce a novel, scalable GPO that capitalizes on sparsity, locality, and structural information through judicious kernel design. We demonstrate that our framework consistently achieves high accuracy across varying discretization scales.
arXiv Detail & Related papers (2025-06-18T22:40:52Z) - Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Diffusion-based Semi-supervised Spectral Algorithm for Regression on Manifolds [2.0649432688817444]
We introduce a novel diffusion-based spectral algorithm to tackle regression analysis on high-dimensional data.
Our method uses the local estimation property of the heat kernel, offering an adaptive, data-driven approach to overcoming the obstacles posed by high dimensionality.
Our algorithm performs in an entirely data-driven manner, operating directly within the intrinsic manifold structure of the data.
arXiv Detail & Related papers (2024-10-18T15:29:04Z) - Truncated Kernel Stochastic Gradient Descent on Spheres [1.4583059436979549]
Inspired by the structure of spherical harmonics, we propose the truncated kernel stochastic gradient descent (T-kernel SGD) algorithm. T-kernel SGD uses a least-squares loss function for spherical data fitting.
arXiv Detail & Related papers (2024-10-02T14:09:51Z) - Spectral Algorithms on Manifolds through Diffusion [1.7227952883644062]
We study the convergence performance of spectral algorithms in reproducing kernel Hilbert spaces.
We employ integral operator techniques to derive tight convergence upper bounds concerning generalized norms.
Our study confirms that the spectral algorithms are practically significant in the broader context of high-dimensional approximation.
arXiv Detail & Related papers (2024-03-06T12:43:53Z) - Linear quadratic control of nonlinear systems with Koopman operator learning and the Nyström method [16.0198373552099]
We show how random subspaces can be used to achieve huge computational savings. Our main technical contribution is deriving theoretical guarantees on the effect of the Nyström approximation.
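For context on the Nyström approximation mentioned above, here is a generic sketch (an illustration under assumed toy data, not that paper's method): a positive semidefinite kernel matrix is approximated from a random subset of landmark columns, and the approximation is exact whenever the landmarks span the kernel's column space.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
K = X @ X.T                          # linear kernel Gram matrix, rank <= 3

m = 10                               # number of randomly chosen landmarks
idx = rng.choice(200, size=m, replace=False)
C = K[:, idx]                        # n x m sampled columns
W = K[np.ix_(idx, idx)]              # m x m block among the landmarks
K_nys = C @ np.linalg.pinv(W) @ C.T  # Nystrom approximation of K

rel_err = np.linalg.norm(K - K_nys) / np.linalg.norm(K)
```

Here the recovery is exact (up to round-off) because the number of landmarks exceeds the rank of K; in general the approximation trades the O(n^2) cost of the full Gram matrix for O(nm) kernel evaluations.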
arXiv Detail & Related papers (2024-03-05T09:28:40Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) methods and adaptive regularization using cubics (ARC) have proven to have very appealing theoretical properties.
We show that TR and ARC methods can simultaneously provide inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Score-based Diffusion Models in Function Space [137.70916238028306]
Diffusion models have recently emerged as a powerful framework for generative modeling. This work introduces a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space. We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
arXiv Detail & Related papers (2023-02-14T22:12:16Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM consistently achieves state-of-the-art results, with an average relative gain of 11.5% across seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Rigorous data-driven computation of spectral properties of Koopman operators for dynamical systems [2.0305676256390934]
This paper describes algorithms with rigorous convergence guarantees for computing spectral information of Koopman operators.
We compute smoothed approximations of spectral measures associated with general measure-preserving dynamical systems.
We demonstrate our algorithms on the tent map, circle rotations, Gauss iterated map, nonlinear pendulum, double pendulum, and Lorenz system.
arXiv Detail & Related papers (2021-11-29T19:01:26Z) - Sequential Subspace Search for Functional Bayesian Optimization Incorporating Experimenter Intuition [63.011641517977644]
Our algorithm generates a sequence of finite-dimensional random subspaces of functional space spanned by a set of draws from the experimenter's Gaussian Process.
Standard Bayesian optimisation is applied on each subspace, and the best solution found is used as a starting point (origin) for the next subspace.
We test our algorithm in simulated and real-world experiments, namely blind function matching, finding the optimal precipitation-strengthening function for an aluminium alloy, and learning rate schedule optimisation for deep networks.
arXiv Detail & Related papers (2020-09-08T06:54:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.