Equivariance and partial observations in Koopman operator theory for partial differential equations
- URL: http://arxiv.org/abs/2307.15325v3
- Date: Tue, 05 Nov 2024 10:39:38 GMT
- Title: Equivariance and partial observations in Koopman operator theory for partial differential equations
- Authors: Sebastian Peitz, Hans Harder, Feliks Nüske, Friedrich Philipp, Manuel Schaller, Karl Worthmann
- Abstract summary: We show that symmetries in the system dynamics can be carried over to the Koopman operator.
We address the highly relevant case where we cannot measure the full state.
- Score: 1.099532646524593
- License:
- Abstract: The Koopman operator has become an essential tool for data-driven analysis, prediction and control of complex systems. The main reason is the enormous potential of identifying linear function space representations of nonlinear dynamics from measurements. This equally applies to ordinary, stochastic, and partial differential equations (PDEs). Until now, with only a few exceptions, the PDE case has mostly been treated rather superficially, and the specific structure of the underlying dynamics is largely ignored. In this paper, we show that symmetries in the system dynamics can be carried over to the Koopman operator, which allows us to significantly increase the model efficacy. Moreover, we address the highly relevant case where we cannot measure the full state (i.e., only partial observations are available, as is very common for experimental data), so that alternative approaches such as delay coordinates have to be considered. We derive rigorous statements on the required number of observables in this situation, based on embedding theory. We present numerical evidence using various examples including the wave equation and the Kuramoto-Sivashinsky equation.
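The partial-observation setting described in the abstract can be illustrated with a small sketch: a hidden linear system is observed through a single scalar measurement, delay coordinates are stacked into a surrogate state, and a Koopman-style linear model is fit by least squares. The toy system, the delay depth, and all variable names below are our own assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hidden linear dynamics, observed only through the first coordinate.
A = np.array([[0.9, -0.2], [0.2, 0.9]])
x = rng.standard_normal(2)
ys = []
for _ in range(200):
    ys.append(x[0])          # partial observation: one scalar per step
    x = A @ x
ys = np.array(ys)

# Delay embedding: stack d consecutive measurements as a surrogate state.
d = 2
Z = np.column_stack([ys[i:len(ys) - d + i] for i in range(d)])  # shape (T, d)

# DMD-style least-squares fit of a linear map K with z_{t+1} ≈ K z_t.
Z0, Z1 = Z[:-1].T, Z[1:].T
K = Z1 @ np.linalg.pinv(Z0)

# For this linear toy system, two delays suffice and the fit is essentially exact.
err = np.linalg.norm(Z1 - K @ Z0) / np.linalg.norm(Z1)
print(err)
```

The point of the sketch is the embedding-theory message of the paper: with enough delayed copies of a partial measurement, a linear model on the delay vector can capture dynamics whose full state is never observed.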
Related papers
- Conformal inference for regression on Riemannian Manifolds [49.7719149179179]
We investigate prediction sets for regression scenarios when the response variable, denoted by $Y$, resides in a manifold, and the covariate, denoted by $X$, lies in Euclidean space.
We prove the almost sure convergence of the empirical version of these regions on the manifold to their population counterparts.
arXiv Detail & Related papers (2023-10-12T10:56:25Z) - Beyond expectations: Residual Dynamic Mode Decomposition and Variance for Stochastic Dynamical Systems [8.259767785187805]
Dynamic Mode Decomposition (DMD) is the poster child of projection-based methods.
We introduce the concept of variance-pseudospectra to gauge statistical coherency.
Our study concludes with practical applications using both simulated and experimental data.
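Since DMD is named above as the poster child of projection-based methods, a minimal exact-DMD sketch may help: snapshots of a toy linear system are factored by SVD and a low-dimensional operator is formed whose eigenvalues approximate the dynamics. This is generic DMD on a toy system of our own choosing; the paper's residual and variance-pseudospectra diagnostics are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dynamics with a complex-conjugate eigenvalue pair 0.95 ± 0.3i.
A = np.array([[0.95, -0.3], [0.3, 0.95]])
X = np.empty((2, 101))
X[:, 0] = rng.standard_normal(2)
for t in range(100):
    X[:, t + 1] = A @ X[:, t]

X0, X1 = X[:, :-1], X[:, 1:]

# Exact DMD: SVD of the snapshot matrix, then a projected linear operator.
U, s, Vh = np.linalg.svd(X0, full_matrices=False)
Atilde = U.T @ X1 @ Vh.T @ np.diag(1.0 / s)
eigvals = np.linalg.eigvals(Atilde)

# For this linear toy system the DMD eigenvalues match those of A.
print(sorted(eigvals, key=lambda z: z.imag))
```

On nonlinear or stochastic data the same recipe applies, but the quality of the recovered spectrum is exactly what residual-based diagnostics such as those in the paper are designed to assess.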
arXiv Detail & Related papers (2023-08-21T13:05:12Z) - Algorithmic Stability of Heavy-Tailed SGD with General Loss Functions [13.431453056203226]
Heavy-tail phenomena in stochastic gradient descent (SGD) have been reported in several empirical studies.
This paper develops generalization bounds for heavy-tailed SGD that cover general loss functions.
These results shed more light on the empirical observations, thanks to the generality of the loss functions considered.
arXiv Detail & Related papers (2023-01-27T17:57:35Z) - Discovering Latent Causal Variables via Mechanism Sparsity: A New Principle for Nonlinear ICA [81.4991350761909]
Independent component analysis (ICA) refers to an ensemble of methods which formalize this goal and provide estimation procedures for practical application.
We show that the latent variables can be recovered up to a permutation if one regularizes the latent mechanisms to be sparse.
arXiv Detail & Related papers (2021-07-21T14:22:14Z) - Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but perform poorly on subpartitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z) - Deconfounded Score Method: Scoring DAGs with Dense Unobserved Confounding [101.35070661471124]
We show that unobserved confounding leaves a characteristic footprint in the observed data distribution that allows for disentangling spurious and causal effects.
We propose an adjusted score-based causal discovery algorithm that may be implemented with general-purpose solvers and scales to high-dimensional problems.
arXiv Detail & Related papers (2021-03-28T11:07:59Z) - Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z) - Probing symmetries of quantum many-body systems through gap ratio statistics [0.0]
We extend the study of the gap ratio distribution P(r) to the case where discrete symmetries are present.
We present a large set of applications in many-body physics, ranging from quantum clock models and anyonic chains to periodically-driven spin systems.
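The gap ratio $r_n = \min(s_n, s_{n+1}) / \max(s_n, s_{n+1})$, built from consecutive level spacings $s_n$, is easy to compute numerically. The sketch below contrasts uncorrelated (Poisson) levels with the spectrum of a random symmetric (GOE-like) matrix; the symmetry-resolved extensions studied in the paper are beyond this toy illustration, and the matrix sizes are our own choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_gap_ratio(levels):
    # Consecutive spacings of the sorted spectrum, then r_n in [0, 1].
    s = np.diff(np.sort(levels))
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
    return r.mean()

# Uncorrelated (Poisson) levels: <r> ≈ 2 ln 2 - 1 ≈ 0.386.
poisson = rng.uniform(0.0, 1.0, 20000)

# GOE-like levels from a symmetrized random matrix: <r> ≈ 0.53.
M = rng.standard_normal((1000, 1000))
goe = np.linalg.eigvalsh((M + M.T) / 2)

print(round(mean_gap_ratio(poisson), 2), round(mean_gap_ratio(goe), 2))
```

Because the gap ratio needs no unfolding of the spectrum, it is a convenient first diagnostic before the symmetry-resolved analysis of $P(r)$ discussed in the paper.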
arXiv Detail & Related papers (2020-08-25T17:11:40Z) - Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success is still unclear.
We show that heavy-tailed behavior commonly arises in the parameters due to multiplicative noise in the optimization dynamics.
A detailed analysis examines key factors, including step size and data, and finds similar behavior on state-of-the-art neural network models.
arXiv Detail & Related papers (2020-06-11T09:58:01Z) - Recent advances in the calculation of dynamical correlation functions [0.0]
Time-dependent correlation functions play a central role in both the theoretical and experimental understanding of dynamic properties.
The method of recurrence relations has, at its foundation, the solution of the Heisenberg equation of motion for an operator in a many-body interacting system.
In this work, we discuss the most relevant applications of the method of recurrence relations and numerical calculations based on exact diagonalizations.
arXiv Detail & Related papers (2020-05-28T18:33:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.