A joint optimization approach to identifying sparse dynamics using least squares kernel collocation
- URL: http://arxiv.org/abs/2511.18555v1
- Date: Sun, 23 Nov 2025 18:04:15 GMT
- Title: A joint optimization approach to identifying sparse dynamics using least squares kernel collocation
- Authors: Alexander W. Hsu, Ike W. Griss Salas, Jacob M. Stevens-Haas, J. Nathan Kutz, Aleksandr Aravkin, Bamdad Hosseini
- Abstract summary: We develop an all-at-once modeling framework for learning systems of ordinary differential equations (ODE) from scarce, partial, and noisy observations of the states. The proposed methodology combines sparse recovery strategies for the ODE over a function library with techniques from reproducing kernel Hilbert space (RKHS) theory for estimating the state and discretizing the ODE.
- Score: 70.13783231186183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop an all-at-once modeling framework for learning systems of ordinary differential equations (ODE) from scarce, partial, and noisy observations of the states. The proposed methodology combines sparse recovery strategies for the ODE over a function library with techniques from reproducing kernel Hilbert space (RKHS) theory for estimating the state and discretizing the ODE. Our numerical experiments reveal that the proposed strategy leads to significant gains in accuracy, sample efficiency, and robustness to noise, both in learning the equation and in estimating the unknown states. This work demonstrates capabilities well beyond existing and widely used algorithms while extending the modeling flexibility of other recent developments in equation discovery.
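The two ingredients named in the abstract can be illustrated with a short, self-contained sketch. The snippet below is a hypothetical example only (Gaussian kernel, kernel ridge smoothing, a polynomial library, and sequentially thresholded least squares, all chosen for illustration) that recovers x' = -2x from noisy samples; unlike the paper's all-at-once formulation, it performs the state estimation and the sparse regression sequentially.

```python
import numpy as np

# A minimal sketch, not the paper's joint solver: (1) smooth the noisy
# observations by kernel ridge regression in an RKHS, (2) differentiate the
# kernel interpolant analytically at collocation points, (3) run sequentially
# thresholded least squares over a small polynomial library.

def K(s, t, ell=0.4):
    return np.exp(-(s[:, None] - t[None, :]) ** 2 / (2 * ell ** 2))

def dK(s, t, ell=0.4):
    # derivative of the Gaussian kernel with respect to its first argument
    return -(s[:, None] - t[None, :]) / ell ** 2 * K(s, t, ell)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 60)
y = np.exp(-2.0 * t) + 0.02 * rng.standard_normal(t.size)    # noisy samples of x' = -2x

alpha = np.linalg.solve(K(t, t) + 1e-2 * np.eye(t.size), y)  # kernel ridge fit
x_hat = K(t, t) @ alpha                                      # estimated state
dx_hat = dK(t, t) @ alpha                                    # collocated derivative

Theta = np.column_stack([np.ones_like(x_hat), x_hat, x_hat ** 2])  # library [1, x, x^2]
xi = np.linalg.lstsq(Theta, dx_hat, rcond=None)[0]
for _ in range(10):                                          # STLSQ: threshold, then refit
    small = np.abs(xi) < 0.1
    xi[small] = 0.0
    xi[~small] = np.linalg.lstsq(Theta[:, ~small], dx_hat, rcond=None)[0]

print("recovered coefficients for [1, x, x^2]:", np.round(xi, 3))  # roughly [0, -2, 0]
```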
Related papers
- Nonparametric learning of stochastic differential equations from sparse and noisy data [2.389598109913754]
We learn the entire drift function directly from data without strong structural assumptions. We develop an Expectation-Maximization (EM) algorithm that employs a novel Sequential Monte Carlo (SMC) method. The resulting EM-SMC-RKHS procedure enables accurate estimation of the drift function of dynamical systems in low-data regimes.
arXiv Detail & Related papers (2025-08-15T17:01:59Z)
- Learning State-Space Models of Dynamic Systems from Arbitrary Data using Joint Embedding Predictive Architectures [1.8434042562191812]
This paper introduces a novel technique for creating world models using continuous-time dynamic systems from arbitrary observation data. The proposed method integrates sequence embeddings with neural ordinary differential equations (neural ODEs). It employs loss functions that enforce contractive embeddings and Lipschitz constants in state transitions to construct a well-organized latent state space.
arXiv Detail & Related papers (2025-08-14T09:46:11Z)
- Equation discovery framework EPDE: Towards a better equation discovery [50.79602839359522]
We enhance the EPDE algorithm, an evolutionary optimization-based discovery framework. Our approach generates terms using fundamental building blocks such as elementary functions and individual differentials. We validate our algorithm's noise resilience and overall performance by comparing its results with those from the state-of-the-art equation discovery framework SINDy.
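As a purely illustrative aside (not the EPDE implementation, and with an arbitrary choice of building blocks), candidate terms can be generated as products of elementary functions and collected into a library matrix:

```python
import itertools
import numpy as np

# Illustrative term generation from elementary building blocks (an arbitrary
# set of monomials and trigonometric functions): composite candidate terms are
# formed as pairwise products and stacked into a library matrix.

def candidate_library(x):
    blocks = {"1": np.ones_like(x), "x": x, "x^2": x ** 2,
              "sin(x)": np.sin(x), "cos(x)": np.cos(x)}
    names, cols = [], []
    for (n1, c1), (n2, c2) in itertools.combinations_with_replacement(blocks.items(), 2):
        names.append(f"{n1}*{n2}")
        cols.append(c1 * c2)
    return names, np.column_stack(cols)

names, Theta = candidate_library(np.linspace(-1.0, 1.0, 50))
print(len(names), "candidate terms, e.g.", names[:4])   # 15 terms for 5 blocks
```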
arXiv Detail & Related papers (2024-12-28T15:58:44Z)
- Response Theory via Generative Score Modeling [0.0]
We introduce an approach for analyzing the responses of dynamical systems to external perturbations that combines score-based generative modeling with the Generalized Fluctuation-Dissipation Theorem (GFDT).
The methodology enables accurate estimation of system responses, including those with non-Gaussian statistics.
arXiv Detail & Related papers (2024-02-01T21:38:10Z)
- Distributionally Robust Model-based Reinforcement Learning with Large State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment at deployment.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
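A minimal sketch of those two ingredients, under simplifying assumptions (one-dimensional input, scalar output, RBF kernel, fixed noise level), pairs a Gaussian-process posterior with a maximum-variance rule for selecting the next query point; the paper's multi-output and distributionally robust machinery is not reproduced here.

```python
import numpy as np

# Gaussian-process regression with a maximum-variance acquisition rule:
# repeatedly query the input where the posterior variance is largest.

def rbf(a, b, ell=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def gp_posterior(x_train, y_train, x_test, noise=1e-2):
    K = rbf(x_train, x_train) + noise * np.eye(x_train.size)
    Ks = rbf(x_test, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v ** 2, axis=0)           # prior variance rbf(x, x) = 1
    return mean, var

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x)                      # stand-in for an unknown transition map
x_train = np.array([0.1, 0.9])
y_train = f(x_train) + 0.05 * rng.standard_normal(x_train.size)
x_grid = np.linspace(0.0, 1.0, 200)

for _ in range(5):                               # maximum variance reduction loop
    mean, var = gp_posterior(x_train, y_train, x_grid)
    x_next = x_grid[np.argmax(var)]              # query where uncertainty is largest
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, f(x_next) + 0.05 * rng.standard_normal())

print(np.round(np.sort(x_train), 2))             # queries spread toward uncertain regions
```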
arXiv Detail & Related papers (2023-09-05T13:42:11Z) - A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
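For context, the classic mean-shift iteration referenced above is a kernel-weighted averaging step; the toy implementation below (Gaussian kernel, synthetic two-cluster data) is included only to make that algorithm concrete and is not the paper's sampler.

```python
import numpy as np

# Plain mean-shift mode seeking: move a point toward the kernel-weighted
# average of the data until it settles at a local mode of the density.

def mean_shift(x, data, bandwidth=0.5, n_steps=50):
    for _ in range(n_steps):
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x = (w[:, None] * data).sum(axis=0) / w.sum()   # kernel-weighted average
    return x

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])
print(mean_shift(np.array([1.0, 1.5]), data))           # converges to the mode near (2, 2)
```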
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- Distributed Bayesian Learning of Dynamic States [65.7870637855531]
The proposed algorithm performs a distributed Bayesian filtering task for finite-state hidden Markov models.
It can be used for sequential state estimation, as well as for modeling opinion formation over social networks under dynamic environments.
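The sequential estimation task can be made concrete with the standard single-agent forward filtering recursion for a finite-state hidden Markov model; the distributed, network-based version studied in the paper is not reproduced here, and the transition and observation matrices below are arbitrary.

```python
import numpy as np

# Forward filtering for a two-state hidden Markov model:
# predict through the transition model, then reweight by the observation likelihood.

A = np.array([[0.9, 0.1],        # transition matrix, A[i, j] = p(next = j | current = i)
              [0.2, 0.8]])
B = np.array([[0.7, 0.3],        # observation likelihoods, B[state, obs] = p(obs | state)
              [0.1, 0.9]])
belief = np.array([0.5, 0.5])    # prior over the two hidden states

for obs in [0, 0, 1, 1, 1]:      # a short observation sequence
    belief = A.T @ belief        # predict step
    belief *= B[:, obs]          # update step
    belief /= belief.sum()       # normalize to a probability vector
    print(np.round(belief, 3))
```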
arXiv Detail & Related papers (2022-12-05T19:40:17Z)
- A Kernel Learning Method for Backward SDE Filter [1.7035011973665108]
We develop a kernel learning backward SDE filter method to propagate the state of a dynamical system based on its partial noisy observations.
We introduce a kernel learning method to learn a continuous global approximation for the conditional probability density function of the target state.
Numerical experiments demonstrate that the kernel learning backward SDE filter is highly effective and efficient.
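The notion of a continuous global approximation to a conditional density can be illustrated, in a simplified way, by a weighted Gaussian kernel density estimate built from particles; the paper's backward-SDE propagation and its specific kernel learning scheme are beyond this sketch.

```python
import numpy as np

# Weighted Gaussian kernel density estimate: a continuous approximation of a
# filtering density from particle locations and weights (here uniform).

def kde(x_query, particles, weights, bandwidth=0.2):
    diff = x_query[:, None] - particles[None, :]
    kernels = np.exp(-diff ** 2 / (2 * bandwidth ** 2)) / np.sqrt(2 * np.pi * bandwidth ** 2)
    return kernels @ weights                        # density evaluated at the query points

rng = np.random.default_rng(0)
particles = rng.normal(1.0, 0.5, 300)               # samples of the conditional state
weights = np.full(300, 1.0 / 300)                   # e.g. importance weights, here uniform
grid = np.linspace(-1.0, 3.0, 5)
print(np.round(kde(grid, particles, weights), 3))
```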
arXiv Detail & Related papers (2022-01-25T19:49:19Z)
- Feature Engineering with Regularity Structures [4.082216579462797]
We investigate the use of models from the theory of regularity structures as features in machine learning tasks.
We provide a flexible definition of a model feature vector associated to a space-time signal, along with two algorithms which illustrate ways in which these features can be combined with linear regression.
We apply these algorithms in several numerical experiments designed to learn solutions to PDEs with a given forcing and boundary data.
arXiv Detail & Related papers (2021-08-12T17:53:47Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)