CODE: A global approach to ODE dynamics learning
- URL: http://arxiv.org/abs/2511.15619v1
- Date: Wed, 19 Nov 2025 17:04:24 GMT
- Title: CODE: A global approach to ODE dynamics learning
- Authors: Nils Wildt, Daniel M. Tartakovsky, Sergey Oladyshkin, Wolfgang Nowak
- Abstract summary: In data-driven settings, one learns the ODE's right-hand side (RHS). In this work we introduce ChaosODE (CODE), a Polynomial Chaos ODE Expansion. We evaluate the performance of CODE in several experiments on the Lotka-Volterra system.
- Score: 1.1499574149885023
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Ordinary differential equations (ODEs) are a conventional way to describe the observed dynamics of physical systems. Scientists typically hypothesize about dynamical behavior, propose a mathematical model, and compare its predictions to data. However, modern computing and algorithmic advances now enable purely data-driven learning of governing dynamics directly from observations. In data-driven settings, one learns the ODE's right-hand side (RHS). Dense measurements are often assumed, yet high temporal resolution is typically both cumbersome and expensive. Consequently, one usually has only sparsely sampled data. In this work we introduce ChaosODE (CODE), a Polynomial Chaos ODE Expansion in which we use an arbitrary Polynomial Chaos Expansion (aPCE) for the ODE's right-hand side, resulting in a global orthonormal polynomial representation of dynamics. We evaluate the performance of CODE in several experiments on the Lotka-Volterra system, across varying noise levels, initial conditions, and predictions far into the future, even on previously unseen initial conditions. CODE exhibits remarkable extrapolation capabilities even when evaluated under novel initial conditions and shows advantages compared to well-examined methods using neural networks (NeuralODE) or kernel approximators (KernelODE) as the RHS representer. We observe that the high flexibility of NeuralODE and KernelODE degrades extrapolation capabilities under scarce data and measurement noise. Finally, we provide practical guidelines for robust optimization of dynamics-learning problems and illustrate them in the accompanying code.
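The abstract's core idea, learning a polynomial representation of the ODE's RHS from sampled trajectories, can be illustrated with a minimal sketch. This is not the paper's aPCE method: a plain monomial basis and finite-difference derivative estimates stand in for the orthonormal polynomial expansion and the actual fitting procedure, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, z, a=1.0, b=0.4, c=0.4, d=0.1):
    """Ground-truth Lotka-Volterra dynamics (illustrative parameters)."""
    x, y = z
    return [a * x - b * x * y, -c * y + d * x * y]

# Sample a trajectory; real use cases would have sparser, noisier data.
t_eval = np.linspace(0.0, 10.0, 200)
sol = solve_ivp(lotka_volterra, (0.0, 10.0), [5.0, 3.0], t_eval=t_eval, rtol=1e-8)
Z = sol.y.T                                   # states, shape (200, 2)

# Finite-difference estimate of dz/dt, a crude surrogate for joint fitting.
dZ = np.gradient(Z, t_eval, axis=0)

def basis(z):
    """Degree-2 monomial features: 1, x, y, x^2, x*y, y^2."""
    x, y = z[..., 0], z[..., 1]
    return np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=-1)

# Least-squares fit of polynomial coefficients for each state dimension.
Phi = basis(Z)                                # (200, 6)
coeffs, *_ = np.linalg.lstsq(Phi, dZ, rcond=None)  # (6, 2)

def learned_rhs(t, z):
    """Learned global polynomial RHS, usable from unseen initial conditions."""
    return basis(np.asarray(z)) @ coeffs

pred = solve_ivp(learned_rhs, (0.0, 10.0), [8.0, 4.0], t_eval=t_eval)
```

Because the true Lotka-Volterra RHS is itself a degree-2 polynomial, the global basis can recover it almost exactly from clean data, which hints at why such a representation extrapolates well to new initial conditions.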
Related papers
- Foundation Inference Models for Ordinary Differential Equations [3.4253416336476246]
We propose FIM-ODE, a pretrained Foundation Inference Model that amortises low-dimensional ODE inference. We pretrain FIM-ODE on a prior distribution over ODEs with low-degree vector fields and represent the target field with neural operators. Pretraining also provides a strong initialisation for finetuning, enabling fast and stable adaptation that outperforms modern neural and GP baselines.
arXiv Detail & Related papers (2026-02-09T14:39:11Z) - A joint optimization approach to identifying sparse dynamics using least squares kernel collocation [70.13783231186183]
We develop an all-at-once modeling framework for learning systems of ordinary differential equations (ODEs) from scarce, partial, and noisy observations of the states. The proposed methodology combines sparse recovery strategies for the ODE over a function library with techniques from reproducing kernel Hilbert space (RKHS) theory for estimating the state and discretizing the ODE.
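The "sparse recovery over a function library" component can be sketched with sequential thresholded least squares (in the spirit of SINDy). This is a generic stand-in, not the paper's joint kernel-collocation formulation; the function name, threshold, and iteration count are illustrative assumptions.

```python
import numpy as np

def stlsq(Theta, dX, threshold=0.05, n_iters=10):
    """Sparse coefficients Xi with Theta @ Xi ≈ dX.

    Theta: library of candidate functions evaluated on the data, (m, p).
    dX:    estimated state derivatives, (m, d).
    """
    Xi, *_ = np.linalg.lstsq(Theta, dX, rcond=None)
    for _ in range(n_iters):
        small = np.abs(Xi) < threshold       # prune negligible terms
        Xi[small] = 0.0
        for k in range(dX.shape[1]):         # refit the surviving terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dX[:, k],
                                                 rcond=None)
    return Xi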
arXiv Detail & Related papers (2025-11-23T18:04:15Z) - Towards Foundation Inference Models that Learn ODEs In-Context [3.4253416336476246]
We introduce FIM-ODE (Foundation Inference Model for ODEs), a pretrained neural model designed to estimate ODEs from sparse and noisy observations. Trained on synthetic data, the model utilizes a flexible neural operator for robust ODE inference, even from corrupted data. We empirically verify that FIM-ODE provides accurate estimates, on par with a neural state-of-the-art method, and qualitatively compare the structure of their estimated vector fields.
arXiv Detail & Related papers (2025-10-14T15:44:54Z) - No Equations Needed: Learning System Dynamics Without Relying on Closed-Form ODEs [56.78271181959529]
This paper proposes a conceptual shift to modeling low-dimensional dynamical systems by departing from the traditional two-step modeling process. Instead of first discovering a closed-form equation and then analyzing it, our approach, direct semantic modeling, predicts the semantic representation of the dynamical system. Our approach not only simplifies the modeling pipeline but also enhances the transparency and flexibility of the resulting models.
arXiv Detail & Related papers (2025-01-30T18:36:48Z) - Zero-shot Imputation with Foundation Inference Models for Dynamical Systems [5.549794481031468]
We offer a fresh perspective on the classical problem of imputing missing time series data, whose underlying dynamics are assumed to be determined by ODEs. We propose a novel supervised learning framework for zero-shot time series imputation, through parametric functions satisfying some (hidden) ODEs. We empirically demonstrate that one and the same (pretrained) recognition model can perform zero-shot imputation across 63 distinct time series with missing values.
arXiv Detail & Related papers (2024-02-12T11:48:54Z) - Learning Neural Constitutive Laws From Motion Observations for
Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z) - Stabilized Neural Ordinary Differential Equations for Long-Time
Forecasting of Dynamical Systems [1.001737665513683]
We present a data-driven modeling method that accurately captures shocks and chaotic dynamics.
We learn the right-hand side (RHS) of an ODE by summing the outputs of two neural networks, where one learns a linear term and the other a nonlinear term.
Specifically, we implement this by training a sparse linear convolutional NN to learn the linear term and a dense fully-connected nonlinear NN to learn the nonlinear term.
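The additive linear-plus-nonlinear split described above can be sketched as follows. This is a simplified NumPy illustration with a dense linear operator and a one-hidden-layer MLP; the paper's sparse convolutional architecture, dimensions, and initialization are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16                                       # state dimension (illustrative)
A = rng.normal(scale=0.1, size=(n, n))       # learnable linear term
W1 = rng.normal(scale=0.1, size=(32, n))     # MLP weights for nonlinear term
W2 = rng.normal(scale=0.1, size=(n, 32))

def rhs(u):
    """du/dt as the sum of a linear and a nonlinear learned term."""
    linear = A @ u                           # stabilizing linear dynamics
    nonlinear = W2 @ np.tanh(W1 @ u)         # residual nonlinear dynamics
    return linear + nonlinear

u = rng.normal(size=n)
du = rhs(u)                                  # shape (n,)
```

Separating the terms lets the linear part be constrained (e.g. sparse or stable) while the nonlinear network captures only the residual dynamics.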
arXiv Detail & Related papers (2022-03-29T16:10:34Z) - Neural Ordinary Differential Equations for Data-Driven Reduced Order
Modeling of Environmental Hydrodynamics [4.547988283172179]
We explore the use of Neural Ordinary Differential Equations for fluid flow simulation.
Test problems we consider include incompressible flow around a cylinder and real-world applications of shallow water hydrodynamics in riverine and estuarine systems.
Our findings indicate that Neural ODEs provide an elegant framework for stable and accurate evolution of latent-space dynamics with a promising potential of extrapolatory predictions.
arXiv Detail & Related papers (2021-04-22T19:20:47Z) - STEER: Simple Temporal Regularization For Neural ODEs [80.80350769936383]
We propose a new regularization technique: randomly sampling the end time of the ODE during training.
The proposed regularization is simple to implement, has negligible overhead and is effective across a wide variety of tasks.
We show through experiments on normalizing flows, time series models and image recognition that the proposed regularization can significantly decrease training time and even improve performance over baseline models.
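The STEER idea, randomly perturbing the ODE's end time during training, can be sketched in a few lines. The nominal end time `T` and half-width `b` below are assumed hyperparameters, not values from the paper.

```python
import random

T = 1.0   # nominal integration end time
b = 0.5   # perturbation half-width (tunable; must satisfy b < T here)

def sample_end_time():
    """Sample the training-time integration end point uniformly around T."""
    return random.uniform(T - b, T + b)

# In a training loop one would integrate the Neural ODE from t0 = 0 to
# t1 = sample_end_time() before computing the loss; evaluation still uses
# the fixed end time T.
t1 = sample_end_time()
```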
arXiv Detail & Related papers (2020-06-18T17:44:50Z) - On Second Order Behaviour in Augmented Neural ODEs [69.8070643951126]
We consider Second Order Neural ODEs (SONODEs).
We show how the adjoint sensitivity method can be extended to SONODEs.
We extend the theoretical understanding of the broader class of Augmented NODEs (ANODEs).
arXiv Detail & Related papers (2020-06-12T14:25:31Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by data augmentation completely eliminate the empirical gains from stochastic regularization, making the performance gap between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.