Foundation Inference Models for Ordinary Differential Equations
- URL: http://arxiv.org/abs/2602.08733v1
- Date: Mon, 09 Feb 2026 14:39:11 GMT
- Title: Foundation Inference Models for Ordinary Differential Equations
- Authors: Maximilian Mauel, Johannes R. Hübers, David Berghaus, Patrick Seifner, Ramses J. Sanchez
- Abstract summary: We propose FIM-ODE, a pretrained Foundation Inference Model that amortises low-dimensional ODE inference. We pretrain FIM-ODE on a prior distribution over ODEs with low-degree polynomial vector fields and represent the target field with neural operators. Pretraining also provides a strong initialisation for finetuning, enabling fast and stable adaptation that outperforms modern neural and GP baselines.
- Score: 3.4253416336476246
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ordinary differential equations (ODEs) are central to scientific modelling, but inferring their vector fields from noisy trajectories remains challenging. Current approaches such as symbolic regression, Gaussian process (GP) regression, and Neural ODEs often require complex training pipelines and substantial machine learning expertise, or they depend strongly on system-specific prior knowledge. We propose FIM-ODE, a pretrained Foundation Inference Model that amortises low-dimensional ODE inference by predicting the vector field directly from noisy trajectory data in a single forward pass. We pretrain FIM-ODE on a prior distribution over ODEs with low-degree polynomial vector fields and represent the target field with neural operators. FIM-ODE achieves strong zero-shot performance, matching and often improving upon ODEFormer, a recent pretrained symbolic baseline, across a range of regimes despite using a simpler pretraining prior distribution. Pretraining also provides a strong initialisation for finetuning, enabling fast and stable adaptation that outperforms modern neural and GP baselines without requiring machine learning expertise.
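To make "amortised inference in a single forward pass" concrete, here is a minimal sketch under assumed names and architecture (the paper's actual neural-operator design is not reproduced): a permutation-invariant trajectory encoder conditions a query network that evaluates the inferred vector field at arbitrary points, so a new system needs only a forward pass, not a per-system training loop.
```python
# Hypothetical sketch of amortised ODE inference in the spirit of FIM-ODE.
# Architecture and names are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn

class AmortisedODEInferencer(nn.Module):
    def __init__(self, dim=2, hidden=128):
        super().__init__()
        # Encode each (t, x) observation, then pool over the trajectory.
        self.encoder = nn.Sequential(nn.Linear(1 + dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, hidden))
        # Map (trajectory summary, query point) -> vector-field value.
        self.field = nn.Sequential(nn.Linear(hidden + dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, dim))

    def forward(self, t_obs, x_obs, x_query):
        # t_obs: (N, 1), x_obs: (N, dim), x_query: (M, dim)
        h = self.encoder(torch.cat([t_obs, x_obs], dim=-1)).mean(dim=0)
        h = h.expand(x_query.shape[0], -1)       # broadcast summary to queries
        return self.field(torch.cat([h, x_query], dim=-1))  # f(x_query) estimates

# Zero-shot usage: one forward pass, no per-system optimisation.
model = AmortisedODEInferencer()
t = torch.linspace(0, 1, 50).unsqueeze(-1)
x = torch.randn(50, 2)                           # stand-in noisy trajectory
f_hat = model(t, x, torch.randn(10, 2))
```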
Related papers
- CODE: A global approach to ODE dynamics learning [1.1499574149885023]
In data-driven settings, one learns the ODE's right-hand side (RHS). In this work we introduce ChaosODE (CODE), a Polynomial Chaos ODE Expansion. We evaluate the performance of CODE in several experiments on the Lotka-Volterra system.
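To make "learning the RHS" concrete, here is a hedged baseline sketch, deliberately simpler than CODE's Polynomial Chaos expansion: plain polynomial least squares fitted to finite-difference derivative estimates of a simulated Lotka-Volterra trajectory.
```python
# Baseline sketch: learn an ODE right-hand side by polynomial regression.
# This is NOT the paper's Polynomial Chaos method, just a minimal stand-in.
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree=2):
    # Monomials of the state up to `degree`, including the constant term.
    cols = [np.ones(len(X))]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(X.shape[1]), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.stack(cols, axis=1)

# Simulate Lotka-Volterra (dx = x - xy, dy = xy - y) with Euler steps.
dt, T = 1e-3, 10.0
xs = [np.array([1.5, 1.0])]
for _ in range(int(T / dt) - 1):
    x, y = xs[-1]
    xs.append(xs[-1] + dt * np.array([x - x * y, x * y - y]))
X = np.array(xs)

dXdt = np.gradient(X, dt, axis=0)       # finite-difference derivative estimates
Phi = poly_features(X, degree=2)        # features [1, x, y, x^2, xy, y^2]
W, *_ = np.linalg.lstsq(Phi, dXdt, rcond=None)
print(np.round(W.T, 3))  # rows ~ [0, 1, 0, 0, -1, 0] and [0, 0, -1, 0, 1, 0]
```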
arXiv Detail & Related papers (2025-11-19T17:04:24Z)
- Towards Foundation Inference Models that Learn ODEs In-Context [3.4253416336476246]
We introduce FIM-ODE (Foundation Inference Model for ODEs), a pretrained neural model designed to estimate ODEs from sparse and noisy observations. Trained on synthetic data, the model utilizes a flexible neural operator for robust ODE inference, even from corrupted data. We empirically verify that FIM-ODE provides accurate estimates, on par with a neural state-of-the-art method, and qualitatively compare the structure of their estimated vector fields.
arXiv Detail & Related papers (2025-10-14T15:44:54Z)
- Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [70.8832906871441]
We study how to steer generation toward desired rewards without retraining the models. Prior methods typically resample or filter within a single denoising trajectory, optimizing rewards step-by-step without trajectory-level refinement. We introduce particle Gibbs sampling for diffusion language models (PG-DLM), a novel inference-time algorithm enabling trajectory-level refinement while preserving generation perplexity.
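The trajectory-level refinement idea can be sketched generically: a conditional sweep pins one retained reference sample, proposes fresh ones, and resamples by reward. The `propose` and `reward` functions below are toy assumptions, not the paper's denoising model or reward.
```python
# Generic particle Gibbs sketch for reward-guided sampling (toy stand-in
# for PG-DLM; scalars play the role of whole generation trajectories).
import numpy as np

rng = np.random.default_rng(0)
propose = lambda n: rng.normal(size=n)        # draw fresh "trajectories"
reward = lambda x: -np.abs(x - 2.0)           # higher reward near x = 2

def particle_gibbs_sweep(reference, n_particles=16):
    # Conditional SMC: one particle is pinned to the retained reference,
    # the rest are fresh proposals (refinement at the trajectory level).
    particles = propose(n_particles)
    particles[0] = reference
    r = reward(particles)
    w = np.exp(r - r.max())                   # self-normalised weights
    return rng.choice(particles, p=w / w.sum())

x = propose(1)[0]
for _ in range(50):                           # Gibbs iterations
    x = particle_gibbs_sweep(x)
print(x)  # drifts toward the high-reward region around 2
```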
arXiv Detail & Related papers (2025-07-11T08:00:47Z)
- Efficient Regression-Based Training of Normalizing Flows for Boltzmann Generators [85.25962679349551]
Boltzmann Generators (BGs) offer efficient sampling and likelihoods, but their training via maximum likelihood is often unstable and computationally challenging. We propose Regression Training of Normalizing Flows (RegFlow), a novel and scalable regression-based training objective that bypasses the numerical instability and computational challenge of conventional maximum likelihood training.
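For context, the conventional maximum-likelihood flow objective that RegFlow is said to bypass looks like the following minimal sketch (a single affine flow layer on 1-D data; illustrative only, not the paper's setup): the negative log-likelihood comes from the change-of-variables formula.
```python
# Conventional MLE training of a one-layer affine normalizing flow.
# x = exp(a) * z + b with z ~ N(0, 1); illustrative baseline sketch.
import torch

x = torch.randn(1024) * 2.0 + 1.0            # toy 1-D target samples
a = torch.zeros((), requires_grad=True)      # log-scale parameter
b = torch.zeros((), requires_grad=True)      # shift parameter
opt = torch.optim.Adam([a, b], lr=0.05)

for _ in range(500):
    z = (x - b) * torch.exp(-a)              # inverse flow x -> z
    # -log p(x) = 0.5 * z^2 + a + const, since log|dz/dx| = -a
    nll = (0.5 * z**2 + a).mean()
    opt.zero_grad()
    nll.backward()
    opt.step()

print(b.item(), a.exp().item())              # ~1.0 shift, ~2.0 scale
```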
arXiv Detail & Related papers (2025-06-01T20:32:27Z)
- Efficient Training of Physics-enhanced Neural ODEs via Direct Collocation and Nonlinear Programming [0.0]
We propose a novel approach for training Physics-enhanced Neural ODEs (PeN-ODEs) by expressing the training process as a dynamic optimization problem. The full model, including neural components, is discretized using a high-order implicit Runge-Kutta method with flipped Legendre-Gauss-Radau points. This formulation enables simultaneous optimization of network parameters and state trajectories, addressing key limitations of ODE solver-based training in terms of stability, runtime, and accuracy.
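A minimal sketch of the simultaneous formulation, with trapezoidal collocation and a soft defect penalty standing in for the paper's flipped Legendre-Gauss-Radau scheme and hard constraints: the parameter and the discretized state trajectory are both decision variables of one nonlinear program.
```python
# Direct-collocation sketch: optimise parameters and states together.
# Trapezoidal collocation + penalty is a simplifying assumption here.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 2, 21)
data = np.exp(-1.5 * t) + 0.01 * np.random.default_rng(0).normal(size=t.size)

def objective(z):
    k, x = z[0], z[1:]                   # decision vars: parameter + states
    f = -k * x                           # model RHS (neural part would go here)
    # Trapezoidal defect: x[i+1] - x[i] - dt/2 * (f[i] + f[i+1]) = 0
    defect = np.diff(x) - np.diff(t) / 2 * (f[:-1] + f[1:])
    return np.sum((x - data) ** 2) + 1e3 * np.sum(defect ** 2)

z0 = np.concatenate([[1.0], data])       # initialise states at the data
res = minimize(objective, z0, method="L-BFGS-B")
print(res.x[0])                          # recovered decay rate, ~1.5
```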
arXiv Detail & Related papers (2025-05-06T14:04:46Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Bayesian Gaussian Process ODEs via Double Normalizing Flows [27.015257976208737]
We introduce normalizing flows to reparameterize the ODE vector field, resulting in a data-driven prior distribution. We also apply normalizing flows to the posterior inference of GP ODEs to resolve the issue of strong mean-field assumptions. We validate the effectiveness of our approach on simulated dynamical systems and real-world human motion data.
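A toy sketch of the reparameterization idea: draw a GP sample for the vector field and warp it through a monotone (hence invertible) elementwise map, so the effective prior is no longer plain GP. The kernel and the flow below are illustrative choices, not the paper's double-flow construction.
```python
# Sketch: flow-reparameterised GP prior over vector-field values.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 100)[:, None]
K = np.exp(-0.5 * (x - x.T) ** 2)              # RBF kernel Gram matrix
f_gp = rng.multivariate_normal(np.zeros(100), K + 1e-6 * np.eye(100))

# Elementwise flow g(f) = f + 0.5 * tanh(f): g'(f) > 0 everywhere, so g
# is invertible and densities transform tractably via change of variables.
g = lambda f: f + 0.5 * np.tanh(f)
f_prior = g(f_gp)                               # sample from the flowed prior
print(f_prior[:5])
```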
arXiv Detail & Related papers (2023-09-17T09:28:47Z)
- Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected stochastic differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
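The reflection mechanism itself is easy to sketch: Euler-Maruyama steps whose excursions are folded back into the data support, here [0, 1]. This shows only the forward reflected process; the paper's contribution concerns the reverse process and the generalized score-matching loss, which are not reproduced here.
```python
# Forward reflected Brownian motion on [0, 1] via folded Euler steps.
import numpy as np

def reflect(x, lo=0.0, hi=1.0):
    w = hi - lo
    y = (x - lo) % (2 * w)           # fold onto one period
    return lo + w - np.abs(y - w)    # mirror the second half back

rng = np.random.default_rng(0)
x = np.full(1000, 0.5)
for _ in range(500):
    x = reflect(x + 0.05 * rng.normal(size=x.size))
print(x.min(), x.max())              # all samples stay inside [0, 1]
```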
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
- On the balance between the training time and interpretability of neural ODE for time series modelling [77.34726150561087]
The paper shows that modern neural ODEs cannot be reduced to simpler models for time-series modelling applications. The complexity of neural ODEs is comparable to, or exceeds, that of conventional time-series modelling tools.
We propose a new view on time-series modelling using a combined neural network and ODE system approach.
arXiv Detail & Related papers (2022-06-07T13:49:40Z)
- Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models [38.17499046781131]
We propose a novel approach towards estimating uncertain neural ODEs, avoiding the numerical integration bottleneck.
Our algorithm - distributional gradient matching (DGM) - jointly trains a smoother and a dynamics model and matches their gradients via minimizing a Wasserstein loss.
Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate.
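A minimal gradient-matching sketch in this spirit, with a spline smoother and an MSE objective standing in for the paper's probabilistic smoother and Wasserstein loss: the dynamics model is trained against the smoother's derivative, so no ODE solver appears in the loop.
```python
# Gradient-matching sketch (DGM-style, heavily simplified assumptions).
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import minimize

t = np.linspace(0, 3, 60)
x_obs = np.exp(-0.7 * t) + 0.02 * np.random.default_rng(1).normal(size=t.size)

smoother = UnivariateSpline(t, x_obs, s=0.05)   # the "smoother" component
x_s = smoother(t)
dx_s = smoother.derivative()(t)                 # smoother's time derivative

# Dynamics model dx/dt = -k x; fit k by matching gradients, not integrating.
loss = lambda k: np.mean((dx_s + k[0] * x_s) ** 2)
print(minimize(loss, [1.0]).x[0])               # recovered rate, ~0.7
```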
arXiv Detail & Related papers (2021-06-22T08:40:51Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
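A hedged sketch of the meta-learning ingredient, using a Reptile-style first-order update and random linear maps as stand-in "topologies" (the paper's OPF formulation, IEEE bus data, and exact MTL procedure are not reproduced):
```python
# Reptile-style first-order meta-learning for a small DNN predictor.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))

def sample_task():
    A = torch.randn(2, 4)                # a "topology": inputs -> A @ inputs
    X = torch.randn(64, 4)
    return X, X @ A.T

for _ in range(200):                     # outer meta-training loop
    X, Y = sample_task()
    snapshot = [p.detach().clone() for p in net.parameters()]
    inner = torch.optim.SGD(net.parameters(), lr=0.05)
    for _ in range(5):                   # fast inner adaptation on one task
        inner.zero_grad()
        nn.functional.mse_loss(net(X), Y).backward()
        inner.step()
    with torch.no_grad():                # outer step toward adapted weights
        for p, p0 in zip(net.parameters(), snapshot):
            p.copy_(p0 + 0.1 * (p - p0))
```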
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- STEER: Simple Temporal Regularization For Neural ODEs [80.80350769936383]
We propose a new regularization technique: randomly sampling the end time of the ODE during training.
The proposed regularization is simple to implement, has negligible overhead and is effective across a wide variety of tasks.
We show through experiments on normalizing flows, time series models and image recognition that the proposed regularization can significantly decrease training time and even improve performance over baseline models.
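The regularization is indeed simple to implement: sample the ODE end time uniformly around its nominal value at each training step. In the toy sketch below, Euler integration and a synthetic regression target stand in for a full Neural ODE pipeline.
```python
# STEER-style end-time sampling in a toy Neural ODE training loop.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # dynamics
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
T, b = 1.0, 0.25                      # nominal end time and STEER half-width

def odeint_euler(y0, t_end, steps=20):
    y, dt = y0, t_end / steps
    for _ in range(steps):
        y = y + dt * f(y)             # fixed-step Euler stand-in for a solver
    return y

for _ in range(100):
    t_end = T + (2 * torch.rand(1).item() - 1) * b   # t_end ~ U(T-b, T+b)
    y0 = torch.randn(64, 1)
    loss = nn.functional.mse_loss(odeint_euler(y0, t_end), 2.0 * y0)
    opt.zero_grad()
    loss.backward()
    opt.step()
```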
arXiv Detail & Related papers (2020-06-18T17:44:50Z)