CFO: Learning Continuous-Time PDE Dynamics via Flow-Matched Neural Operators
- URL: http://arxiv.org/abs/2512.05297v1
- Date: Thu, 04 Dec 2025 22:33:29 GMT
- Title: CFO: Learning Continuous-Time PDE Dynamics via Flow-Matched Neural Operators
- Authors: Xianglong Hou, Xinquan Huang, Paris Perdikaris
- Abstract summary: Continuous Flow Operator (CFO) learns continuous-time PDE dynamics without the computational burden of standard continuous approaches, e.g., neural ODE.
CFO fits temporal splines to trajectory data, using finite-difference estimates of time derivatives at knots to construct probability paths whose velocities closely approximate the true PDE dynamics.
A neural operator is then trained via flow matching to predict these analytic velocity fields.
- Score: 9.273461312644345
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural operator surrogates for time-dependent partial differential equations (PDEs) conventionally employ autoregressive prediction schemes, which accumulate error over long rollouts and require uniform temporal discretization. We introduce the Continuous Flow Operator (CFO), a framework that learns continuous-time PDE dynamics without the computational burden of standard continuous approaches, e.g., neural ODE. The key insight is repurposing flow matching to directly learn the right-hand side of PDEs without backpropagating through ODE solvers. CFO fits temporal splines to trajectory data, using finite-difference estimates of time derivatives at knots to construct probability paths whose velocities closely approximate the true PDE dynamics. A neural operator is then trained via flow matching to predict these analytic velocity fields. This approach is inherently time-resolution invariant: training accepts trajectories sampled on arbitrary, non-uniform time grids while inference queries solutions at any temporal resolution through ODE integration. Across four benchmarks (Lorenz, 1D Burgers, 2D diffusion-reaction, 2D shallow water), CFO demonstrates superior long-horizon stability and remarkable data efficiency. CFO trained on only 25% of irregularly subsampled time points outperforms autoregressive baselines trained on complete data, with relative error reductions up to 87%. Despite requiring numerical integration at inference, CFO achieves competitive efficiency, outperforming autoregressive baselines using only 50% of their function evaluations, while uniquely enabling reverse-time inference and arbitrary temporal querying.
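To make the training recipe concrete, the sketch below implements the spline-plus-flow-matching idea on a toy trajectory. It is not the authors' implementation: the small MLP stands in for the paper's neural operator, SciPy's CubicSpline stands in for the temporal splines, and the grid sizes and training loop are illustrative assumptions.

```python
# Minimal sketch of CFO-style training (assumptions: MLP velocity model,
# scipy CubicSpline as the temporal spline; not the paper's implementation).
import numpy as np
import torch
import torch.nn as nn
from scipy.interpolate import CubicSpline

# Toy trajectory snapshots on an irregular (non-uniform) time grid.
t_knots = np.sort(np.random.uniform(0.0, 2.0, size=32))
u_knots = np.stack([np.sin(3 * t_knots), np.cos(2 * t_knots), t_knots**2], axis=-1)

# Fit a temporal spline through the snapshots; its derivative supplies the
# target velocities along the probability path.
spline = CubicSpline(t_knots, u_knots, axis=0)

# Velocity model mapping (state, time) -> du/dt.
model = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # Sample arbitrary times along the trajectory (flow-matching style).
    t = np.random.uniform(t_knots[0], t_knots[-1], size=256)
    u_t = torch.as_tensor(spline(t), dtype=torch.float32)     # state on the path
    v_t = torch.as_tensor(spline(t, 1), dtype=torch.float32)  # analytic path velocity
    inp = torch.cat([u_t, torch.as_tensor(t, dtype=torch.float32)[:, None]], dim=-1)
    loss = ((model(inp) - v_t) ** 2).mean()  # regress onto the spline velocity
    opt.zero_grad(); loss.backward(); opt.step()

# Inference: integrate the learned RHS at any temporal resolution (or in
# reverse time, by flipping the interval).
from scipy.integrate import solve_ivp
rhs = lambda t, u: model(torch.tensor([*u, t], dtype=torch.float32)).detach().numpy()
sol = solve_ivp(rhs, (t_knots[0], t_knots[-1]), u_knots[0], dense_output=True)
```

Because no ODE solver appears inside the training loop, there is no backpropagation through solver steps; integration is deferred entirely to inference.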
Related papers
- Physics-Informed Laplace Neural Operator for Solving Partial Differential Equations [11.064132774859553]
Physics-Informed Laplace Neural Operator (PILNO) is a fast surrogate solver for partial differential equations.
It embeds physics into training through PDE, boundary condition, and initial condition residuals.
PILNO consistently improves accuracy in small-data settings, reduces run-to-run variability across random seeds, and achieves stronger generalization than purely data-driven baselines.
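The residual-based training described above can be sketched generically. The snippet below shows the usual way PDE, boundary, and initial-condition residuals are combined into one loss; the 1D heat equation, the `model` surrogate, and the loss weights are illustrative assumptions, not PILNO's actual architecture.

```python
# Hedged sketch: combining PDE / BC / IC residuals into one physics-informed
# loss for the 1D heat equation u_t = k * u_xx (not the PILNO architecture).
import torch

def physics_loss(model, k=0.1, w_pde=1.0, w_bc=1.0, w_ic=1.0, n=256):
    # Interior collocation points with gradients enabled.
    x = torch.rand(n, 1, requires_grad=True)
    t = torch.rand(n, 1, requires_grad=True)
    u = model(torch.cat([x, t], dim=-1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    r_pde = (u_t - k * u_xx).pow(2).mean()  # PDE residual

    # Dirichlet boundaries u(0, t) = u(1, t) = 0 (illustrative choice).
    tb = torch.rand(n, 1)
    r_bc = model(torch.cat([torch.zeros_like(tb), tb], -1)).pow(2).mean() \
         + model(torch.cat([torch.ones_like(tb), tb], -1)).pow(2).mean()

    # Initial condition u(x, 0) = sin(pi x) (illustrative choice).
    xi = torch.rand(n, 1)
    u0 = torch.sin(torch.pi * xi)
    r_ic = (model(torch.cat([xi, torch.zeros_like(xi)], -1)) - u0).pow(2).mean()

    return w_pde * r_pde + w_bc * r_bc + w_ic * r_ic
```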
arXiv Detail & Related papers (2026-02-13T08:19:40Z) - Temporal Pair Consistency for Variance-Reduced Flow Matching [13.328987133593154]
Temporal Pair Consistency (TPC) is a lightweight variance-reduction principle that couples velocity predictions at paired timesteps along the same probability path.
Instantiated within flow matching, TPC improves sample quality and efficiency across CIFAR-10 and ImageNet at multiple resolutions.
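One plausible reading of "coupling velocity predictions at paired timesteps" is the regularizer below, added to a standard flow-matching loss on a linear path, where the true velocity is constant and the two predictions should agree. The pairing rule and weight `lam` are guesses for illustration, not the paper's construction.

```python
# Hedged sketch: a consistency term coupling velocity predictions at two
# timesteps on the same linear probability path (illustrative, not the
# paper's exact TPC objective). `v` is any velocity model taking (x, t).
import torch

def fm_loss_with_pair_consistency(v, x0, x1, lam=0.1):
    b = x0.shape[0]
    t1, t2 = torch.rand(b, 1), torch.rand(b, 1)  # paired timesteps
    xt1 = (1 - t1) * x0 + t1 * x1                # points on the same path
    xt2 = (1 - t2) * x0 + t2 * x1
    target = x1 - x0                             # constant velocity on a linear path
    v1, v2 = v(xt1, t1), v(xt2, t2)
    fm = ((v1 - target) ** 2).mean() + ((v2 - target) ** 2).mean()
    consistency = ((v1 - v2) ** 2).mean()        # couple the paired predictions
    return fm + lam * consistency
```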
arXiv Detail & Related papers (2026-02-04T00:05:21Z) - Generative Modeling with Continuous Flows: Sample Complexity of Flow Matching [60.37045080890305]
We provide the first analysis of the sample complexity for flow-matching based generative models.
We decompose the velocity field estimation error into neural-network approximation error, statistical error due to the finite sample size, and optimization error due to the finite number of optimization steps for estimating the velocity field.
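In symbols, the decomposition has the schematic form below, where $\hat{v}$ is the estimated velocity field, $v^{*}$ the true one, $n$ the sample size, and $T$ the number of optimization steps (our shorthand, not necessarily the paper's notation).

```latex
\| \hat{v} - v^{*} \|
\;\lesssim\;
\underbrace{\epsilon_{\mathrm{approx}}}_{\text{network class}}
\;+\;
\underbrace{\epsilon_{\mathrm{stat}}(n)}_{\text{finite samples}}
\;+\;
\underbrace{\epsilon_{\mathrm{opt}}(T)}_{\text{finite optimization steps}}
```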
arXiv Detail & Related papers (2025-12-01T05:14:25Z) - NOWS: Neural Operator Warm Starts for Accelerating Iterative Solvers [1.8117099374299037]
Partial differential equations (PDEs) underpin quantitative descriptions across the physical sciences and engineering.
Data-driven surrogates can be strikingly fast but are often unreliable when applied outside their training distribution.
Here we introduce Neural Operator Warm Starts (NOWS), a hybrid strategy that harnesses learned solution operators to accelerate classical iterative solvers.
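The warm-start idea is simple to express: use the operator's prediction as the initial iterate of an iterative solver, which then restores accuracy with its usual convergence guarantee. A minimal sketch with SciPy's conjugate-gradient solver follows; the `surrogate` function is a placeholder assumption, not a real trained model.

```python
# Hedged sketch of a neural-operator warm start: the surrogate's prediction
# seeds the iterative solver via x0, aiming to cut iteration counts.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Poisson matrix
b = np.random.rand(n)

def surrogate(rhs):
    # Stand-in for a trained neural operator's approximate solution.
    return 0.5 * rhs  # placeholder guess, NOT a real model

iters = {"cold": 0, "warm": 0}
def count(key):
    def cb(xk): iters[key] += 1
    return cb

x_cold, _ = cg(A, b, callback=count("cold"))                    # zero initial guess
x_warm, _ = cg(A, b, x0=surrogate(b), callback=count("warm"))   # warm-started
```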
arXiv Detail & Related papers (2025-11-04T11:12:27Z) - Neural Stochastic Flows: Solver-Free Modelling and Inference for SDE Solutions [23.147474211347856]
We introduce Neural Stochastic Flows (NSFs) and their latent variants, which directly learn (latent) SDE transition laws.
Experiments on synthetic SDE simulations and on real-world tracking and video data show that NSFs maintain distributional accuracy comparable to numerical approaches.
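To see what "learning a transition law directly" means in its simplest form, the sketch below fits a conditional Gaussian model of p(x_{t+dt} | x_t, dt) by maximum likelihood on simulated transitions. This is only the most naive instance of the idea; NSFs use a more expressive flow-based construction, and the Ornstein-Uhlenbeck data and network sizes here are illustrative.

```python
# Hedged sketch: fitting a transition law p(x_{t+dt} | x_t, dt) directly,
# here as a conditional Gaussian trained by maximum likelihood.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))  # -> (mean, log_std)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def ou_pairs(batch, theta=1.0, sigma=0.5):
    # Exact Ornstein-Uhlenbeck transitions, used as training data only.
    x = torch.randn(batch, 1)
    dt = torch.rand(batch, 1) * 0.5
    x_next = x * torch.exp(-theta * dt) + sigma * torch.sqrt(
        (1 - torch.exp(-2 * theta * dt)) / (2 * theta)) * torch.randn(batch, 1)
    return x, dt, x_next

for step in range(3000):
    x, dt, y = ou_pairs(256)
    mean, log_std = net(torch.cat([x, dt], -1)).chunk(2, dim=-1)
    nll = (log_std + 0.5 * ((y - mean) / log_std.exp()) ** 2).mean()
    opt.zero_grad(); nll.backward(); opt.step()
# Sampling a transition afterwards needs no SDE solver: y ~ N(mean, std).
```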
arXiv Detail & Related papers (2025-10-29T17:59:06Z) - TI-DeepONet: Learnable Time Integration for Stable Long-Term Extrapolation [0.0]
TI-DeepONet is a framework that integrates neural operators with adaptive numerical time-stepping techniques.
This research establishes a physics-aware operator learning paradigm that bridges neural approximation with numerical analysis.
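One way to read "learnable time integration": wrap the learned operator as the right-hand side inside a classical stepping scheme and train through the rollout rather than on single-step targets. The sketch below follows that assumption; the Euler/Heun blend with a learnable weight is illustrative, not the paper's scheme, and the MLP stands in for a DeepONet.

```python
# Hedged sketch: a learned operator F supplies du/dt, and a time integrator
# with a learnable blending weight advances the state. Illustrative only.
import torch
import torch.nn as nn

class TIStepper(nn.Module):
    def __init__(self, dim, width=64):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(dim, width), nn.Tanh(), nn.Linear(width, dim))
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable Euler/Heun blend

    def step(self, u, dt):
        k1 = self.F(u)                 # slope at the current state
        k2 = self.F(u + dt * k1)       # slope at the Euler predictor
        return u + dt * ((1 - self.alpha) * k1 + self.alpha * 0.5 * (k1 + k2))

    def rollout(self, u0, dt, n_steps):
        us = [u0]
        for _ in range(n_steps):
            us.append(self.step(us[-1], dt))
        return torch.stack(us)         # trained end-to-end on trajectories
```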
arXiv Detail & Related papers (2025-05-22T23:24:31Z) - MultiPDENet: PDE-embedded Learning with Multi-time-stepping for Accelerated Flow Simulation [48.41289705783405]
We propose a PDE-embedded network with multiscale time stepping (MultiPDENet).
In particular, we design a convolutional filter based on the structure of finite differences, with a small number of parameters to optimize.
A Physics Block with a 4th-order Runge-Kutta integrator at the fine time scale embeds the structure of the PDEs to guide the prediction.
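At its core, such a physics block is a standard RK4 step whose right-hand side is assembled from finite-difference stencils. A minimal sketch for a 1D diffusion term follows; the stencil, boundary conditions, and step size are illustrative, and the learned stencil corrections MultiPDENet adds are omitted.

```python
# Hedged sketch of an RK4 "physics block": the RHS comes from a
# finite-difference stencil (here a periodic 1D Laplacian).
import numpy as np

def rhs(u, nu=0.01, dx=0.1):
    # Second-order central difference for nu * u_xx with periodic BCs.
    return nu * (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))
for _ in range(100):  # fine-time-scale integration
    u = rk4_step(u, dt=1e-3)
```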
arXiv Detail & Related papers (2025-01-27T12:15:51Z) - Simulation-Free Training of Neural ODEs on Paired Data [20.36333430055869]
We employ the flow matching framework for simulation-free training of NODEs.
We show that applying flow matching directly between paired data can often lead to an ill-defined flow.
We propose a simple extension that applies flow matching in the embedding space of data pairs.
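The extension can be sketched as follows: interpolate in an embedding of the data pair rather than in data space, so that distinct pairs no longer induce crossing, ill-defined paths. The encoder, data dimensions, and loss are our illustrative assumptions (a decoder mapping embeddings back to data is omitted).

```python
# Hedged sketch: flow matching on paired data (x0, x1) in an embedding space.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 8))  # data -> embedding
vec = nn.Sequential(nn.Linear(9, 64), nn.Tanh(), nn.Linear(64, 8))  # (z_t, t) -> velocity

def embedded_fm_loss(x0, x1):
    z0, z1 = enc(x0), enc(x1)            # embed the pair
    t = torch.rand(x0.shape[0], 1)
    zt = (1 - t) * z0 + t * z1           # linear path in embedding space
    target = z1 - z0                     # its (constant) velocity
    pred = vec(torch.cat([zt, t], dim=-1))
    return ((pred - target) ** 2).mean()

x0, x1 = torch.randn(32, 2), torch.randn(32, 2)
loss = embedded_fm_loss(x0, x1)
```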
arXiv Detail & Related papers (2024-10-30T11:18:27Z) - Distributed Stochastic Gradient Descent with Staleness: A Stochastic Delay Differential Equation Based Framework [56.82432591933544]
Distributed stochastic gradient descent (SGD) has attracted considerable recent attention due to its potential for scaling computational resources, reducing training time, and helping protect user privacy in machine learning.
This paper characterizes the run time and staleness of distributed SGD using stochastic delay differential equations (SDDEs) and an approximation of gradient arrivals.
Interestingly, it is shown that increasing the number of activated workers does not necessarily accelerate distributed SGD, due to staleness.
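Staleness is easy to see in a toy simulation: each update applies a gradient computed at an iterate from several steps ago. The sketch below mimics the stale update rule w_{k+1} = w_k - lr * grad(w_{k-tau}); the quadratic objective and geometric delay distribution are our illustrative choices, not the paper's model.

```python
# Hedged sketch: SGD with stale gradients (illustrative toy simulation).
import numpy as np

rng = np.random.default_rng(0)
w, lr, history = np.ones(10), 0.05, []
grad_fn = lambda w: w + 0.1 * rng.standard_normal(w.shape)  # noisy grad of ||w||^2 / 2

iterates = [w.copy()]
for k in range(500):
    tau = min(rng.geometric(0.3) - 1, k)  # random staleness, capped at step index
    g = grad_fn(iterates[k - tau])        # gradient evaluated at a stale iterate
    w = w - lr * g
    iterates.append(w.copy())
    history.append(np.linalg.norm(w))     # track convergence under staleness
```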
arXiv Detail & Related papers (2024-06-17T02:56:55Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the supervised time points and is able to interpolate the solutions to any intermediate time.
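The core trick is a hyper-network conditioned on the query time t that emits the weights of the solver network. A minimal sketch with a single linear target layer follows; the sizes and conditioning are illustrative, and the real method generates parameters for Fourier Neural Operator layers rather than one linear map.

```python
# Hedged sketch of a time-conditioned hyper-network: a small net maps the
# query time t to the weights of a target layer, enabling one-shot queries
# of u(t) at arbitrary t.
import torch
import torch.nn as nn

d_in, d_out = 16, 16
hyper = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                      nn.Linear(64, d_out * d_in + d_out))

def solve_at_time(u0, t):
    # Generate weights/bias for the target layer from t, then apply them.
    params = hyper(t.view(1, 1)).squeeze(0)
    W = params[: d_out * d_in].view(d_out, d_in)
    b = params[d_out * d_in:]
    return u0 @ W.T + b

u0 = torch.randn(4, d_in)
u_half = solve_at_time(u0, torch.tensor(0.5))  # query an intermediate time
```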
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
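The construction can be written compactly: each unit is a linear first-order ODE whose time constant is modulated by a nonlinear gate. Below is an Euler-discretized cell of this form; the gate network and sizes are illustrative, and the paper's exact LTC update may differ in detail.

```python
# Hedged sketch: an Euler-discretized liquid time-constant cell, where a
# nonlinear gate f modulates the time constant of linear first-order units:
#   dx/dt = -(1/tau + f(x, I)) * x + f(x, I) * A
import torch
import torch.nn as nn

class LTCCell(nn.Module):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(n_in + n_hidden, n_hidden), nn.Sigmoid())
        self.tau = nn.Parameter(torch.ones(n_hidden))  # base time constants
        self.A = nn.Parameter(torch.zeros(n_hidden))   # bias/reversal term

    def forward(self, x, I, dt=0.1):
        f = self.gate(torch.cat([x, I], dim=-1))       # state- and input-dependent gate
        dx = -(1.0 / self.tau + f) * x + f * self.A    # bounded, stable dynamics
        return x + dt * dx                             # explicit Euler step

cell = LTCCell(n_in=3, n_hidden=8)
x = torch.zeros(1, 8)
for I in torch.randn(20, 1, 3):                        # unroll over an input sequence
    x = cell(x, I)
```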
arXiv Detail & Related papers (2020-06-08T09:53:35Z)