PDE Solvers Should Be Local: Fast, Stable Rollouts with Learned Local Stencils
- URL: http://arxiv.org/abs/2509.26186v1
- Date: Tue, 30 Sep 2025 12:42:32 GMT
- Title: PDE Solvers Should Be Local: Fast, Stable Rollouts with Learned Local Stencils
- Authors: Chun-Wun Cheng, Bin Dong, Carola-Bibiane Schönlieb, Angelica I Aviles-Rivero
- Abstract summary: We present FINO, a finite-difference-inspired neural architecture that enforces strict locality. FINO replaces fixed finite-difference stencil coefficients with learnable convolutional kernels. It achieves up to 44% lower error and up to roughly 2× speedups over state-of-the-art operator-learning baselines.
- Score: 20.49015396991881
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural operator models for solving partial differential equations (PDEs) often rely on global mixing mechanisms, such as spectral convolutions or attention, which tend to oversmooth sharp local dynamics and introduce high computational cost. We present FINO, a finite-difference-inspired neural architecture that enforces strict locality while retaining multiscale representational power. FINO replaces fixed finite-difference stencil coefficients with learnable convolutional kernels and evolves states via an explicit, learnable time-stepping scheme. A central Local Operator Block leverages a differential stencil layer, a gating mask, and a linear fuse step to construct adaptive derivative-like local features that propagate forward in time. Embedded in an encoder-decoder with a bottleneck, FINO captures fine-grained local structures while preserving interpretability. We establish (i) a composition error bound linking one-step approximation error to stable long-horizon rollouts under a Lipschitz condition, and (ii) a universal approximation theorem for discrete time-stepped PDE dynamics. (iii) Across six benchmarks and a climate modelling task, FINO achieves up to 44% lower error and up to roughly 2× speedups over state-of-the-art operator-learning baselines, demonstrating that strict locality with learnable time-stepping yields an accurate and scalable foundation for neural PDE solvers.
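The abstract does not give the exact update equations, but the Local Operator Block it describes (stencil convolution, gating mask, linear fuse, explicit time stepping) can be illustrated with a minimal sketch. All function names, parameter shapes, and the sigmoid gate are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def conv1d_same(u, kernel):
    """Cross-correlate u with a small stencil kernel (periodic boundary)."""
    k = len(kernel) // 2
    padded = np.concatenate([u[-k:], u, u[:k]])
    return np.array([np.dot(padded[i:i + len(kernel)], kernel)
                     for i in range(len(u))])

def local_operator_block(u, stencil, gate_w, gate_b, fuse_w, dt):
    """One explicit time step built from strictly local pieces: a learnable
    stencil convolution, a pointwise sigmoid gating mask, and a linear fuse."""
    d = conv1d_same(u, stencil)                          # derivative-like local feature
    gate = 1.0 / (1.0 + np.exp(-(gate_w * u + gate_b)))  # pointwise gating mask
    fused = fuse_w * (gate * d)                          # linear fuse of gated features
    return u + dt * fused                                # explicit time-stepping update

# With the classical [1, -2, 1] second-difference stencil and a fully open
# gate, the block reduces to one explicit Euler step of the heat equation.
u0 = np.sin(2 * np.pi * np.arange(64) / 64)
u1 = local_operator_block(u0, np.array([1.0, -2.0, 1.0]),
                          gate_w=0.0, gate_b=10.0, fuse_w=1.0, dt=0.1)
```

In FINO the stencil, gate, and fuse weights would be trained end to end; the sketch only shows why such a block stays strictly local while still recovering classical finite-difference schemes as a special case.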
Related papers
- DInf-Grid: A Neural Differential Equation Solver with Differentiable Feature Grids [73.28614344779076]
We present a differentiable grid-based representation for efficiently solving differential equations (DEs). Our results demonstrate a 5-20x speed-up over coordinate-based methods, solving differential equations in seconds or minutes while maintaining comparable accuracy and compactness.
arXiv Detail & Related papers (2026-01-15T18:59:57Z) - The Best of Both Worlds: Hybridizing Neural Operators and Solvers for Stable Long-Horizon Inference [0.0]
ANCHOR is an online, instance-aware hybrid inference framework for stable long-horizon prediction of PDEs. We show that ANCHOR reliably bounds long-horizon error growth, stabilizes extrapolative rollouts, and significantly improves robustness over standalone neural operators.
arXiv Detail & Related papers (2025-12-22T18:17:28Z) - CFO: Learning Continuous-Time PDE Dynamics via Flow-Matched Neural Operators [9.273461312644345]
Continuous Flow Operator (CFO) learns continuous-time PDE dynamics without the computational burden of standard continuous approaches, e.g., neural ODEs. CFO fits temporal splines to trajectory data, using finite-difference estimates of time derivatives at knots to construct probability paths whose velocities closely approximate the true PDE dynamics. A neural operator is then trained via flow matching to predict these analytic velocity fields.
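The finite-difference derivative estimates at trajectory knots that CFO uses as velocity targets can be sketched as follows. The function name and the choice of central differences inside with one-sided differences at the ends are assumptions for illustration, not CFO's actual estimator:

```python
import numpy as np

def knot_velocities(trajectory, dt):
    """Finite-difference estimates of du/dt at the knots of a sampled
    trajectory: central differences inside, one-sided at the endpoints.
    Estimates like these can serve as velocity targets for flow matching."""
    v = np.empty_like(trajectory)
    v[1:-1] = (trajectory[2:] - trajectory[:-2]) / (2 * dt)  # central
    v[0] = (trajectory[1] - trajectory[0]) / dt              # forward
    v[-1] = (trajectory[-1] - trajectory[-2]) / dt           # backward
    return v

# On u(t) = t**2 sampled at dt = 0.1, central differences recover
# du/dt = 2t exactly at every interior knot.
t = np.arange(0.0, 1.0, 0.1)
v = knot_velocities(t**2, 0.1)
```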
arXiv Detail & Related papers (2025-12-04T22:33:29Z) - Progressive Localisation in Localist LLMs [0.0]
This paper demonstrates that progressive localization represents the optimal architecture for creating interpretable large language models (LLMs). We investigate whether interpretability constraints can be aligned with natural semantic structure while being applied strategically across network depth. We show that progressive semantic localization, combining semantic block structure with steep adaptive locality schedules, achieves near-baseline language modeling performance while providing interpretable attention patterns.
arXiv Detail & Related papers (2025-11-23T09:49:13Z) - Enabling Local Neural Operators to perform Equation-Free System-Level Analysis [1.2468700211588881]
Neural Operators (NOs) provide a powerful framework for computations involving physical laws. We propose and implement a framework that integrates (local) NOs with advanced iterative numerical methods in the Krylov subspace. We illustrate our framework via three nonlinear PDE benchmarks.
arXiv Detail & Related papers (2025-05-05T01:17:18Z) - Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum [78.27945336558987]
Decentralized federated learning (DFL) eliminates reliance on the client-server architecture. Non-smooth regularization is often incorporated into machine learning tasks. We propose a novel DNCFL algorithm to solve these problems.
arXiv Detail & Related papers (2025-04-17T08:32:25Z) - A domain decomposition-based autoregressive deep learning model for unsteady and nonlinear partial differential equations [2.7755345520127936]
We propose a domain-decomposition-based deep learning (DL) framework, named CoMLSim, for accurately modeling unsteady and nonlinear partial differential equations (PDEs). The framework consists of two key components: (a) a convolutional neural network (CNN)-based autoencoder architecture and (b) an autoregressive model composed of fully connected layers.
arXiv Detail & Related papers (2024-08-26T17:50:47Z) - Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Convergence of mean-field Langevin dynamics: Time and space
discretization, stochastic gradient, and variance reduction [49.66486092259376]
The mean-field Langevin dynamics (MFLD) is a nonlinear generalization of the Langevin dynamics that incorporates a distribution-dependent drift.
Recent works have shown that MFLD globally minimizes an entropy-regularized convex functional in the space of measures.
We provide a framework to prove a uniform-in-time propagation of chaos for MFLD that takes into account the errors due to finite-particle approximation, time-discretization, and gradient approximation.
arXiv Detail & Related papers (2023-06-12T16:28:11Z) - Global-to-Local Modeling for Video-based 3D Human Pose and Shape
Estimation [53.04781510348416]
Video-based 3D human pose and shape estimations are evaluated by intra-frame accuracy and inter-frame smoothness.
We propose to structurally decouple the modeling of long-term and short-term correlations in an end-to-end framework, Global-to-Local Transformer (GLoT).
Our GLoT surpasses previous state-of-the-art methods with the lowest model parameters on popular benchmarks, i.e., 3DPW, MPI-INF-3DHP, and Human3.6M.
arXiv Detail & Related papers (2023-03-26T14:57:49Z) - Semi-supervised Learning of Partial Differential Operators and Dynamical
Flows [68.77595310155365]
We present a novel method that combines a hyper-network solver with a Fourier Neural Operator architecture.
We test our method on various time evolution PDEs, including nonlinear fluid flows in one, two, and three spatial dimensions.
The results show that the new method improves learning accuracy at the time point of supervision and is able to interpolate the solutions at any intermediate time.
arXiv Detail & Related papers (2022-07-28T19:59:14Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
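The claim that message passing solvers representationally contain classical methods such as finite differences can be made concrete with a small sketch. The aggregation function and grid setup below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def message_passing_step(u, edges, message_fn):
    """Generic message passing aggregation: each node i sums the messages
    computed from its own state and each neighbor j's state."""
    agg = np.zeros_like(u)
    for i, j in edges:                 # directed edge j -> i
        agg[i] += message_fn(u[i], u[j])
    return agg

# On a 1D periodic grid, choosing the message (u_j - u_i) / h**2 makes the
# aggregation exactly the second-order finite-difference Laplacian, so a
# learned message function strictly generalizes this classical stencil.
n, h = 8, 1.0
edges = ([(i, (i + 1) % n) for i in range(n)] +
         [(i, (i - 1) % n) for i in range(n)])
laplacian_msg = lambda ui, uj: (uj - ui) / h**2
u = np.arange(n, dtype=float) ** 2
lap = message_passing_step(u, edges, laplacian_msg)
```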
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.