KROM: Kernelized Reduced Order Modeling
- URL: http://arxiv.org/abs/2603.00360v1
- Date: Fri, 27 Feb 2026 22:52:22 GMT
- Title: KROM: Kernelized Reduced Order Modeling
- Authors: Aras Bacho, Jonghyeon Lee, Houman Owhadi
- Abstract summary: KROM formulates PDE solution as a minimum-norm (Gaussian-process) recovery problem in an RKHS. A central ingredient is an empirical kernel constructed from a snapshot library of PDE solutions.
- Score: 3.988493458010939
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose KROM, a kernel-based reduced-order framework for fast solution of nonlinear partial differential equations. KROM formulates PDE solution as a minimum-norm (Gaussian-process) recovery problem in an RKHS, and accelerates the resulting kernel solves by sparsifying the precision matrix via sparse Cholesky factorization. A central ingredient is an empirical kernel constructed from a snapshot library of PDE solutions (generated under varying forcings, initial data, boundary data, or parameters). This snapshot-driven kernel adapts to problem-specific structure -- boundary behavior, oscillations, nonsmooth features, linear constraints, conservation and dissipation laws -- thereby reducing the dependence on hand-tuned stationary kernels. The resulting method yields an implicit reduced model: after sparsification, only a localized subset of effective degrees of freedom is used online. We report numerical results for semilinear elliptic equations, discontinuous-coefficient Darcy flow, viscous Burgers, Allen--Cahn, and two-dimensional Navier--Stokes, showing that empirical kernels can match or outperform Matérn baselines, especially in nonsmooth regimes. We also provide error bounds that separate discretization effects, snapshot-space approximation error, and sparse-Cholesky approximation error.
Related papers
- Graph-based Clustering Revisited: A Relaxation of Kernel $k$-Means Perspective [73.18641268511318]
We propose a graph-based clustering algorithm that relaxes only the orthonormal constraint to derive clustering results. To preserve the doubly stochastic structure under gradient-based optimization, we transform the non-negative constraint into a class probability parameter.
arXiv Detail & Related papers (2025-09-23T09:14:39Z) - Nonparametric learning of stochastic differential equations from sparse and noisy data [2.389598109913754]
We learn the entire drift function directly from data without strong structural assumptions. We develop an Expectation-Maximization (EM) algorithm that employs a novel Sequential Monte Carlo (SMC) method. The resulting EM-SMC-RKHS procedure enables accurate estimation of the drift function of dynamical systems in low-data regimes.
arXiv Detail & Related papers (2025-08-15T17:01:59Z) - Inertial Quadratic Majorization Minimization with Application to Kernel Regularized Learning [1.0282274843007797]
We introduce the Quadratic Majorization Minimization with Extrapolation (QMME) framework and establish its sequential convergence properties. To demonstrate practical advantages, we apply QMME to large-scale kernel regularized learning problems.
arXiv Detail & Related papers (2025-07-06T05:17:28Z) - Generalization Bound of Gradient Flow through Training Trajectory and Data-dependent Kernel [55.82768375605861]
We establish a generalization bound for gradient flow that aligns with the classical Rademacher complexity for kernel methods. Unlike static kernels such as the NTK, the LPK captures the entire training trajectory, adapting to both data and optimization dynamics.
arXiv Detail & Related papers (2025-06-12T23:17:09Z) - Toward Efficient Kernel-Based Solvers for Nonlinear PDEs [19.975293084297014]
We introduce a novel kernel learning framework toward efficiently solving nonlinear partial differential equations (PDEs). In contrast to the state-of-the-art kernel solver that embeds differential operators within kernels, our approach eliminates these operators from the kernel. We model the solution using a standard kernel form and differentiate the interpolant to compute the derivatives.
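The "differentiate the interpolant" idea in this summary admits a minimal sketch: fit a plain kernel interpolant to samples, then obtain derivatives analytically from the kernel, rather than embedding the differential operator inside it. The Gaussian kernel, length scale, and test function below are assumptions for illustration, not this paper's actual configuration.

```python
import numpy as np

ell = 0.15
Xc = np.linspace(0.0, 1.0, 30)              # collocation points
y = np.sin(2 * np.pi * Xc)                  # stand-in solution samples

def k(a, b):
    # Gaussian (RBF) kernel.
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

def dk_dx(a, b):
    # Analytic derivative of the Gaussian kernel in its first argument.
    d = a[:, None] - b[None, :]
    return -(d / ell**2) * np.exp(-(d**2) / (2 * ell**2))

# Fit a standard kernel interpolant u(x) = sum_j alpha_j k(x, x_j),
# with a small nugget for numerical stability.
alpha = np.linalg.solve(k(Xc, Xc) + 1e-8 * np.eye(Xc.size), y)

# Differentiate the interpolant instead of the kernelized operator.
Xt = np.linspace(0.2, 0.8, 50)              # interior test points
du = dk_dx(Xt, Xc) @ alpha
err = np.max(np.abs(du - 2 * np.pi * np.cos(2 * np.pi * Xt)))
```

The recovered derivative tracks the exact one closely in the interior, which is what lets such solvers avoid placing differential operators inside the kernel itself.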
arXiv Detail & Related papers (2024-10-15T01:00:43Z) - Reduced-Order Neural Operators: Learning Lagrangian Dynamics on Highly Sparse Graphs [19.1312659245072]
We present GIOROM, a data-driven, discretization-invariant framework for accelerating Lagrangian simulations through reduced-order modeling (ROM). We leverage a data-driven graph-based neural approximation of the PDE solution operator. GIOROM achieves a 6.6$\times$-32$\times$ reduction in input dimensionality while maintaining high-fidelity reconstructions across diverse Lagrangian regimes.
arXiv Detail & Related papers (2024-07-04T13:37:26Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - Sparse Cholesky Factorization for Solving Nonlinear PDEs via Gaussian Processes [3.750429354590631]
We present a sparse Cholesky factorization algorithm for dense kernel matrices.
We numerically illustrate our algorithm's near-linear space/time complexity for a broad class of nonlinear PDEs.
arXiv Detail & Related papers (2023-04-03T18:35:28Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Flow-based Kernel Prior with Application to Blind Super-Resolution [143.21527713002354]
Kernel estimation is generally one of the key problems for blind image super-resolution (SR).
This paper proposes a normalizing flow-based kernel prior (FKP) for kernel modeling.
Experiments on synthetic and real-world images demonstrate that the proposed FKP can significantly improve the kernel estimation accuracy.
arXiv Detail & Related papers (2021-03-29T22:37:06Z) - A Bregman Method for Structure Learning on Sparse Directed Acyclic Graphs [84.7328507118758]
We develop a Bregman proximal gradient method for structure learning.
We measure the impact of curvature against a highly nonlinear iteration.
We test our method on various synthetic and real data sets.
arXiv Detail & Related papers (2020-11-05T11:37:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.