SVD-NO: Learning PDE Solution Operators with SVD Integral Kernels
- URL: http://arxiv.org/abs/2511.10025v1
- Date: Fri, 14 Nov 2025 01:26:58 GMT
- Title: SVD-NO: Learning PDE Solution Operators with SVD Integral Kernels
- Authors: Noam Koren, Ralf J. J. Mackenbach, Ruud J. G. van Sloun, Kira Radinsky, Daniel Freedman
- Abstract summary: We present SVD-NO, a neural operator that parameterizes the kernel by its singular-value decomposition (SVD) and then carries out the integral directly in the low-rank basis. As SVD-NO approximates the full kernel, it obtains a high degree of expressivity.
- Score: 35.16133249685271
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural operators have emerged as a promising paradigm for learning solution operators of partial differential equations (PDEs) directly from data. Existing methods, such as those based on Fourier or graph techniques, make strong assumptions about the structure of the kernel integral operator, assumptions which may limit expressivity. We present SVD-NO, a neural operator that explicitly parameterizes the kernel by its singular-value decomposition (SVD) and then carries out the integral directly in the low-rank basis. Two lightweight networks learn the left and right singular functions, a diagonal parameter matrix learns the singular values, and a Gram-matrix regularizer enforces orthonormality. As SVD-NO approximates the full kernel, it obtains a high degree of expressivity. Furthermore, due to its low-rank structure the computational complexity of applying the operator remains reasonable, leading to a practical system. In extensive evaluations on five diverse benchmark equations, SVD-NO achieves a new state of the art. In particular, SVD-NO provides greater performance gains on PDEs whose solutions are highly spatially variable. The code of this work is publicly available at https://github.com/2noamk/SVDNO.git.
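The construction admits a compact sketch. Below is a minimal PyTorch illustration of the idea, assuming a rank-$R$ kernel $k(x,y) \approx \sum_r \sigma_r \phi_r(x) \psi_r(y)$ applied on a uniform grid; the network sizes, the positivity trick for the singular values, and the exact penalty form are our own assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class SVDKernelOperator(nn.Module):
    """Minimal sketch of an SVD-parameterized kernel integral operator.

    The kernel is modeled as k(x, y) ~= sum_r sigma_r * phi_r(x) * psi_r(y),
    so (K u)(x) = integral of k(x, y) u(y) dy is applied entirely in the
    rank-R basis. Widths, rank, and the positivity trick for the singular
    values are illustrative assumptions, not the paper's configuration.
    """

    def __init__(self, coord_dim: int = 1, rank: int = 16, width: int = 64):
        super().__init__()
        def mlp() -> nn.Module:
            return nn.Sequential(nn.Linear(coord_dim, width), nn.GELU(),
                                 nn.Linear(width, rank))
        self.phi = mlp()                                  # left singular functions
        self.psi = mlp()                                  # right singular functions
        self.log_sigma = nn.Parameter(torch.zeros(rank))  # diagonal singular values

    def forward(self, x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        # x: (N, coord_dim) grid points, u: (N,) samples of the input function.
        Phi, Psi = self.phi(x), self.psi(x)            # (N, R) each
        coeffs = (Psi * u[:, None]).mean(dim=0)        # ~ integral psi_r(y) u(y) dy
        return Phi @ (self.log_sigma.exp() * coeffs)   # (N,) output samples

    def gram_penalty(self, x: torch.Tensor) -> torch.Tensor:
        # Push Phi^T Phi / N and Psi^T Psi / N toward the identity so the
        # learned functions behave like orthonormal singular functions.
        eye = torch.eye(self.log_sigma.numel(), device=x.device)
        pen = x.new_zeros(())
        for net in (self.phi, self.psi):
            basis = net(x)                             # (N, R)
            pen = pen + ((basis.T @ basis / x.shape[0] - eye) ** 2).sum()
        return pen
```

Training would minimize the data-fitting loss plus a weighted `gram_penalty` term; applying the operator costs O(NR) rather than the O(N^2) of a dense kernel matrix, which is the low-rank advantage the abstract describes.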
Related papers
- A Kernel-based Stochastic Approximation Framework for Nonlinear Operator Learning [7.820614736576814]
We develop a framework for approximations between infinite-dimensional spaces using general Mercer operator-valued kernels. Within this framework, we establish dimension-free convergence rates, demonstrating that nonlinear operator learning can overcome the curse of dimensionality. This framework accommodates a wide range of operator learning tasks, ranging from integral operators to architectures based on encoder-decoder representations.
arXiv Detail & Related papers (2025-09-14T03:33:36Z)
- Efficient Parametric SVD of Koopman Operator for Stochastic Dynamical Systems [51.54065545849027]
The Koopman operator provides a principled framework for analyzing nonlinear dynamical systems. VAMPnet and DPNet have been proposed to learn the leading singular subspaces of the Koopman operator. We propose a scalable and conceptually simple method for learning the top-$k$ singular functions of the Koopman operator.
arXiv Detail & Related papers (2025-07-09T18:55:48Z)
- Redefining Neural Operators in $d+1$ Dimensions [16.36239488071517]
We redefine neural operators on a new $d+1$ dimensional domain based on the Schrödingerisation method. Within this framework, we implement a Schrödingerised Kernel Neural Operator (SKNO) that aligns better with the $d+1$ dimensional evolution. In experiments, the $d+1$ dimensional evolving designs in our SKNO consistently outperform other baselines.
arXiv Detail & Related papers (2025-05-17T00:15:00Z)
- DimINO: Dimension-Informed Neural Operator Learning [41.37905663176428]
DimINO is a framework inspired by dimensional analysis. It can be seamlessly integrated into existing neural operator architectures. It achieves up to a 76.3% performance gain on PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Kernel Neural Operators (KNOs) for Scalable, Memory-efficient, Geometrically-flexible Operator Learning [11.165984223900104]
This paper introduces the Kernel Neural Operator (KNO), a provably convergent operator-learning architecture. The KNO decouples the choice of kernel from the numerical integration scheme (quadrature). Numerical results demonstrate that on existing benchmarks the training and test accuracy of KNOs is comparable to or higher than that of popular operator learning techniques.
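The decoupling this summary describes can be sketched in a few lines: the kernel and the quadrature rule are chosen independently. The RBF kernel and trapezoidal weights below are our own illustrative choices, not the KNO paper's.

```python
import torch

def kernel_integral(k, x, y, w, u):
    # (K u)(x_i) = sum_j w_j * k(x_i, y_j) * u(y_j): the kernel k and the
    # quadrature rule (y, w) are chosen independently of each other.
    K = k(x[:, None], y[None, :])       # (Nx, Ny) kernel matrix
    return K @ (w * u)                  # weighted sum over quadrature nodes

# Learnable-lengthscale RBF kernel with trapezoidal quadrature on [0, 1];
# both choices are our own assumptions for the illustration.
ell = torch.tensor(0.2, requires_grad=True)
rbf = lambda a, b: torch.exp(-(a - b) ** 2 / (2 * ell ** 2))

y = torch.linspace(0.0, 1.0, 101)
w = torch.full_like(y, 1 / 100)
w[0] = w[-1] = 1 / 200                  # trapezoid rule endpoint weights
u = torch.sin(torch.pi * y)
v = kernel_integral(rbf, y, y, w, u)    # operator output at the same nodes
```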
arXiv Detail & Related papers (2024-06-30T19:28:12Z)
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
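The differential-operator claim has a familiar special case: a fixed convolution stencil scaled by the grid spacing is a finite-difference approximation. The sketch below is our simplified one-dimensional illustration, not the paper's construction.

```python
import torch
import torch.nn.functional as F

def second_derivative(u: torch.Tensor, h: float) -> torch.Tensor:
    # Central-difference stencil [1, -2, 1] / h^2 applied as a 1-D
    # convolution: scaling the kernel values by the grid spacing turns
    # the convolution into an approximation of d^2 u / dx^2.
    stencil = torch.tensor([[[1.0, -2.0, 1.0]]]) / h ** 2   # (out, in, width)
    return F.conv1d(u.view(1, 1, -1), stencil).flatten()    # interior points only

h = 0.01
x = torch.arange(0.0, 1.0, h)
u = torch.sin(torch.pi * x)
d2u = second_derivative(u, h)    # ~ -pi^2 * sin(pi * x) away from the boundary
```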
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
- GIT-Net: Generalized Integral Transform for Operator Learning [58.13313857603536]
This article introduces GIT-Net, a deep neural network architecture for approximating Partial Differential Equation (PDE) operators.
GIT-Net harnesses the fact that differential operators commonly used for defining PDEs can often be represented parsimoniously when expressed in specialized functional bases.
Numerical experiments demonstrate that GIT-Net is a competitive neural network operator, exhibiting small test errors and low evaluation costs across a range of PDE problems.
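As a toy instance of the "parsimonious representation in a specialized functional basis" idea, differentiation of a periodic function acts diagonally in the Fourier basis; the sketch below is our own example, not GIT-Net's architecture.

```python
import torch

def spectral_derivative(u: torch.Tensor, length: float = 1.0) -> torch.Tensor:
    # In the Fourier basis, d/dx is diagonal: each mode with frequency
    # f (cycles per unit length) is multiplied by 2*pi*i*f.
    n = u.shape[-1]
    f = torch.fft.fftfreq(n, d=length / n)
    return torch.fft.ifft(2j * torch.pi * f * torch.fft.fft(u)).real

x = torch.linspace(0.0, 1.0, 129)[:-1]                 # periodic grid on [0, 1)
du = spectral_derivative(torch.sin(2 * torch.pi * x))  # ~ 2*pi*cos(2*pi*x)
```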
arXiv Detail & Related papers (2023-12-05T03:03:54Z)
- Convolutional Neural Operators for robust and accurate learning of PDEs [11.562748612983956]
We present novel adaptations for convolutional neural networks to process functions as inputs and outputs.
The resulting architecture is termed convolutional neural operators (CNOs).
We prove a universality theorem to show that CNOs can approximate operators arising in PDEs to desired accuracy.
arXiv Detail & Related papers (2023-02-02T15:54:45Z)
- Why Approximate Matrix Square Root Outperforms Accurate SVD in Global Covariance Pooling? [59.820507600960745]
We propose a new GCP meta-layer that uses SVD in the forward pass and Padé approximants in the backward propagation to compute the gradients. The proposed meta-layer has been integrated into different CNN models and achieves state-of-the-art performance on both large-scale and fine-grained datasets.
arXiv Detail & Related papers (2021-05-06T08:03:45Z)
- Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification [53.50708351813565]
We propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD at every step. We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a greater reduction in computation load at the same accuracy.
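The idea can be sketched as a linear layer kept in factored form throughout training, with an orthogonality penalty on the factors and an L1 penalty on the singular values; the shapes and penalty weights below are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class SVDLinear(nn.Module):
    """Linear layer kept in factored form W = U diag(s) V^T during training.

    An orthogonality penalty on U, V and an L1 penalty on s encourage low
    rank to emerge without running an SVD at every step. Initialization
    and penalty weights are illustrative assumptions.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        r = min(in_features, out_features)
        self.U = nn.Parameter(torch.randn(out_features, r) / r ** 0.5)
        self.V = nn.Parameter(torch.randn(in_features, r) / r ** 0.5)
        self.s = nn.Parameter(torch.ones(r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return ((x @ self.V) * self.s) @ self.U.T

    def penalty(self, ortho_w: float = 1e-2, l1_w: float = 1e-3) -> torch.Tensor:
        # Orthogonality of the factors plus sparsity of the singular values.
        eye = torch.eye(self.s.numel())
        ortho = ((self.U.T @ self.U - eye) ** 2).sum() \
              + ((self.V.T @ self.V - eye) ** 2).sum()
        return ortho_w * ortho + l1_w * self.s.abs().sum()
```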
arXiv Detail & Related papers (2020-04-20T02:40:43Z)