Walsh-Hadamard Neural Operators for Solving PDEs with Discontinuous Coefficients
- URL: http://arxiv.org/abs/2511.07347v1
- Date: Mon, 10 Nov 2025 17:49:20 GMT
- Title: Walsh-Hadamard Neural Operators for Solving PDEs with Discontinuous Coefficients
- Authors: Giorgio M. Cavallazzi, Miguel Perez Cuadrado, Alfredo Pinelli
- Abstract summary: Neural operators have emerged as powerful tools for learning solution operators of partial differential equations. We introduce the Walsh-Hadamard Neural Operator (WHNO), which leverages Walsh-Hadamard transforms to capture global dependencies efficiently.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural operators have emerged as powerful tools for learning solution operators of partial differential equations (PDEs). However, standard spectral methods based on Fourier transforms struggle with problems involving discontinuous coefficients due to the Gibbs phenomenon and poor representation of sharp interfaces. We introduce the Walsh-Hadamard Neural Operator (WHNO), which leverages Walsh-Hadamard transforms (a spectral basis of rectangular wave functions naturally suited to piecewise constant fields) combined with learnable spectral weights that transform low-sequency Walsh coefficients to capture global dependencies efficiently. We validate WHNO on three problems: steady-state Darcy flow (preliminary validation), heat conduction with discontinuous thermal conductivity, and the 2D Burgers equation with discontinuous initial conditions. In controlled comparisons with Fourier Neural Operators (FNO) under identical conditions, WHNO demonstrates superior accuracy with better preservation of sharp solution features at material interfaces. Critically, we discover that weighted ensemble combinations of WHNO and FNO achieve substantial improvements over either model alone: for both heat conduction and the Burgers equation, optimal ensembles reduce mean squared error by 35-40 percent and maximum error by up to 25 percent compared to individual models. This demonstrates that Walsh-Hadamard and Fourier representations capture complementary aspects of discontinuous PDE solutions, with WHNO excelling at sharp interfaces while FNO captures smooth features effectively.
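The abstract's core mechanism (transform to the Walsh-Hadamard basis, scale low-order coefficients with spectral weights, transform back) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the Hadamard matrix uses natural rather than sequency ordering, and the spectral weights are fixed arrays here where WHNO would learn them.

```python
# Hypothetical sketch of a Walsh-Hadamard spectral layer in the spirit of
# WHNO (not the authors' code): map a field to the Walsh-Hadamard basis,
# scale its leading coefficients by "spectral weights", and map back.
import numpy as np

def hadamard_matrix(n):
    """Orthonormal Hadamard matrix of size n (a power of two), built by
    Kronecker products in natural ordering; a true sequency ("Walsh")
    ordering would additionally permute the rows."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
    return H / np.sqrt(n)

def wh_layer(x, weights, modes):
    """Scale the leading (modes x modes) Walsh-Hadamard coefficients of x.
    `weights` stands in for the learnable spectral weights; here it is
    just a fixed (modes, modes) array for illustration."""
    H = hadamard_matrix(x.shape[0])
    coeffs = H @ x @ H                 # forward 2D transform (H symmetric)
    coeffs[:modes, :modes] *= weights  # mix the retained coefficients
    return H @ coeffs @ H              # H is its own inverse

# A piecewise-constant field with a sharp interface is represented exactly
# by only a few Walsh functions, unlike a truncated Fourier basis.
n, modes = 8, 4
x = np.zeros((n, n)); x[:4, :4] = 1.0
y = wh_layer(x, np.ones((modes, modes)), modes)
print(np.allclose(x, y))  # identity weights reconstruct the field: True
```

Because the Hadamard matrix is symmetric and involutory, the inverse transform reuses the same matrix, which keeps the round trip exact for identity weights.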
Related papers
- Derivative-Informed Fourier Neural Operator: Universal Approximation and Applications to PDE-Constrained Optimization [0.9236074230806578]
We present approximation theories and efficient training methods for derivative-informed Fourier neural operators (DIFNOs). A DIFNO is trained by minimizing its prediction error jointly on output and Fréchet derivative samples of a high-fidelity operator. We show that DIFNOs are superior in sample complexity for operator learning and solving infinite-dimensional PDE-constrained inverse problems.
arXiv Detail & Related papers (2025-12-16T04:54:24Z) - Analysis of Fourier Neural Operators via Effective Field Theory [11.824913874212802]
We present a systematic effective field theory analysis of FNOs in an infinite-dimensional function space. We show that nonlinear activations inevitably couple frequency inputs to high-frequency modes that are otherwise discarded by spectral truncation. Our results quantify how nonlinearity enables neural operators to capture non-trivial features and explain why scale-invariant activations and residual connections enhance feature learning in FNOs.
arXiv Detail & Related papers (2025-07-29T14:10:46Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - Enhancing Solutions for Complex PDEs: Introducing Complementary Convolution and Equivariant Attention in Fourier Neural Operators [17.91230192726962]
We propose a novel hierarchical Fourier neural operator along with convolution-residual layers and attention mechanisms to solve complex PDEs.
We find that the proposed method achieves superior performance in these PDE benchmarks, especially for equations characterized by rapid coefficient variations.
arXiv Detail & Related papers (2023-11-21T11:04:13Z) - Differentiable DG with Neural Operator Source Term Correction [0.0]
We introduce an end-to-end differentiable framework for solving the compressible Navier-Stokes equations. This integrated approach combines a differentiable discontinuous Galerkin solver with a neural network source term. We demonstrate the performance of the proposed framework through two examples.
arXiv Detail & Related papers (2023-10-29T04:26:23Z) - Coupled Multiwavelet Neural Operator Learning for Coupled Partial Differential Equations [12.78654908572053]
We propose a coupled multiwavelet neural operator (CMWNO) learning scheme by decoupling the coupled integral kernels. The proposed model achieves significantly higher accuracy compared to previous learning-based solvers.
arXiv Detail & Related papers (2023-03-04T03:06:47Z) - FC-PINO: High Precision Physics-Informed Neural Operators via Fourier Continuation [60.706803227003995]
We introduce the FC-PINO (Fourier-Continuation-based PINO) architecture, which extends the accuracy and efficiency of PINO to non-periodic and non-smooth PDEs. We demonstrate that standard PINO struggles to solve non-periodic and non-smooth PDEs with high precision across challenging benchmarks. In contrast, the proposed FC-PINO provides accurate, robust, and scalable solutions, substantially outperforming PINO alternatives.
arXiv Detail & Related papers (2022-11-29T06:37:54Z) - Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs [86.35471039808023]
We introduce the Incremental Fourier Neural Operator (iFNO), which progressively increases the number of frequency modes used by the model.
We show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets.
Our method demonstrates a 10% lower testing error, using 20% fewer frequency modes compared to the existing Fourier Neural Operator, while also achieving a 30% faster training.
arXiv Detail & Related papers (2022-11-28T09:57:15Z) - Structural aspects of FRG in quantum tunnelling computations [68.8204255655161]
We probe both the unidimensional quartic harmonic oscillator and the double well potential.
Two partial differential equations, one for the potential V_k(φ) and one for the wave-function renormalization Z_k(φ), are studied.
arXiv Detail & Related papers (2022-06-14T15:23:25Z) - Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z) - Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.