Physics-Informed Chebyshev Polynomial Neural Operator for Parametric Partial Differential Equations
- URL: http://arxiv.org/abs/2602.01737v1
- Date: Mon, 02 Feb 2026 07:19:56 GMT
- Title: Physics-Informed Chebyshev Polynomial Neural Operator for Parametric Partial Differential Equations
- Authors: Biao Chen, Jing Wang, Hairun Xie, Qineng Wang, Shuai Zhang, Yifan Xia, Jifa Zhang,
- Abstract summary: We introduce the Physics-Informed Chebyshev Polynomial Neural Operator (CPNO). CPNO replaces unstable monomial expansions with a numerically stable Chebyshev spectral basis. Experiments on benchmark parameterized PDEs show that CPNO achieves superior accuracy, faster convergence, and enhanced robustness to hyperparameters.
- Score: 17.758049557300826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural operators have emerged as powerful deep learning frameworks for approximating solution operators of parameterized partial differential equations (PDEs). However, current methods predominantly rely on multilayer perceptrons (MLPs) for mapping inputs to solutions, which impairs training robustness in physics-informed settings due to inherent spectral biases and fixed activation functions. To overcome these architectural limitations, we introduce the Physics-Informed Chebyshev Polynomial Neural Operator (CPNO), a novel mesh-free framework that leverages a basis transformation to replace unstable monomial expansions with the numerically stable Chebyshev spectral basis. By integrating a parameter-dependent modulation mechanism into the main network, CPNO constructs PDE solutions in a near-optimal functional space, decoupling the model from MLP-specific constraints and enhancing multi-scale representation. Theoretical analysis demonstrates the Chebyshev basis's near-minimax uniform approximation properties and superior conditioning, with Lebesgue constants growing logarithmically with degree, thereby mitigating spectral bias and ensuring stable gradient flow during optimization. Numerical experiments on benchmark parameterized PDEs show that CPNO achieves superior accuracy, faster convergence, and enhanced robustness to hyperparameters. A transonic airfoil flow experiment demonstrates the capability of CPNO to characterize problems with complex geometry.
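The abstract's two core ingredients can be sketched in a few lines: a Chebyshev basis evaluated by its stable three-term recurrence (T_0(x) = 1, T_1(x) = x, T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x), whose interpolation Lebesgue constant grows only logarithmically with the degree), and a parameter-dependent modulation of the expansion coefficients. The snippet below is not the authors' CPNO implementation; it is a minimal NumPy illustration, and the names `chebyshev_features`, `modulated_expansion`, and the affine form of the modulation are assumptions made for this example.

```python
# Minimal sketch (not the authors' CPNO code): approximate a 1D field u(x; mu)
# as a Chebyshev expansion whose coefficients are modulated by a PDE parameter mu.
import numpy as np

def chebyshev_features(x, degree):
    """Evaluate T_0..T_degree at points x in [-1, 1] via the stable three-term recurrence."""
    T = np.empty((degree + 1, x.size))
    T[0] = 1.0
    if degree >= 1:
        T[1] = x
    for k in range(1, degree):
        # T_{k+1}(x) = 2 x T_k(x) - T_{k-1}(x)
        T[k + 1] = 2.0 * x * T[k] - T[k - 1]
    return T  # shape: (degree + 1, n_points)

def modulated_expansion(x, mu, base_coeffs, gain, bias):
    """u(x; mu) = sum_k c_k(mu) T_k(x), with c_k(mu) = (1 + gain_k * mu) * base_k + bias_k * mu.

    The affine dependence on mu is a stand-in for the learned, parameter-dependent
    modulation described in the abstract, not the paper's actual mechanism.
    """
    c = (1.0 + gain * mu) * base_coeffs + bias * mu
    return c @ chebyshev_features(x, degree=len(base_coeffs) - 1)

# Toy usage: a hand-picked expansion evaluated for two parameter values.
x = np.linspace(-1.0, 1.0, 5)
base = np.array([0.5, 0.0, -0.25, 0.1])   # c_k at mu = 0 (illustrative values)
gain = np.array([0.0, 0.2, 0.1, 0.05])
bias = np.array([0.0, 0.1, 0.0, 0.02])
for mu in (0.0, 1.0):
    print(mu, modulated_expansion(x, mu, base, gain, bias))
```

In CPNO, the coefficients and their modulation would be produced by trained networks and fitted against a physics-informed (PDE-residual) loss rather than hand-picked as above.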
Related papers
- Variational (Energy-Based) Spectral Learning: A Machine Learning Framework for Solving Partial Differential Equations [0.0]
We introduce variational spectral learning (VSL), a machine learning framework for solving partial differential equations (PDEs). VSL offers a principled bridge between variational PDE theory, spectral discretization, and contemporary machine learning practice.
arXiv Detail & Related papers (2026-01-05T19:03:58Z) - Fast spectral separation method for kinetic equation with anisotropic non-stationary collision operator retaining micro-model fidelity [13.462104954140088]
We present a data-driven collisional operator for one-component plasmas, learned from molecular dynamics simulations. The proposed operator features an anisotropic, non-stationary collision kernel that accounts for particle correlations. Numerical experiments demonstrate that the proposed model accurately captures plasma dynamics in the moderately coupled regime.
arXiv Detail & Related papers (2025-10-16T19:27:03Z) - A Variational Physics-Informed Neural Network Framework Using Petrov-Galerkin Method for Solving Singularly Perturbed Boundary Value Problems [14.126509388112302]
This work proposes a framework that integrates the Petrov-Galerkin formulation with deep neural networks (DNNs). It solves one-dimensional singularly perturbed boundary value problems (BVPs) and parabolic partial differential equations (PDEs) involving one or two small parameters.
arXiv Detail & Related papers (2025-09-13T18:25:00Z) - Gaussian process surrogate with physical law-corrected prior for multi-coupled PDEs defined on irregular geometry [3.3798563347021093]
Parametric partial differential equations (PDEs) are fundamental mathematical tools for modeling complex physical systems. We propose a novel physical law-corrected prior Gaussian process (LC-prior GP) surrogate modeling framework.
arXiv Detail & Related papers (2025-09-01T02:40:32Z) - Grassmann Variational Monte Carlo with neural wave functions [45.935798913942904]
We formalize the framework introduced by Pfau et al. (2024) in terms of the Grassmann geometry of the Hilbert space. We validate our approach on the Heisenberg quantum spin model on the square lattice, achieving highly accurate energies and physical observables for a large number of excited states.
arXiv Detail & Related papers (2025-07-14T13:53:13Z) - KITINet: Kinetics Theory Inspired Network Architectures with PDE Simulation Approaches [43.872190335490515]
This paper introduces KITINet, a novel architecture that reinterprets feature propagation through the lens of non-equilibrium particle dynamics. At its core, we propose a residual module that models feature updates as the evolution of a particle system. This formulation mimics particle collisions and energy exchange, enabling adaptive feature refinement via physics-informed interactions.
arXiv Detail & Related papers (2025-05-23T13:58:29Z) - KO: Kinetics-inspired Neural Optimizer with PDE Simulation Approaches [45.173398806932376]
This paper introduces KO, a novel neural optimizer inspired by kinetic theory and partial differential equation (PDE) simulations. We reimagine the dynamics of network parameters as the evolution of a particle system governed by kinetic principles. This physics-driven approach inherently promotes parameter diversity during optimization, mitigating the phenomenon of parameter condensation.
arXiv Detail & Related papers (2025-05-20T18:00:01Z) - Preconditioned FEM-based Neural Networks for Solving Incompressible Fluid Flows and Related Inverse Problems [41.94295877935867]
The numerical simulation and optimization of technical systems described by partial differential equations is expensive. A comparatively new approach in this context is to combine the good approximation properties of neural networks with the classical finite element method. In this paper, we extend this approach to saddle-point and non-linear fluid dynamics problems, respectively.
arXiv Detail & Related papers (2024-09-06T07:17:01Z) - Enhancing Solutions for Complex PDEs: Introducing Complementary Convolution and Equivariant Attention in Fourier Neural Operators [17.91230192726962]
We propose a novel hierarchical Fourier neural operator along with convolution-residual layers and attention mechanisms to solve complex PDEs.
We find that the proposed method achieves superior performance in these PDE benchmarks, especially for equations characterized by rapid coefficient variations.
arXiv Detail & Related papers (2023-11-21T11:04:13Z) - A Deep Unrolling Model with Hybrid Optimization Structure for Hyperspectral Image Deconvolution [50.13564338607482]
We propose a novel optimization framework for the hyperspectral deconvolution problem, called DeepMix. It consists of three distinct modules, namely, a data consistency module, a module that enforces the effect of the handcrafted regularizers, and a denoising module. This work proposes a context-aware denoising module designed to sustain the advancements achieved by the cooperative efforts of the other modules.
arXiv Detail & Related papers (2023-06-10T08:25:16Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers. We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles. Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes (a minimal sketch of this containment appears after this list).
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
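The sketch referenced in the Message Passing Neural PDE Solvers entry above: a single linear message-passing step on a 1D chain can reproduce a centered finite-difference update exactly, which illustrates (but does not reproduce) the paper's claim that such solvers representationally contain classical schemes. The function `message_passing_step` and the chosen weights are assumptions for this example, not code from the paper.

```python
# Illustration only: a linear message-passing step that coincides with an
# explicit-Euler finite-difference step for the heat equation u_t = nu * u_xx.
import numpy as np

def message_passing_step(u, edge_weight, self_weight):
    """Each node sums weighted messages from its two neighbours, then combines
    them with its own state. Periodic boundary conditions for simplicity."""
    left, right = np.roll(u, 1), np.roll(u, -1)
    messages = edge_weight * (left + right)   # aggregate neighbour messages
    return self_weight * u + messages         # node update

# Weights taken from the explicit-Euler heat-equation stencil.
nu, dx, dt = 0.1, 0.05, 1e-3
r = nu * dt / dx**2
u0 = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 20, endpoint=False))
u_mp = message_passing_step(u0, edge_weight=r, self_weight=1.0 - 2.0 * r)
u_fd = u0 + r * (np.roll(u0, -1) - 2.0 * u0 + np.roll(u0, 1))
print(np.allclose(u_mp, u_fd))  # True: the message-passing step equals the FD step
```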