Learning Operators through Coefficient Mappings in Fixed Basis Spaces
- URL: http://arxiv.org/abs/2510.10350v1
- Date: Sat, 11 Oct 2025 21:47:48 GMT
- Title: Learning Operators through Coefficient Mappings in Fixed Basis Spaces
- Authors: Chuqi Chen, Yang Xiang, Weihong Zhang
- Abstract summary: We propose the Fixed-Basis Coefficient to Coefficient Operator Network (FB-C2CNet), which learns operators in the coefficient space induced by prescribed basis functions. FB-C2CNet achieves high accuracy and computational efficiency, showing its strong potential for practical operator learning tasks.
- Score: 8.02814277441424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Operator learning has emerged as a powerful paradigm for approximating solution operators of partial differential equations (PDEs) and other functional mappings. Classical approaches typically adopt a pointwise-to-pointwise framework, where input functions are sampled at prescribed locations and mapped directly to solution values. We propose the Fixed-Basis Coefficient to Coefficient Operator Network (FB-C2CNet), which learns operators in the coefficient space induced by prescribed basis functions. In this framework, the input function is projected onto a fixed set of basis functions (e.g., random features or finite element bases), and the neural operator predicts the coefficients of the solution function in the same or another basis. By decoupling basis selection from network training, FB-C2CNet reduces training complexity, enables systematic analysis of how basis choice affects approximation accuracy, and clarifies what properties of coefficient spaces (such as effective rank and coefficient variations) are critical for generalization. Numerical experiments on Darcy flow, Poisson equations in regular, complex, and high-dimensional domains, and elasticity problems demonstrate that FB-C2CNet achieves high accuracy and computational efficiency, showing its strong potential for practical operator learning tasks.
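The coefficient-to-coefficient pipeline described in the abstract can be sketched end to end on a 1D Poisson problem. Everything below is an illustrative assumption rather than the paper's implementation: a random Fourier cosine basis stands in for the prescribed basis, and a plain least-squares matrix stands in for the neural network that FB-C2CNet trains on coefficients (the Poisson operator is linear, so a linear map suffices for the sketch).

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed grid and a fixed random Fourier cosine basis (an assumed choice;
# the paper also allows finite element bases).
N, m = 80, 60
x = np.linspace(0.0, 1.0, N)
w = rng.uniform(0.0, 20.0, size=m)
b = rng.uniform(0.0, 2 * np.pi, size=m)
Phi = np.cos(np.outer(x, w) + b)              # (N, m) basis matrix, chosen once

def project(vals):
    # Least-squares coefficients of a sampled function in the fixed basis.
    return np.linalg.lstsq(Phi, vals, rcond=None)[0]

# Reference solver: finite differences for -u'' = f with u(0) = u(1) = 0.
h = x[1] - x[0]
A = (np.diag(2.0 * np.ones(N - 2))
     - np.diag(np.ones(N - 3), 1)
     - np.diag(np.ones(N - 3), -1)) / h**2

def solve_poisson(f_vals):
    u = np.zeros(N)
    u[1:-1] = np.linalg.solve(A, f_vals[1:-1])
    return u

def sample_f():
    # Random smooth inputs: combinations of the first three sine modes.
    a = rng.normal(size=3)
    return sum(a[k] * np.sin((k + 1) * np.pi * x) for k in range(3))

# Training pairs as coefficients: project both f and u onto the same basis.
fs = [sample_f() for _ in range(100)]
Cf = np.stack([project(f) for f in fs])
Cu = np.stack([project(solve_poisson(f)) for f in fs])

# Coefficient-to-coefficient map; FB-C2CNet trains a neural network here.
M = np.linalg.lstsq(Cf, Cu, rcond=None)[0]    # (m, m)

# Inference: project a new input, map coefficients, reconstruct the solution.
f_test = sample_f()
u_pred = Phi @ (project(f_test) @ M)
u_true = solve_poisson(f_test)
rel_err = np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)
```

Note how the basis `Phi` is fixed before any training: only the coefficient map `M` is fit, which is the decoupling of basis selection from network training that the abstract emphasizes.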
Related papers
- Outcome-Based Online Reinforcement Learning: Algorithms and Fundamental Limits [58.63897489864948]
Reinforcement learning with outcome-based feedback faces a fundamental challenge: how do we assign credit to the right actions? This paper provides the first comprehensive analysis of this problem in online RL with general function approximation.
arXiv Detail & Related papers (2025-05-26T17:44:08Z)
- Function Forms of Simple ReLU Networks with Random Hidden Weights [1.2289361708127877]
We investigate the function space dynamics of a two-layer ReLU neural network in the infinite-width limit. We highlight the Fisher information matrix's role in steering learning. This work offers a robust foundation for understanding wide neural networks.
arXiv Detail & Related papers (2025-05-23T13:53:02Z)
- Learning Nonlinear Finite Element Solution Operators using Multilayer Perceptrons and Energy Minimization [0.5898893619901381]
We develop and evaluate a method for learning solution operators to nonlinear problems governed by partial differential equations (PDEs). The approach is based on a finite element discretization and aims at representing the solution operator by a multilayer perceptron (MLP). We formulate efficient, parallelizable training algorithms based on assembling the energy locally on each element.
arXiv Detail & Related papers (2024-12-05T20:19:16Z)
- Generalization Bounds and Model Complexity for Kolmogorov-Arnold Networks [1.5850926890180461]
Kolmogorov-Arnold Network (KAN) is a network structure recently proposed by Liu et al. This work provides a rigorous theoretical analysis of KAN by establishing generalization bounds for KAN equipped with activation functions.
arXiv Detail & Related papers (2024-10-10T15:23:21Z)
- Basis-to-Basis Operator Learning Using Function Encoders [16.128154294012543]
We present Basis-to-Basis (B2B) operator learning, a novel approach for learning operators on Hilbert spaces of functions.
We derive operator learning algorithms that are directly analogous to eigen-decomposition and singular value decomposition.
arXiv Detail & Related papers (2024-09-30T19:18:34Z)
- Operator Learning Using Random Features: A Tool for Scientific Computing [3.745868534225104]
Supervised operator learning centers on the use of training data to estimate maps between infinite-dimensional spaces.
This paper introduces the function-valued random features method.
It leads to a supervised operator learning architecture that is practical for nonlinear problems.
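To make the random-features idea concrete, here is a minimal hypothetical sketch, not the authors' code: the target operator is the 1D antiderivative map, the features are fixed random ReLU projections of the discretized input function, and only a ridge-regression readout is trained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target operator for this sketch: the antiderivative map f -> u, u(x) = integral of f on [0, x].
N = 64
x = np.linspace(0.0, 1.0, N)

def sample_f():
    # Random smooth inputs: combinations of the first three sine modes.
    a = rng.normal(size=3)
    return sum(a[k] * np.sin((k + 1) * np.pi * x) for k in range(3))

def antiderivative(f):
    # Cumulative trapezoid rule on the uniform grid.
    steps = 0.5 * (f[1:] + f[:-1]) * (x[1] - x[0])
    return np.concatenate(([0.0], np.cumsum(steps)))

# Fixed random ReLU features of the discretized input; never trained.
m = 300
W = rng.normal(size=(m, N)) / np.sqrt(N)
b = rng.normal(size=m)

def feat(f):
    return np.maximum(0.0, W @ f + b)

# Only the linear readout is fit, via ridge regression.
n = 500
F = [sample_f() for _ in range(n)]
Z = np.stack([feat(f) for f in F])            # (n, m) feature matrix
U = np.stack([antiderivative(f) for f in F])  # (n, N) output samples
lam = 1e-6
C = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ U)

# Aggregate relative error over fresh test inputs.
tests = [sample_f() for _ in range(20)]
T = np.stack([antiderivative(f) for f in tests])
P = np.stack([feat(f) @ C for f in tests])
rel = np.linalg.norm(P - T) / np.linalg.norm(T)
```

Because the features are drawn once and frozen, training reduces to a single linear solve, which is what makes the random-features approach practical at scale.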
arXiv Detail & Related papers (2024-08-12T23:10:39Z)
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
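The claim that CNN kernels yield differential operators under appropriate scaling can be illustrated with the classic second-difference stencil; this is an assumed textbook example, not the paper's construction:

```python
import numpy as np

# A 3-tap convolution kernel scaled by 1/h^2 is the second-difference
# stencil; as h -> 0 it converges to the differential operator -d^2/dx^2.
N = 200
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
u = np.sin(2 * np.pi * x)

kernel = np.array([-1.0, 2.0, -1.0]) / h**2
neg_u_xx = np.convolve(u, kernel, mode="valid")          # interior points only
exact = (2 * np.pi) ** 2 * np.sin(2 * np.pi * x[1:-1])   # -u'' in closed form
rel = np.linalg.norm(neg_u_xx - exact) / np.linalg.norm(exact)
```

The approximation error shrinks at rate O(h^2), so refining the grid while rescaling the same kernel values recovers the differential operator in the limit.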
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
- PICL: Physics Informed Contrastive Learning for Partial Differential Equations [7.136205674624813]
We develop a novel contrastive pretraining framework that improves neural operator generalization across multiple governing equations simultaneously.
A combination of physics-informed system evolution and latent-space model output are anchored to input data and used in our distance function.
We find that physics-informed contrastive pretraining improves accuracy for the Fourier Neural Operator in fixed-future and autoregressive rollout tasks for the 1D and 2D Heat, Burgers', and linear advection equations.
arXiv Detail & Related papers (2024-01-29T17:32:22Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Reinforcement Learning from Partial Observation: Linear Function Approximation with Provable Sample Efficiency [111.83670279016599]
We study reinforcement learning for partially observable Markov decision processes (POMDPs) with infinite observation and state spaces.
We make the first attempt at partial observability and function approximation for a class of POMDPs with a linear structure.
arXiv Detail & Related papers (2022-04-20T21:15:38Z)
- A Functional Perspective on Learning Symmetric Functions with Neural Networks [48.80300074254758]
We study the learning and representation of neural networks defined on measures.
We establish approximation and generalization bounds under different choices of regularization.
The resulting models can be learned efficiently and enjoy generalization guarantees that extend across input sizes.
arXiv Detail & Related papers (2020-08-16T16:34:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.