Barron Space Representations for Elliptic PDEs with Homogeneous Boundary Conditions
- URL: http://arxiv.org/abs/2508.07559v2
- Date: Sun, 19 Oct 2025 23:36:02 GMT
- Title: Barron Space Representations for Elliptic PDEs with Homogeneous Boundary Conditions
- Authors: Ziang Chen, Liqiang Huang
- Abstract summary: We study the approximation complexity of high-dimensional second-order elliptic PDEs with homogeneous boundary conditions on the unit hypercube. Under the assumption that the coefficients belong to suitably defined Barron spaces, we prove that the solution can be efficiently approximated by two-layer neural networks.
- Score: 10.72933755002183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the approximation complexity of high-dimensional second-order elliptic PDEs with homogeneous boundary conditions on the unit hypercube, within the framework of Barron spaces. Under the assumption that the coefficients belong to suitably defined Barron spaces, we prove that the solution can be efficiently approximated by two-layer neural networks, circumventing the curse of dimensionality. Our results demonstrate the expressive power of shallow networks in capturing high-dimensional PDE solutions under appropriate structural assumptions.
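To make the function class concrete: a minimal illustrative sketch (not code from the paper) of the two-layer "shallow" network ansatz $f(x) = \sum_k a_k \cos(w_k \cdot x + b_k)$ that spectral-Barron-type approximation results target. The width, dimension, and random weights below are hypothetical choices for demonstration only.

```python
import numpy as np

# Hypothetical example: evaluate a two-layer network with cosine
# activation, f(x) = sum_k a_k * cos(w_k . x + b_k), on points in
# the unit hypercube. All sizes and weights are illustrative.
rng = np.random.default_rng(0)

d = 10        # input dimension (the "high-dimensional" variable)
width = 64    # number of neurons k = 1..width

W = rng.normal(size=(width, d))     # inner weights w_k
b = rng.normal(size=width)          # biases b_k
a = rng.normal(size=width) / width  # outer weights a_k (scaled by width)

def two_layer_net(x):
    """Evaluate f at an (n, d) array of points; returns an (n,) array."""
    return np.cos(x @ W.T + b) @ a

x = rng.uniform(size=(5, d))   # 5 sample points in [0, 1]^d
y = two_layer_net(x)
print(y.shape)                 # one scalar output per input point
```

The Barron-space results bound how large `width` must be for such a network to approximate the PDE solution to a given accuracy, with rates that do not degrade exponentially in `d`.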
Related papers
- Regularity of Second-Order Elliptic PDEs in Spectral Barron Spaces [8.73881274952982]
We establish a regularity theorem for second-order elliptic PDEs on $\mathbb{R}^d$ in spectral Barron spaces. We identify a class of PDEs whose solutions can be approximated by two-layer neural networks with cosine activation functions.
arXiv Detail & Related papers (2026-02-22T23:29:57Z) - High precision PINNs in unbounded domains: application to singularity formulation in PDEs [83.50980325611066]
We study the choices of neural network ansatz, sampling strategy, and optimization algorithm. For the 1D Burgers equation, our framework can lead to a solution with very high precision. For the 2D Boussinesq equation, we obtain a solution whose loss is $4$ digits smaller than that obtained in \cite{wang2023asymptotic} with fewer training steps.
arXiv Detail & Related papers (2025-06-24T02:01:44Z) - Mechanistic PDE Networks for Discovery of Governing Equations [52.492158106791365]
We present Mechanistic PDE Networks, a model for the discovery of partial differential equations from data. The represented PDEs are then solved and decoded for specific tasks. We develop a native, GPU-capable, parallel, sparse, and differentiable multigrid solver specialized for linear partial differential equations.
arXiv Detail & Related papers (2025-02-25T17:21:44Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Sample Complexity of Neural Policy Mirror Descent for Policy Optimization on Low-Dimensional Manifolds [75.51968172401394]
We study the sample complexity of the neural policy mirror descent (NPMD) algorithm with deep convolutional neural networks (CNNs).
In each iteration of NPMD, both the value function and the policy can be well approximated by CNNs.
We show that NPMD can leverage the low-dimensional structure of state space to escape from the curse of dimensionality.
arXiv Detail & Related papers (2023-09-25T07:31:22Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - A Unified Hard-Constraint Framework for Solving Geometrically Complex PDEs [25.52271761404213]
We present a unified framework for solving geometrically complex PDEs with neural networks.
We first introduce the "extra fields" from the mixed finite element method to reformulate the PDEs.
We derive the general solutions of the BCs analytically, which are employed to construct an ansatz that automatically satisfies the BCs.
arXiv Detail & Related papers (2022-10-06T06:19:33Z) - On the Representation of Solutions to Elliptic PDEs in Barron Spaces [9.875204185976777]
This paper derives complexity estimates of the solutions of $d$-dimensional second-order elliptic PDEs in the Barron space.
As a direct consequence of the complexity estimates, the solution of the PDE can be approximated on any bounded domain by a two-layer neural network.
arXiv Detail & Related papers (2021-06-14T16:05:07Z) - Parametric Complexity Bounds for Approximating PDEs with Neural Networks [41.46028070204925]
We prove that when a PDE's coefficients are representable by small neural networks, the parameters required to approximate its solution scale polynomially with the input dimension $d$ and are proportional to the parameter counts of the coefficient networks.
Our proof is based on constructing a neural network that simulates gradient descent in an appropriate function space, an iteration that converges to the solution of the PDE.
arXiv Detail & Related papers (2021-03-03T02:42:57Z) - A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Equations [11.974322921837384]
We derive the generalization error bounds of two-layer neural networks in the framework of the Deep Ritz Method (DRM).
We prove that the convergence rates of generalization errors are independent of the dimension $d$.
We develop a new solution theory for the PDEs on the spectral Barron space.
arXiv Detail & Related papers (2021-01-05T18:50:59Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.