Trace Regularity PINNs: Enforcing $\mathrm{H}^{\frac{1}{2}}(\partial\Omega)$ for Boundary Data
- URL: http://arxiv.org/abs/2510.16817v1
- Date: Sun, 19 Oct 2025 13:08:16 GMT
- Title: Trace Regularity PINNs: Enforcing $\mathrm{H}^{\frac{1}{2}}(\partial\Omega)$ for Boundary Data
- Authors: Doyoon Kim, Junbin Song
- Abstract summary: We propose an enhanced physics-informed neural network (PINN), the Trace Regularity Physics-Informed Neural Network (TRPINN), which enforces the boundary loss in the Sobolev-Slobodeckij norm $H^{1/2}(\partial\Omega)$. By incorporating the exact $H^{1/2}(\partial\Omega)$ norm, we show that the approximation converges to the true solution in the $H^{1}(\Omega)$ sense.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an enhanced physics-informed neural network (PINN), the Trace Regularity Physics-Informed Neural Network (TRPINN), which enforces the boundary loss in the Sobolev-Slobodeckij norm $H^{1/2}(\partial \Omega)$, the correct trace space associated with $H^1(\Omega)$. We reduce computational cost by computing only the theoretically essential portion of the semi-norm and enhance convergence stability by avoiding denominator evaluations in the discretization. By incorporating the exact $H^{1/2}(\partial \Omega)$ norm, we show that the approximation converges to the true solution in the $H^{1}(\Omega)$ sense, and, through Neural Tangent Kernel (NTK) analysis, we demonstrate that TRPINN can converge faster than standard PINNs. Numerical experiments on the Laplace equation with highly oscillatory Dirichlet boundary conditions exhibit cases where TRPINN succeeds even when standard PINNs fail, and show performance improvements of one to three decimal digits.
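For reference, the $H^{1/2}(\partial\Omega)$ norm the abstract invokes combines an $L^2$ term with the Sobolev-Slobodeckij (Gagliardo) semi-norm; the textbook form for $\Omega \subset \mathbb{R}^d$ is sketched below. This is the standard definition, not necessarily the exact quantity the paper discretizes (the abstract notes that only the theoretically essential portion of the semi-norm is computed).

```latex
% Textbook Sobolev--Slobodeckij norm on \partial\Omega for \Omega \subset \mathbb{R}^d:
% the (d-1)-dimensional boundary with smoothness s = 1/2 gives the exponent (d-1) + 2s = d.
\[
  |u|_{H^{1/2}(\partial\Omega)}^{2}
    = \int_{\partial\Omega}\!\int_{\partial\Omega}
      \frac{|u(x) - u(y)|^{2}}{|x - y|^{d}} \, ds_x \, ds_y ,
  \qquad
  \|u\|_{H^{1/2}(\partial\Omega)}^{2}
    = \|u\|_{L^{2}(\partial\Omega)}^{2} + |u|_{H^{1/2}(\partial\Omega)}^{2} .
\]
```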
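A minimal sketch of how such a boundary loss could enter PINN training follows, assuming PyTorch; the estimator, the `eps` guard, and all names are illustrative rather than the paper's method. Note the abstract says TRPINN specifically avoids denominator evaluations in its discretization, whereas this naive Monte Carlo estimator keeps them, so the sketch only illustrates the idea of penalizing the semi-norm. By the trace theorem, controlling this boundary norm is what lets the abstract claim convergence in the $H^1(\Omega)$ sense.

```python
import torch

def h_half_seminorm_sq(r_bdry: torch.Tensor, x_bdry: torch.Tensor,
                       d: int, eps: float = 1e-8) -> torch.Tensor:
    """Naive Monte Carlo estimate of the squared H^{1/2}(dOmega) semi-norm
    of a boundary residual r = u_theta - g sampled at points x_bdry in R^d.

    Approximates the double integral of |r(x)-r(y)|^2 / |x-y|^d over
    dOmega x dOmega. The paper's discretization reportedly avoids these
    denominators; this sketch keeps them and regularizes with eps.
    """
    diff_sq = (r_bdry[:, None] - r_bdry[None, :]) ** 2   # |r(x_i) - r(x_j)|^2
    dist = torch.cdist(x_bdry, x_bdry)                   # pairwise |x_i - x_j|
    kernel = diff_sq / (dist ** d + eps)                 # eps guards the i == j diagonal
    off_diag = ~torch.eye(len(x_bdry), dtype=torch.bool, device=x_bdry.device)
    return kernel[off_diag].mean()                       # up to the |dOmega|^2 measure factor

# Hypothetical use inside a PINN training step (u_theta, g, lam are assumptions):
#   r_bdry = u_theta(x_bdry).squeeze() - g(x_bdry)       # Dirichlet boundary residual
#   loss = pde_residual_loss \
#        + lam * (r_bdry.pow(2).mean()                   # L^2(dOmega) part
#                 + h_half_seminorm_sq(r_bdry, x_bdry, d=2))
```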
Related papers
- Constructive Lyapunov Functions via Topology-Preserving Neural Networks
ONN achieves order-optimal performance on convergence rate, edge efficiency, and computational complexity. Empirical validation on 3M-node semantic networks demonstrates a 99.75% improvement over baseline methods. ORTSF integration into transformers achieves a 14.7% perplexity reduction and 2.3x faster convergence.
arXiv Detail & Related papers (2025-10-10T17:46:52Z) - Achievable rates in non-asymptotic bosonic quantum communication
We find easily computable lower bounds on the non-asymptotic capacities of Gaussian channels. We design the first algorithm capable of computing the trace distance between two Gaussian states up to a fixed precision.
arXiv Detail & Related papers (2025-02-08T11:12:26Z) - Convergence Rate Analysis of LION
We show that LION converges at a rate of $\mathcal{O}(\sqrt{d}\,K^{-1/4})$, measured by the gradient norm, to a Karush-Kuhn-Tucker (KKT) point.
We show that LION can achieve lower loss and higher performance compared to standard SGD.
arXiv Detail & Related papers (2024-11-12T11:30:53Z) - Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Generalization and Stability of Interpolating Neural Networks with Minimal Width
We investigate the generalization and optimization of shallow neural networks trained by gradient descent in the interpolating regime.
We prove that the training loss converges to zero with $m=\Omega(\log^4(n))$ neurons and $T\approx n$ gradient descent iterations.
With $m=\Omega(\log^4(n))$ neurons and $T\approx n$, we bound the test loss by $\tilde{O}(1/n)$.
arXiv Detail & Related papers (2023-02-18T05:06:15Z) - Bounding the Width of Neural Networks via Coupled Initialization -- A Worst Case Analysis
We show how to significantly reduce the number of neurons required for two-layer ReLU networks.
We also prove new lower bounds that improve upon prior work, and that under certain assumptions, are best possible.
arXiv Detail & Related papers (2022-06-26T06:51:31Z) - A Law of Robustness beyond Isoperimetry
We prove a Lipschitzness lower bound $\Omega(\sqrt{n/p})$ on the robustness of interpolating neural networks on arbitrary distributions.
We then show the potential benefit of overparametrization for smooth data when $n=\mathrm{poly}(d)$.
We disprove the potential existence of an $O(1)$-Lipschitz robust interpolating function when $n=\exp(\omega(d))$.
arXiv Detail & Related papers (2022-02-23T16:10:23Z) - A lower bound on the space overhead of fault-tolerant quantum computation
The threshold theorem is a fundamental result in the theory of fault-tolerant quantum computation.
We prove an exponential upper bound on the maximal length of fault-tolerant quantum computation with amplitude damping noise.
arXiv Detail & Related papers (2022-01-31T22:19:49Z) - Optimal Approximation Rates and Metric Entropy of ReLU$^k$ and Cosine Networks
We show that the largest Banach space of functions which can be efficiently approximated by the corresponding shallow neural networks is the space whose norm is given by the gauge of the closed convex hull of the set $\{\pm\sigma(\omega\cdot x + b)\}$.
We establish the precise asymptotics of the $L^2$-metric entropy of the unit ball of these gauge spaces and, as a consequence, the optimal approximation rates for shallow ReLU$^k$ networks.
arXiv Detail & Related papers (2021-01-29T02:29:48Z) - Quantum Algorithms for Simulating the Lattice Schwinger Model
We give scalable, explicit digital quantum algorithms to simulate the lattice Schwinger model in both NISQ and fault-tolerant settings.
In lattice units, we find a Schwinger model on $N/2$ physical sites with coupling constant $x^{-1/2}$ and electric field cutoff $x^{-1/2}\Lambda$.
We estimate the cost of measuring observables in both the NISQ and fault-tolerant settings by assuming a simple target observable: the mean pair density.
arXiv Detail & Related papers (2020-02-25T19:18:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.