DPG loss functions for learning parameter-to-solution maps by neural networks
- URL: http://arxiv.org/abs/2506.18773v1
- Date: Mon, 23 Jun 2025 15:40:56 GMT
- Title: DPG loss functions for learning parameter-to-solution maps by neural networks
- Authors: Pablo Cortés Castillo, Wolfgang Dahmen, Jay Gopalakrishnan,
- Abstract summary: We develop, analyze, and experimentally explore residual-based loss functions for machine learning of parameter-to-solution maps. Our primary concern is rigorous accuracy certification to enhance the prediction capability of the resulting deep neural network reduced models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop, analyze, and experimentally explore residual-based loss functions for machine learning of parameter-to-solution maps in the context of parameter-dependent families of partial differential equations (PDEs). Our primary concern is rigorous accuracy certification to enhance the prediction capability of the resulting deep neural network reduced models. This is achieved by the use of variationally correct loss functions. Through one specific example of an elliptic PDE, details for establishing the variational correctness of a loss function from an ultraweak Discontinuous Petrov Galerkin (DPG) discretization are worked out. Despite the focus on this example, the proposed concepts apply to a much wider scope of problems, namely problems for which stable DPG formulations are available. The issue of high-contrast diffusion fields and the ensuing difficulties with degrading ellipticity are discussed. Both numerical results and theoretical arguments illustrate that for high-contrast diffusion parameters the proposed DPG loss functions deliver much more robust performance than simpler least-squares losses.
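As a minimal illustration of the "simpler least-squares losses" the abstract contrasts with its DPG construction, the sketch below (all discretization choices are illustrative, not taken from the paper) evaluates a plain least-squares PDE-residual loss for a 1D diffusion problem -a u'' = f with homogeneous Dirichlet data; a candidate solution with a lower loss is closer to satisfying the PDE:

```python
import numpy as np

def residual_loss(u, a, f, h):
    # discrete residual of -a u'' = f on interior points (central differences)
    r = -a * (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2 - f[1:-1]
    return float(np.mean(r**2))

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.ones_like(x)

a = 4.0                                    # diffusion parameter
u_exact = x * (1 - x) / (2 * a)            # solves -a u'' = 1, u(0) = u(1) = 0
u_bad = u_exact + 0.01 * np.sin(np.pi * x) # perturbed candidate

loss_exact = residual_loss(u_exact, a, f, h)
loss_bad = residual_loss(u_bad, a, f, h)
```

In a learned parameter-to-solution map, `u` would be the network's prediction for the parameter `a`, and this loss (or a variationally correct DPG analogue) would be minimized over the network weights.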
Related papers
- PINNverse: Accurate parameter estimation in differential equations from noisy data with constrained physics-informed neural networks [0.0]
Physics-Informed Neural Networks (PINNs) have emerged as effective tools for solving such problems. We introduce PINNverse, a training paradigm that addresses these limitations by reformulating the learning process as a constrained differential optimization problem. We demonstrate robust and accurate parameter estimation from noisy data in four classical ODE and PDE models from physics and biology.
arXiv Detail & Related papers (2025-04-07T16:34:57Z) - Deep Operator Networks for Bayesian Parameter Estimation in PDEs [0.0]
We present a novel framework combining Deep Operator Networks (DeepONets) with Physics-Informed Neural Networks (PINNs) to solve partial differential equations (PDEs). By integrating data-driven learning with physical constraints, our method achieves robust and accurate solutions across diverse scenarios.
arXiv Detail & Related papers (2025-01-18T07:41:05Z) - PIG: Physics-Informed Gaussians as Adaptive Parametric Mesh Representations [5.4087282763977855]
We propose Physics-Informed Gaussians (PIGs), which combine feature embeddings using Gaussian functions with a lightweight neural network. Our approach uses trainable parameters for the mean and variance of each Gaussian, allowing for dynamic adjustment of their positions and shapes during training. Experimental results show the competitive performance of our model across various PDEs, demonstrating its potential as a robust tool for solving complex PDEs.
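A rough sketch of the Gaussian feature embedding this summary describes (with fixed means and variances and a linear readout standing in for the trainable Gaussians and lightweight network of the actual method):

```python
import numpy as np

def gaussian_features(x, mu, sigma):
    # one Gaussian bump per (mu, sigma) pair; shape (len(x), len(mu))
    return np.exp(-((x[:, None] - mu[None, :]) ** 2) / (2 * sigma[None, :] ** 2))

x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x)

mu = np.linspace(0.0, 1.0, 10)   # trainable in the full method; fixed here
sigma = np.full(10, 0.1)

Phi = gaussian_features(x, mu, sigma)
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)  # linear readout
pred = Phi @ w
rms = float(np.sqrt(np.mean((pred - target) ** 2)))
```

Making `mu` and `sigma` trainable is what lets the representation adapt its positions and shapes to the PDE solution during training.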
arXiv Detail & Related papers (2024-12-08T16:58:29Z) - PACMANN: Point Adaptive Collocation Method for Artificial Neural Networks [44.99833362998488]
PINNs minimize a loss function which includes the PDE residual determined for a set of collocation points. Previous work has shown that the number and distribution of these collocation points have a significant influence on the accuracy of the PINN solution. We present the Point Adaptive Collocation Method for Artificial Neural Networks (PACMANN).
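A toy sketch of residual-driven collocation adaptation in the spirit of this summary (not PACMANN's actual algorithm; the sampling scheme and jitter scale are illustrative): points are resampled with probability proportional to the local residual magnitude, concentrating them where the PDE is solved worst.

```python
import numpy as np

def resample_collocation(points, residual_fn, n_new, rng):
    # sample with probability proportional to |residual| (plus a small floor),
    # then jitter so the new set explores nearby locations
    r = np.abs(residual_fn(points)) + 1e-12
    p = r / r.sum()
    idx = rng.choice(len(points), size=n_new, replace=True, p=p)
    return points[idx] + rng.normal(scale=0.01, size=n_new)

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, 1000)

# a stand-in residual, sharply peaked near x = 0.7
residual = lambda x: np.exp(-((x - 0.7) ** 2) / 0.002)

new_pts = resample_collocation(pts, residual, 1000, rng)
```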
arXiv Detail & Related papers (2024-11-29T11:31:11Z) - DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi).
We learn the surrogate model for the residual operator mapping based on existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferred for PDE solving.
arXiv Detail & Related papers (2024-06-14T07:45:07Z) - Lie Point Symmetry and Physics Informed Networks [59.56218517113066]
We propose a loss function that informs the network about Lie point symmetries in the same way that PINN models try to enforce the underlying PDE through a loss function.
Our symmetry loss ensures that the infinitesimal generators of the Lie group conserve the PDE solutions.
Empirical evaluations indicate that the inductive bias introduced by the Lie point symmetries of the PDEs greatly boosts the sample efficiency of PINNs.
arXiv Detail & Related papers (2023-11-07T19:07:16Z) - Model-Based Reparameterization Policy Gradient Methods: Theory and
Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
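Spectral normalization, as mentioned here, divides a weight matrix by its largest singular value so the layer's Lipschitz constant is capped at roughly 1; a minimal numpy sketch using power iteration (the paper's method applies this inside model-based RL, which is not reproduced here):

```python
import numpy as np

def spectral_normalize(W, n_iter=200):
    # power iteration estimates the largest singular value of W;
    # dividing by it caps the layer's Lipschitz constant at ~1
    u = np.ones(W.shape[0]) / np.sqrt(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # Rayleigh-quotient estimate of the top singular value
    return W / sigma

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32)) * 3.0
W_sn = spectral_normalize(W)
```

In practice (e.g. in deep-learning frameworks), a single power-iteration step per training update is usually enough, since the weights change slowly.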
arXiv Detail & Related papers (2023-10-30T18:43:21Z) - Towards Convergence Rates for Parameter Estimation in Gaussian-gated
Mixture of Experts [40.24720443257405]
We provide a convergence analysis for maximum likelihood estimation (MLE) in the Gaussian-gated MoE model.
Our findings reveal that the MLE has distinct behaviors under two complement settings of location parameters of the Gaussian gating functions.
Notably, these behaviors can be characterized by the solvability of two different systems of equations.
arXiv Detail & Related papers (2023-05-12T16:02:19Z) - Convergence analysis of unsupervised Legendre-Galerkin neural networks
for linear second-order elliptic PDEs [0.8594140167290099]
We perform the convergence analysis of unsupervised Legendre--Galerkin neural networks (ULGNet). ULGNet is a deep-learning-based numerical method for solving partial differential equations (PDEs).
arXiv Detail & Related papers (2022-11-16T13:31:03Z) - Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
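One standard way to build hard constraints into an architecture, in the spirit of this summary (a generic construction, not the paper's differentiable PDE-constrained layer), is to multiply the free network output by a function that vanishes on the boundary, so Dirichlet conditions hold exactly for every parameter setting:

```python
import numpy as np

def constrained_u(x, net):
    # x * (1 - x) vanishes at x = 0 and x = 1, so u(0) = u(1) = 0 exactly,
    # regardless of what the unconstrained network outputs
    return x * (1 - x) * net(x)

# any smooth function can stand in for the network here
net = lambda x: np.cos(3 * x) + 2.0
x = np.linspace(0.0, 1.0, 11)
u = constrained_u(x, net)
```

Unlike a boundary penalty term in the loss, this satisfies the constraint by construction; training then only has to fit the PDE in the interior.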
arXiv Detail & Related papers (2022-07-18T15:11:43Z) - LordNet: An Efficient Neural Network for Learning to Solve Parametric Partial Differential Equations without Simulated Data [47.49194807524502]
We propose LordNet, a tunable and efficient neural network for modeling long-range entanglements.
The experiments on solving Poisson's equation and (2D and 3D) Navier-Stokes equation demonstrate that the long-range entanglements can be well modeled by the LordNet.
arXiv Detail & Related papers (2022-06-19T14:41:08Z) - Physics-Informed Neural Operator for Learning Partial Differential
Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z) - Robust discovery of partial differential equations in complex situations [3.7314701799132686]
A robust deep learning-genetic algorithm (R-DLGA) that incorporates the physics-informed neural network (PINN) is proposed in this work.
The stability and accuracy of the proposed R-DLGA are examined in several complex situations as a proof of concept.
Results show that the proposed framework is able to calculate derivatives accurately with the optimization of PINN.
arXiv Detail & Related papers (2021-05-31T02:11:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.