DiffNet: Neural Field Solutions of Parametric Partial Differential
Equations
- URL: http://arxiv.org/abs/2110.01601v1
- Date: Mon, 4 Oct 2021 17:59:18 GMT
- Title: DiffNet: Neural Field Solutions of Parametric Partial Differential
Equations
- Authors: Biswajit Khara, Aditya Balu, Ameya Joshi, Soumik Sarkar, Chinmay
Hegde, Adarsh Krishnamurthy, Baskar Ganapathysubramanian
- Abstract summary: We consider a mesh-based approach for training a neural network to produce field predictions of solutions to PDEs.
We use a weighted Galerkin loss function based on the Finite Element Method (FEM) on a parametric elliptic PDE.
We prove theoretically, and illustrate with experiments, convergence results analogous to mesh convergence analysis deployed in finite element solutions to PDEs.
- Score: 30.80582606420882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a mesh-based approach for training a neural network to produce
field predictions of solutions to parametric partial differential equations
(PDEs). This approach contrasts with current approaches for "neural PDE
solvers" that employ collocation-based methods to make point-wise predictions
of solutions to PDEs. It has the advantage of naturally enforcing
different boundary conditions as well as ease of invoking well-developed PDE
theory -- including analysis of numerical stability and convergence -- to
obtain capacity bounds for our proposed neural networks in discretized domains.
We explore our mesh-based strategy, called DiffNet, using a weighted Galerkin
loss function based on the Finite Element Method (FEM) on a parametric elliptic
PDE. The weighted Galerkin loss (FEM loss) is similar to an energy functional
that produces improved solutions, satisfies a priori mesh convergence,
and can model Dirichlet and Neumann boundary conditions. We prove
theoretically, and illustrate with experiments, convergence results analogous
to mesh convergence analysis deployed in finite element solutions to PDEs.
These results suggest that a mesh-based neural network approach serves as a
promising approach for solving parametric PDEs.
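The following toy sketch (my illustration, not the paper's implementation) shows the flavor of minimizing an FEM energy functional instead of a collocation loss: for the 1D Poisson problem -u'' = f on [0,1] with homogeneous Dirichlet boundary conditions, the discrete energy E(u) = (1/2) u^T K u - F^T u is driven to its minimum by gradient descent. In DiffNet the field u would be the output of a neural network; here the nodal values themselves stand in as the trainable parameters.

```python
import numpy as np

# Toy 1D analogue of an energy-functional (Galerkin-style) loss:
# minimize E(u) = 1/2 u^T K u - F^T u for -u'' = f, u(0) = u(1) = 0,
# with f(x) = pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).

n = 32                          # number of interior mesh nodes
h = 1.0 / (n + 1)               # uniform mesh spacing
x = np.linspace(h, 1 - h, n)    # interior node coordinates

# Linear-element stiffness matrix K and (lumped) load vector F
K = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
F = h * np.pi**2 * np.sin(np.pi * x)

u = np.zeros(n)                 # stand-in for the network's nodal output
lr = 0.4 * h                    # step size scaled with the mesh spacing
for _ in range(5000):
    grad = K @ u - F            # gradient of the energy functional E(u)
    u -= lr * grad              # plain gradient descent on E

exact = np.sin(np.pi * x)
print(f"max nodal error: {np.max(np.abs(u - exact)):.2e}")
```

The minimizer of E coincides with the Galerkin FEM solution, so the nodal error shrinks as the mesh is refined, which is the discrete analogue of the mesh-convergence results the paper proves.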
Related papers
- Solving Roughly Forced Nonlinear PDEs via Misspecified Kernel Methods and Neural Networks [3.1895609521267563]
We consider the use of Gaussian Processes (GPs) or Neural Networks (NNs) to numerically approximate the solutions to nonlinear partial differential equations (PDEs).
We propose a generalization of these methods to handle roughly forced nonlinear PDEs while preserving convergence guarantees with an oversmoothing GP kernel.
This is equivalent to replacing the empirical $L^2$-loss on the PDE constraint by an empirical negative-Sobolev norm.
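As a hedged illustration of that last point (my sketch, not the paper's code), an empirical negative-Sobolev norm of a PDE residual on a periodic 1D grid can be computed via the FFT by weighting each residual mode by (1 + |k|^2)^{-s}, which damps rough, high-frequency components relative to the plain mean-squared ($L^2$) residual loss:

```python
import numpy as np

def neg_sobolev_loss(residual, s=1.0, length=2 * np.pi):
    """Empirical H^{-s} norm squared of a residual on a periodic 1D grid."""
    n = residual.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=length / n)  # wavenumbers
    r_hat = np.fft.fft(residual) / n                 # normalized Fourier coefficients
    weights = (1.0 + k**2) ** (-s)                   # Sobolev mode weights
    return float(np.sum(weights * np.abs(r_hat) ** 2))

# A rough (white-noise) residual is penalized far less than under the
# plain L^2 loss, while a smooth residual is penalized the same way.
rng = np.random.default_rng(0)
r = rng.standard_normal(256)
print(neg_sobolev_loss(r, s=1.0), np.mean(r**2))
```

By Parseval's identity the unweighted sum of squared coefficients equals the mean-squared residual, so the $s = 0$ case recovers the usual $L^2$ loss.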
arXiv Detail & Related papers (2025-01-28T17:58:01Z)
- Base Models for Parabolic Partial Differential Equations [30.565534769404536]
Parabolic partial differential equations (PDEs) appear in many disciplines to model the evolution of various mathematical objects.
It is often necessary to compute the solutions or a function of the solutions to a parametric PDE in multiple scenarios corresponding to different parameters of this PDE.
We propose a framework for finding solutions to parabolic PDEs across different scenarios by meta-learning an underlying base.
arXiv Detail & Related papers (2024-07-17T01:04:28Z)
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Lie Point Symmetry and Physics Informed Networks [59.56218517113066]
We propose a loss function that informs the network about Lie point symmetries in the same way that PINN models try to enforce the underlying PDE through a loss function.
Our symmetry loss ensures that the infinitesimal generators of the Lie group conserve the PDE solutions.
Empirical evaluations indicate that the inductive bias introduced by the Lie point symmetries of the PDEs greatly boosts the sample efficiency of PINNs.
arXiv Detail & Related papers (2023-11-07T19:07:16Z)
- Finite Element Operator Network for Solving Elliptic-type parametric PDEs [9.658853094888125]
Partial differential equations (PDEs) underlie our understanding and prediction of natural phenomena.
We propose a novel approach for solving parametric PDEs using a Finite Element Operator Network (FEONet).
arXiv Detail & Related papers (2023-08-09T03:56:07Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
arXiv Detail & Related papers (2022-07-18T15:11:43Z)
- Hybrid FEM-NN models: Combining artificial neural networks with the finite element method [0.0]
We present a methodology combining neural networks with physical principle constraints in the form of partial differential equations (PDEs).
The approach allows training neural networks while respecting the PDEs as a strong constraint in the optimisation, as opposed to making them part of the loss function.
We demonstrate the method on a complex cardiac cell model problem using deep neural networks.
arXiv Detail & Related papers (2021-01-04T13:36:06Z)
- Model Reduction and Neural Networks for Parametric PDEs [9.405458160620533]
We develop a framework for data-driven approximation of input-output maps between infinite-dimensional spaces.
The proposed approach is motivated by the recent successes of neural networks and deep learning.
For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology.
arXiv Detail & Related papers (2020-05-07T00:09:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.