Learning Domain-Independent Green's Function For Elliptic Partial
Differential Equations
- URL: http://arxiv.org/abs/2401.17172v1
- Date: Tue, 30 Jan 2024 17:00:22 GMT
- Title: Learning Domain-Independent Green's Function For Elliptic Partial
Differential Equations
- Authors: Pawan Negi, Maggie Cheng, Mahesh Krishnamurthy, Wenjun Ying, Shuwang
Li
- Abstract summary: Green's function characterizes a partial differential equation (PDE) and expresses its solution over the entire domain as an integral.
We propose a novel boundary integral network to learn the domain-independent Green's function, referred to as BIN-G.
We demonstrate that our numerical scheme enables fast training and accurate evaluation of the Green's function for PDEs with variable coefficients.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Green's function characterizes a partial differential equation (PDE) and
expresses its solution over the entire domain as an integral. Finding the analytical form of
Green's function is a non-trivial exercise, especially for a PDE defined on a
complex domain or a PDE with variable coefficients. In this paper, we propose a
novel boundary integral network to learn the domain-independent Green's
function, referred to as BIN-G. We evaluate the Green's function in the BIN-G
using a radial basis function (RBF) kernel-based neural network. We train the
BIN-G by minimizing the residual of the PDE and the mean squared errors of the
solutions to the boundary integral equations for prescribed test functions. By
leveraging the symmetry of the Green's function and controlling refinements of
the RBF kernel near the singularity of the Green's function, we demonstrate that
our numerical scheme enables fast training and accurate evaluation of the
Green's function for PDEs with variable coefficients. The learned Green's
function is independent of the domain geometries, forcing terms, and boundary
conditions in the boundary integral formulation. Numerical experiments verify
the desired properties of the method and the expected accuracy for the
two-dimensional Poisson and Helmholtz equations with variable coefficients.
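As a concrete illustration of the integral representation described in the abstract: for the 2D Laplacian the free-space Green's function is known in closed form, and the volume potential built from it reproduces the PDE solution. The sketch below uses the analytic free-space kernel (not the learned BIN-G kernel) and checks the potential of a unit disk of constant density against the equivalent point-mass value.

```python
import numpy as np

# Free-space Green's function of the 2D Laplacian: Delta_x G(x, y) = delta(x - y),
# with G(x, y) = ln|x - y| / (2*pi).
def greens_2d(x, y):
    r = np.linalg.norm(x - y, axis=-1)
    return np.log(r) / (2.0 * np.pi)

# The volume potential u(x) = integral of G(x, y) f(y) dy solves Delta u = f.
# Take f = indicator of the unit disk (total mass pi). Outside the disk the
# potential equals that of a point mass at the origin: u(x) = 0.5 * ln|x|.
h = 0.02
g = np.arange(-1 + h / 2, 1, h)                 # cell-centred grid on [-1, 1]^2
Y = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
inside = np.linalg.norm(Y, axis=1) <= 1.0        # support of f
x = np.array([3.0, 0.0])                         # evaluation point outside the support
u = np.sum(greens_2d(x, Y[inside])) * h * h      # midpoint-rule quadrature
# u is close to the analytic value 0.5 * ln(3)
```

A learned Green's function would slot into the same quadrature in place of `greens_2d`, which is why the kernel itself is independent of the forcing term and domain geometry.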
Related papers
- Graph Fourier Neural Kernels (G-FuNK): Learning Solutions of Nonlinear Diffusive Parametric PDEs on Multiple Domains [2.8780581594260926]
We introduce a novel family of neural operators based on our Graph Fourier Neural Kernels.
G-FuNK combines components that are parameter- and domain-adapted with others that are not.
Experiments show G-FuNK's capability to accurately approximate heat, reaction diffusion, and cardiac electrophysiology equations.
arXiv Detail & Related papers (2024-10-06T23:55:34Z)
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- A Hybrid Kernel-Free Boundary Integral Method with Operator Learning for Solving Parametric Partial Differential Equations In Complex Domains [0.0]
The Kernel-Free Boundary Integral (KFBI) method presents an iterative solution to boundary integral equations arising from elliptic partial differential equations (PDEs).
We propose a hybrid KFBI method, integrating the foundational principles of the KFBI method with the capabilities of deep learning.
arXiv Detail & Related papers (2024-04-23T17:25:35Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Data-driven discovery of Green's functions [0.0]
This thesis introduces theoretical results and deep learning algorithms to learn Green's functions associated with linear partial differential equations.
The construction connects the fields of PDE learning and numerical linear algebra.
Rational neural networks (NNs) are introduced and consist of neural networks with trainable rational activation functions.
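A trainable rational activation of the kind described above can be sketched as a ratio of two polynomials with learnable coefficients; rational NNs commonly use a type (3, 2) ratio. The coefficients below are illustrative, not the thesis's actual initialization, and training would update `p` and `q` by gradient descent.

```python
import numpy as np

# Rational activation r(x) = P(x) / Q(x) with trainable coefficients.
# Type (3, 2): cubic numerator over quadratic denominator.
class RationalActivation:
    def __init__(self):
        self.p = np.array([0.0, 1.0, 0.0, 0.1])  # P coefficients, low to high degree
        self.q = np.array([1.0, 0.0, 0.1])       # Q coefficients, low to high degree

    def __call__(self, x):
        # np.polyval expects highest degree first, hence the reversal
        num = np.polyval(self.p[::-1], x)
        den = np.polyval(self.q[::-1], x)
        return num / den

act = RationalActivation()
y0, y1 = act(0.0), act(1.0)  # with these coefficients: r(0) = 0, r(1) = 1
```

Because both `p` and `q` enter the forward pass differentiably, the activation's shape can adapt during training, unlike a fixed ReLU or tanh.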
arXiv Detail & Related papers (2022-10-28T09:41:50Z)
- Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
arXiv Detail & Related papers (2022-07-18T15:11:43Z)
- BI-GreenNet: Learning Green's functions by boundary integral network [14.008606361378149]
Green's function plays a significant role in both theoretical analysis and numerical computing of partial differential equations.
We develop a new method for computing Green's function with high accuracy.
arXiv Detail & Related papers (2022-04-28T01:42:35Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
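One way to see the claim that message-passing solvers representationally contain classical methods: a message-passing update with fixed weights on a 1D periodic chain reproduces the standard second-order finite-difference Laplacian stencil. This is a sketch of the correspondence, not the paper's solver, which replaces the fixed weights with learned networks.

```python
import numpy as np

# Each node aggregates difference messages (u_j - u_i) / h^2 from its two
# neighbours; the fixed-weight sum equals the finite-difference Laplacian
# (u_{i-1} - 2 u_i + u_{i+1}) / h^2.
def mp_laplacian(u, h):
    left = (np.roll(u, 1) - u) / h**2    # message from the left neighbour
    right = (np.roll(u, -1) - u) / h**2  # message from the right neighbour
    return left + right

h = 2 * np.pi / 256
xs = np.arange(256) * h
u = np.sin(xs)                           # periodic test function on [0, 2*pi)
err = np.max(np.abs(mp_laplacian(u, h) + np.sin(xs)))  # d^2/dx^2 sin = -sin
# err is at the second-order truncation-error level, roughly h^2 / 12
```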
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Exact formulas of the end-to-end Green's functions in non-Hermitian systems [0.0]
Green's function in non-Hermitian systems can exhibit directional amplification.
We derive exact formulas for the end-to-end Green's functions of single-band systems.
arXiv Detail & Related papers (2021-09-07T12:36:42Z)
- Variational Transport: A Convergent Particle-Based Algorithm for Distributional Optimization [106.70006655990176]
A distributional optimization problem arises widely in machine learning and statistics.
We propose a novel particle-based algorithm, dubbed as variational transport, which approximately performs Wasserstein gradient descent.
We prove that when the objective function satisfies a functional version of the Polyak-Lojasiewicz (PL) (Polyak, 1963) and smoothness conditions, variational transport converges linearly.
arXiv Detail & Related papers (2020-12-21T18:33:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.