Numerical Approximation of Partial Differential Equations by a Variable
Projection Method with Artificial Neural Networks
- URL: http://arxiv.org/abs/2201.09989v1
- Date: Mon, 24 Jan 2022 22:31:38 GMT
- Title: Numerical Approximation of Partial Differential Equations by a Variable
Projection Method with Artificial Neural Networks
- Authors: Suchuan Dong, Jielin Yang
- Abstract summary: We present a method for solving linear and nonlinear PDEs based on the variable projection (VarPro) framework and artificial neural networks (ANNs).
For linear PDEs, enforcing the boundary/initial value problem on the collocation points leads to a separable nonlinear least squares problem about the network coefficients.
We reformulate this problem by the VarPro approach to eliminate the linear output-layer coefficients, leading to a reduced problem about the hidden-layer coefficients only.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method for solving linear and nonlinear PDEs based on the
variable projection (VarPro) framework and artificial neural networks (ANNs).
For linear PDEs, enforcing the boundary/initial value problem on the
collocation points leads to a separable nonlinear least squares problem about
the network coefficients. We reformulate this problem by the VarPro approach to
eliminate the linear output-layer coefficients, leading to a reduced problem
about the hidden-layer coefficients only. The reduced problem is solved first
by the nonlinear least squares method to determine the hidden-layer
coefficients, and then the output-layer coefficients are computed by the linear
least squares method. For nonlinear PDEs, enforcing the boundary/initial value
problem on the collocation points leads to a nonlinear least squares problem
that is not separable, which precludes the VarPro strategy for such problems.
To enable the VarPro approach for nonlinear PDEs, we first linearize the
problem with a Newton iteration, using a particular form of linearization. The
linearized system is solved by the VarPro framework together with ANNs. Upon
convergence of the Newton iteration, the network coefficients provide the
representation of the solution field to the original nonlinear problem. We
present ample numerical examples with linear and nonlinear PDEs to demonstrate
the performance of the method herein. For smooth field solutions, the errors of
the current method decrease exponentially as the number of collocation points
or the number of output-layer coefficients increases. We compare the current
method with the extreme learning machine (ELM) method from a previous work. Under identical conditions and
network configurations, the current method exhibits an accuracy significantly
superior to the ELM method.
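To make the separable structure concrete, below is a minimal sketch of the VarPro idea for a toy 1D Poisson problem (u'' = f on [0,1] with homogeneous Dirichlet boundary conditions), not the authors' implementation: the hidden-layer parameters of a single-hidden-layer tanh network enter the collocation system nonlinearly, while the output-layer coefficients enter linearly and are eliminated by linear least squares inside the reduced residual. The problem, network size, and use of SciPy's general-purpose solver are all choices made here for illustration.

```python
# Minimal VarPro sketch for u''(x) = f(x) on [0,1], u(0) = u(1) = 0, with a
# single-hidden-layer tanh network u(x) = sum_j beta_j * tanh(w_j * x + c_j).
# Illustrative only: problem, sizes, and solver are assumptions, not the paper's code.
import numpy as np
from scipy.optimize import least_squares

M = 10                                    # hidden units (output-layer size)
x_in = np.linspace(0.0, 1.0, 40)[1:-1]    # interior collocation points
x_bc = np.array([0.0, 1.0])               # boundary collocation points
f = -np.pi**2 * np.sin(np.pi * x_in)      # source term; exact solution is sin(pi x)
rhs = np.concatenate([f, np.zeros(2)])    # right-hand side of the collocation system

def design_matrix(theta):
    """Column j holds unit j's contribution to each collocation equation."""
    w, c = theta[:M], theta[M:]
    t = np.tanh(np.outer(x_in, w) + c)
    rows_pde = -2.0 * w**2 * t * (1.0 - t**2)   # d2/dx2 of tanh(w x + c)
    rows_bc = np.tanh(np.outer(x_bc, w) + c)    # boundary values
    return np.vstack([rows_pde, rows_bc])

def reduced_residual(theta):
    """VarPro reduction: eliminate the linear coefficients beta exactly."""
    A = design_matrix(theta)
    beta = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return A @ beta - rhs

theta0 = np.random.default_rng(0).uniform(-3.0, 3.0, 2 * M)
sol = least_squares(reduced_residual, theta0)   # nonlinear LSQ over hidden layer only

A = design_matrix(sol.x)                        # then recover the output layer
beta = np.linalg.lstsq(A, rhs, rcond=None)[0]
w, c = sol.x[:M], sol.x[M:]
xg = np.linspace(0.0, 1.0, 201)
u = np.tanh(np.outer(xg, w) + c) @ beta
print(f"max error vs sin(pi x): {np.abs(u - np.sin(np.pi * xg)).max():.2e}")
```

For the nonlinear PDEs described above, a solve of this kind would sit inside an outer Newton iteration that re-assembles the linearized collocation system at each step; a production implementation would also typically use the Golub-Pereyra Jacobian formula for the reduced problem rather than the finite differences implied here.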
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - A Structure-Guided Gauss-Newton Method for Shallow ReLU Neural Network [18.06366638807982]
We propose a structure-guided Gauss-Newton (SgGN) method for solving least squares problems using a shallow ReLU neural network.
The method effectively takes advantage of both the least squares structure and the neural network structure of the objective function.
arXiv Detail & Related papers (2024-04-07T20:24:44Z) - An Extreme Learning Machine-Based Method for Computational PDEs in
Higher Dimensions [1.2981626828414923]
We present two effective methods for solving high-dimensional partial differential equations (PDEs) based on randomized neural networks.
We present ample numerical simulations for a number of high-dimensional linear/nonlinear stationary/dynamic PDEs to demonstrate their performance.
arXiv Detail & Related papers (2023-09-13T15:59:02Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - A Stable and Scalable Method for Solving Initial Value PDEs with Neural
Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - An application of the splitting-up method for the computation of a
neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z) - Least-Squares ReLU Neural Network (LSNN) Method For Linear
Advection-Reaction Equation [3.6525914200522656]
This paper studies least-squares ReLU neural network method for solving the linear advection-reaction problem with discontinuous solution.
The method is capable of approximating the discontinuous interface of the underlying problem automatically through the free hyper-planes of the ReLU neural network.
arXiv Detail & Related papers (2021-05-25T03:13:15Z) - Least-Squares ReLU Neural Network (LSNN) Method For Scalar Nonlinear
Hyperbolic Conservation Law [3.6525914200522656]
We introduce the least-squares ReLU neural network (LSNN) method for solving the scalar nonlinear hyperbolic conservation law with discontinuous solutions.
We show that the method outperforms mesh-based numerical methods in terms of the number of degrees of freedom.
arXiv Detail & Related papers (2021-05-25T02:59:48Z) - Solving and Learning Nonlinear PDEs with Gaussian Processes [11.09729362243947]
We introduce a simple, rigorous, and unified framework for solving nonlinear partial differential equations.
The proposed approach provides a natural generalization of collocation kernel methods to nonlinear PDEs and inverse problems (IPs).
For IPs, while the traditional approach has been to iterate between the identifications of parameters in the PDE and the numerical approximation of its solution, our algorithm tackles both simultaneously.
arXiv Detail & Related papers (2021-03-24T03:16:08Z) - Hybrid Trilinear and Bilinear Programming for Aligning Partially
Overlapping Point Sets [85.71360365315128]
In many applications, we need algorithms that can align partially overlapping point sets and are invariant to the corresponding transformations, as in the robust point matching (RPM) algorithm.
We first show that the objective is a cubic polynomial. We then utilize the convex envelopes of trilinear and bilinear monomials to derive its lower bound.
We next develop a branch-and-bound (BnB) algorithm which only branches over the transformation variables and runs efficiently.
arXiv Detail & Related papers (2021-01-19T04:24:23Z)