Least-Squares ReLU Neural Network (LSNN) Method For Scalar Nonlinear
Hyperbolic Conservation Law
- URL: http://arxiv.org/abs/2105.11627v1
- Date: Tue, 25 May 2021 02:59:48 GMT
- Title: Least-Squares ReLU Neural Network (LSNN) Method For Scalar Nonlinear
Hyperbolic Conservation Law
- Authors: Zhiqiang Cai, Jingshuang Chen, Min Liu
- Abstract summary: We introduced the least-squares ReLU neural network (LSNN) method for the linear advection-reaction problem with discontinuous solution and showed that it outperforms mesh-based numerical methods in the number of degrees of freedom; this paper extends the study to scalar nonlinear hyperbolic conservation laws.
- Score: 3.6525914200522656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduced the least-squares ReLU neural network (LSNN) method for solving
the linear advection-reaction problem with discontinuous solution and showed
that the method outperforms mesh-based numerical methods in terms of the number
of degrees of freedom. This paper studies the LSNN method for scalar nonlinear
hyperbolic conservation law. The method is a discretization of an equivalent
least-squares (LS) formulation in the set of neural network functions with the
ReLU activation function. The LS functional is evaluated using numerical
integration and a conservative finite volume scheme. Numerical results for
some test problems show that the method is capable of approximating the
discontinuous interface of the underlying problem automatically through the
free breaking lines of the ReLU neural network. Moreover, the method does not
exhibit the Gibbs phenomenon commonly observed along the discontinuous interface.
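As a concrete illustration of the recipe described above, the sketch below trains a shallow ReLU network by minimizing a space-time least-squares functional for the inviscid Burgers equation, evaluating the functional through a conservative Lax-Friedrichs finite volume residual. This is a hypothetical minimal sketch, not the authors' implementation: the flux, initial data, grid sizes, network width, and penalty weight are all assumptions.

```python
# Minimal LSNN-style sketch for the inviscid Burgers equation
# u_t + (u^2/2)_x = 0 on (0,1) x (0,T]. NOT the authors' code: the flux,
# Riemann initial data, grid sizes, network width, and penalty weight are
# illustrative choices.
import torch

torch.manual_seed(0)

def flux(u):                                   # Burgers flux f(u) = u^2 / 2
    return 0.5 * u ** 2

# Shallow ReLU network u_theta(x, t); its free breaking lines can align
# with the shock instead of a fixed mesh.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)

nx, nt, T = 64, 32, 0.5
dx, dt = 1.0 / nx, T / nt
x = (torch.arange(nx) + 0.5) * dx              # cell centers
t = torch.arange(nt + 1) * dt                  # time levels
X, Tm = torch.meshgrid(x, t, indexing="ij")
u0 = (x < 0.5).float()                         # step initial data -> shock

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    u = net(torch.stack([X, Tm], dim=-1).reshape(-1, 2)).reshape(nx, nt + 1)
    # Conservative Lax-Friedrichs flux at the nx-1 interior cell interfaces.
    ul, ur = u[:-1, :], u[1:, :]
    F = 0.5 * (flux(ul) + flux(ur)) - 0.5 * (dx / dt) * (ur - ul)
    # Finite-volume residual of the conservation law on interior cells:
    # (u^{n+1}_j - u^n_j)/dt + (F_{j+1/2} - F_{j-1/2})/dx.
    resid = (u[1:-1, 1:] - u[1:-1, :-1]) / dt + (F[1:, :-1] - F[:-1, :-1]) / dx
    # Least-squares functional: PDE residual plus a penalized initial condition.
    loss = (resid ** 2).mean() + 10.0 * ((u[:, 0] - u0) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```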
Related papers
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by a magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
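For readers unfamiliar with the algorithm under analysis, here is a toy simultaneous gradient descent-ascent loop with two two-layer ReLU networks; the minimax objective, widths, and step sizes are illustrative, not taken from the paper.

```python
# Toy gradient descent-ascent (GDA) on a functional minimax objective:
#   min_f max_g  E[ g(x) * (f(x) - y) ] - 0.5 * E[ g(x)^2 ],
# whose inner maximum equals 0.5 * E[(f(x) - y)^2], so the descent player f
# learns the regression target. All choices here are illustrative.
import torch

torch.manual_seed(0)

def two_layer(width=128):
    return torch.nn.Sequential(
        torch.nn.Linear(1, width), torch.nn.ReLU(), torch.nn.Linear(width, 1)
    )

f, g = two_layer(), two_layer()
opt_f = torch.optim.SGD(f.parameters(), lr=1e-2)   # descent player
opt_g = torch.optim.SGD(g.parameters(), lr=1e-2)   # ascent player

for step in range(2000):
    x = torch.rand(256, 1)
    y = torch.sin(2 * torch.pi * x)                # target function
    obj = (g(x) * (f(x) - y)).mean() - 0.5 * (g(x) ** 2).mean()
    opt_f.zero_grad(); opt_g.zero_grad()
    obj.backward()
    opt_f.step()                                   # f takes a descent step
    for p in g.parameters():                       # g ascends: flip its gradients
        p.grad.neg_()
    opt_g.step()
```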
arXiv Detail & Related papers (2024-04-18T16:46:08Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Physics-informed Neural Networks approach to solve the Blasius function [0.0]
This paper presents a physics-informed neural network (PINN) approach to solve the Blasius function.
It is seen that this method produces results that are on par with the numerical and conventional methods.
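A minimal, hypothetical PINN sketch for this problem, assuming the standard Blasius boundary-value problem and a truncated far-field condition (the network size, collocation scheme, and eta_max are illustrative, not the paper's setup):

```python
# PINN sketch for the Blasius boundary-layer equation
# f''' + 0.5 * f * f'' = 0 on [0, eta_max], with f(0) = 0, f'(0) = 0 and the
# far-field condition f'(infinity) = 1 imposed at a truncated eta_max.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
eta_max = 10.0

def f_derivs(eta):
    """Return f, f', f'', f''' at eta via automatic differentiation."""
    eta = eta.requires_grad_(True)
    f = net(eta)
    ones = torch.ones_like(f)
    f1 = torch.autograd.grad(f, eta, ones, create_graph=True)[0]
    f2 = torch.autograd.grad(f1, eta, ones, create_graph=True)[0]
    f3 = torch.autograd.grad(f2, eta, ones, create_graph=True)[0]
    return f, f1, f2, f3

for step in range(5000):
    eta = eta_max * torch.rand(256, 1)                 # collocation points
    f, _, f2, f3 = f_derivs(eta)
    pde = (f3 + 0.5 * f * f2).pow(2).mean()            # Blasius residual
    f0, f0p, _, _ = f_derivs(torch.zeros(1, 1))        # wall conditions
    _, f1_far, _, _ = f_derivs(torch.full((1, 1), eta_max))
    loss = pde + f0.pow(2).mean() + f0p.pow(2).mean() + (f1_far - 1).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```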
arXiv Detail & Related papers (2022-12-31T03:14:42Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
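The basic building block can be sketched as a single learned message-passing layer over a grid graph; the plain-PyTorch layer below shows the pattern only, with illustrative widths and a 1D periodic grid rather than the paper's architecture.

```python
# Minimal learned message-passing layer of the kind such solvers stack,
# in plain PyTorch (no torch_geometric). All sizes are illustrative.
import torch

class MessagePassingLayer(torch.nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        # Message network: maps (sender state, receiver state) to a message.
        self.msg = torch.nn.Sequential(
            torch.nn.Linear(2 * dim, dim), torch.nn.ReLU(), torch.nn.Linear(dim, dim)
        )
        # Update network: maps (node state, aggregated messages) to the update.
        self.upd = torch.nn.Sequential(
            torch.nn.Linear(2 * dim, dim), torch.nn.ReLU(), torch.nn.Linear(dim, dim)
        )

    def forward(self, h, edges):
        src, dst = edges                                  # index tensors, (num_edges,)
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1)) # one message per edge
        agg = torch.zeros_like(h).index_add_(0, dst, m)   # sum messages per receiver
        return h + self.upd(torch.cat([h, agg], dim=-1))  # residual node update

# Usage: 1D periodic grid where each cell talks to its left/right neighbor.
n, dim = 128, 64
idx = torch.arange(n)
edges = (torch.cat([idx, idx]), torch.cat([(idx + 1) % n, (idx - 1) % n]))
h = torch.randn(n, dim)                                   # encoded cell states
h = MessagePassingLayer(dim)(h, edges)
```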
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - An application of the splitting-up method for the computation of a
neural network representation for the solution for the filtering equations [68.8204255655161]
Filtering equations play a central role in many real-life applications, including numerical weather prediction, finance and engineering.
One of the classical approaches to approximate the solution of the filtering equations is to use a PDE-inspired method, called the splitting-up method.
We combine this method with a neural network representation to produce an approximation of the unnormalised conditional distribution of the signal process.
arXiv Detail & Related papers (2022-01-10T11:01:36Z) - Least-Squares Neural Network (LSNN) Method For Scalar Nonlinear
Hyperbolic Conservation Laws: Discrete Divergence Operator [4.3226069572849966]
A least-squares neural network (LSNN) method was introduced for solving scalar linear hyperbolic conservation laws.
This paper rewrites HCLs in their divergence form of space and time and introduces a new discrete divergence operator.
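A hypothetical smoke test of the space-time divergence viewpoint, using a plain forward-difference divergence as a stand-in for (not a reproduction of) the paper's operator:

```python
# u_t + f(u)_x = 0 rewritten as the space-time divergence
# div_{(x,t)}(f(u), u) = 0, checked with a simple forward-difference
# discrete divergence on a grid. Illustrative only.
import numpy as np

def discrete_spacetime_divergence(u, flux, dx, dt):
    """Forward-difference divergence of the field (flux(u), u) on an (nx, nt) grid."""
    F = flux(u)
    div_x = (F[1:, :-1] - F[:-1, :-1]) / dx    # x-part: d f(u) / dx
    div_t = (u[:-1, 1:] - u[:-1, :-1]) / dt    # t-part: d u / dt
    return div_x + div_t                       # vanishes for exact smooth solutions

# Check on a smooth exact solution of linear advection, u_t + a u_x = 0.
a, dx, dt = 1.0, 1e-3, 1e-3
x = np.arange(0.0, 1.0, dx)[:, None]
t = np.arange(0.0, 1.0, dt)[None, :]
u = np.sin(2 * np.pi * (x - a * t))
r = discrete_spacetime_divergence(u, lambda v: a * v, dx, dt)
print(np.abs(r).max())                         # O(dx + dt): first-order residual
```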
arXiv Detail & Related papers (2021-10-21T04:50:57Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Inverse Problem of Nonlinear Schrödinger Equation as Learning of
Convolutional Neural Network [5.676923179244324]
It is shown that one can obtain a relatively accurate estimate of the considered parameters using the proposed method.
It provides a natural framework for solving inverse problems of partial differential equations with deep learning.
arXiv Detail & Related papers (2021-07-19T02:54:37Z) - Least-Squares ReLU Neural Network (LSNN) Method For Linear
Advection-Reaction Equation [3.6525914200522656]
This paper studies least-squares ReLU neural network method for solving the linear advection-reaction problem with discontinuous solution.
The method is capable of approximating the discontinuous interface of the underlying problem automatically through the free hyper-planes of the ReLU neural network.
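A small worked example of that mechanism, with illustrative step location and slope: two ReLUs with break points 1/k apart form a monotone ramp, so a unit step is matched sharply with no overshoot once training places the breaks at the interface.

```python
# Free hyper-plane mechanism: ReLU(k(x - x0)) - ReLU(k(x - x0) - 1) rises
# linearly from 0 to 1 on [x0, x0 + 1/k] and is constant elsewhere, giving a
# sharp, overshoot-free step approximation. x0 and k are illustrative.
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def ramp_step(x, x0=0.5, k=200.0):
    return relu(k * (x - x0)) - relu(k * (x - x0) - 1.0)

x = np.linspace(0.0, 1.0, 200001)
err = np.abs(ramp_step(x) - (x > 0.5).astype(float))
print(err.mean())   # L1 error ~ 1/(2k) = 0.0025, confined to the ramp; no Gibbs overshoot
```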
arXiv Detail & Related papers (2021-05-25T03:13:15Z) - Local Extreme Learning Machines and Domain Decomposition for Solving
Linear and Nonlinear Partial Differential Equations [0.0]
We present a neural network-based method for solving linear and nonlinear partial differential equations.
The method combines the ideas of extreme learning machines (ELM), domain decomposition and local neural networks.
We compare the current method with the deep Galerkin method (DGM) and the physics-informed neural network (PINN) in terms of the accuracy and computational cost.
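A minimal, hypothetical sketch of the ELM ingredient on a single subdomain (width, feature scale, and target are assumptions, and the domain-decomposition coupling is omitted):

```python
# Extreme learning machine on one subdomain: hidden weights are random and
# FIXED; only the linear output weights are fit, by a single least-squares
# solve rather than gradient descent.
import numpy as np

rng = np.random.default_rng(0)
width = 200
W = rng.normal(0.0, 5.0, width)               # fixed random hidden weights
b = rng.uniform(-5.0, 5.0, width)             # fixed random hidden biases

def features(x):                              # hidden-layer output, shape (n, width)
    return np.tanh(np.outer(x, W) + b)

x_train = np.linspace(0.0, 1.0, 400)
y_train = np.sin(4 * np.pi * x_train)         # target on this subdomain
beta, *_ = np.linalg.lstsq(features(x_train), y_train, rcond=None)

x_test = np.linspace(0.0, 1.0, 1000)
err = np.abs(features(x_test) @ beta - np.sin(4 * np.pi * x_test))
print(err.max())                              # small; improves with width
```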
arXiv Detail & Related papers (2020-12-04T23:19:39Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and accepts no responsibility for any consequences of its use.