Translating Numerical Concepts for PDEs into Neural Architectures
- URL: http://arxiv.org/abs/2103.15419v1
- Date: Mon, 29 Mar 2021 08:31:51 GMT
- Title: Translating Numerical Concepts for PDEs into Neural Architectures
- Authors: Tobias Alt, Pascal Peter, Joachim Weickert, Karl Schrader
- Abstract summary: We investigate what can be learned from translating numerical algorithms into neural networks.
On the numerical side, we consider explicit, accelerated explicit, and implicit schemes for a general higher order nonlinear diffusion equation in 1D.
On the neural network side, we identify corresponding concepts in terms of residual networks (ResNets), recurrent networks, and U-nets.
- Score: 9.460896836770534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate what can be learned from translating numerical algorithms into
neural networks. On the numerical side, we consider explicit, accelerated
explicit, and implicit schemes for a general higher order nonlinear diffusion
equation in 1D, as well as linear multigrid methods. On the neural network
side, we identify corresponding concepts in terms of residual networks
(ResNets), recurrent networks, and U-nets. These connections guarantee
Euclidean stability of specific ResNets with a transposed convolution layer
structure in each block. We present three numerical justifications for skip
connections: as time discretisations in explicit schemes, as extrapolation
mechanisms for accelerating those methods, and as recurrent connections in
fixed point solvers for implicit schemes. Last but not least, we also motivate
uncommon design choices such as nonmonotone activation functions. Our findings
give a numerical perspective on the success of modern neural network
architectures, and they provide design criteria for stable networks.
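To make the correspondence above concrete, here is a minimal sketch (our own illustration, not the authors' implementation) of one explicit step of a simple second order 1D nonlinear diffusion equation written as a residual block: a difference operator acts as the inner convolution, the nonmonotone flux function g(s^2)*s plays the role of the activation, the negated transposed operator mirrors the transposed convolution layer, and the skip connection is exactly the time discretisation. A fixed point iteration for the corresponding implicit step illustrates the recurrent-connection viewpoint. The grid spacing, time step, and Perona-Malik diffusivity are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal sketch (not the authors' code): one explicit step of the 1D
# nonlinear diffusion equation  u_t = (g(u_x^2) u_x)_x  written as a
# residual block, plus an implicit step solved by a fixed point iteration
# acting as a recurrent connection.  Grid spacing, time step, and the
# Perona-Malik diffusivity are illustrative choices.

def forward_difference(n, h=1.0):
    """(n-1) x n forward-difference matrix: the inner 'convolution' of the block."""
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0 / h
    D[idx, idx + 1] = 1.0 / h
    return D

def flux(s, lam=1.0):
    """Nonmonotone 'activation' Phi(s) = g(s^2) * s with a Perona-Malik-type diffusivity g."""
    return s / (1.0 + (s / lam) ** 2)

def explicit_residual_step(u, D, tau):
    """u^{k+1} = u^k + tau * (-D^T) Phi(D u^k).
    Skip connection = time discretisation; -D^T = 'transposed convolution' layer."""
    return u + tau * (-D.T @ flux(D @ u))

def implicit_step_fixed_point(u, D, tau, iters=100):
    """Implicit step solved by the fixed point iteration
    v <- u + tau * (-D^T) Phi(D v), v_0 = u: a recurrent connection."""
    v = u.copy()
    for _ in range(iters):
        v = u + tau * (-D.T @ flux(D @ v))
    return v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 64
    x = np.linspace(0.0, 1.0, n)
    u0 = np.where(x < 0.5, 1.0, 0.0) + 0.05 * rng.standard_normal(n)  # noisy 1D edge
    D = forward_difference(n)
    u_exp = explicit_residual_step(u0, D, tau=0.2)
    u_imp = implicit_step_fixed_point(u0, D, tau=0.2)
    print("explicit step change:", np.linalg.norm(u_exp - u0))
    print("implicit step change:", np.linalg.norm(u_imp - u0))
```

Keeping the time step small relative to the Lipschitz bound of the flux makes the explicit (residual) step non-expansive, which mirrors the Euclidean stability guarantee mentioned in the abstract.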
Related papers
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Implicit regularization of deep residual networks towards neural ODEs [8.075122862553359]
We establish an implicit regularization of deep residual networks towards neural ODEs.
We prove that if the network is initialized as a discretization of a neural ODE, then such a discretization holds throughout training.
arXiv Detail & Related papers (2023-09-03T16:35:59Z)
- Predictions Based on Pixel Data: Insights from PDEs and Finite Differences [0.0]
This paper deals with approximation of time sequences where each observation is a matrix.
We show that with relatively small networks, we can represent exactly a class of numerical discretizations of PDEs based on the method of lines.
Our network architecture is inspired by those typically adopted in the approximation of time sequences.
arXiv Detail & Related papers (2023-05-01T08:54:45Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Connections between Numerical Algorithms for PDEs and Neural Networks [8.660429288575369]
We investigate numerous structural connections between numerical algorithms for partial differential equations (PDEs) and neural networks.
Our goal is to transfer the rich set of mathematical foundations from the world of PDEs to neural networks.
arXiv Detail & Related papers (2021-07-30T16:42:45Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- SPINN: Sparse, Physics-based, and Interpretable Neural Networks for PDEs [0.0]
We introduce a class of Sparse, Physics-based, and Interpretable Neural Networks (SPINN) for solving ordinary and partial differential equations.
By reinterpreting a traditional meshless representation of solutions of PDEs as a special sparse deep neural network, we develop a class of sparse neural network architectures that are interpretable.
arXiv Detail & Related papers (2021-02-25T17:45:50Z)
- On reaction network implementations of neural networks [0.0]
This paper is concerned with the utilization of deterministically modeled chemical reaction networks for the implementation of (feed-forward) neural networks.
We develop a general mathematical framework and prove that the ordinary differential equations (ODEs) associated with certain reaction network implementations of neural networks have desirable properties.
arXiv Detail & Related papers (2020-10-26T02:37:26Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)