Quadratic Residual Networks: A New Class of Neural Networks for Solving
Forward and Inverse Problems in Physics Involving PDEs
- URL: http://arxiv.org/abs/2101.08366v2
- Date: Thu, 28 Jan 2021 01:51:50 GMT
- Title: Quadratic Residual Networks: A New Class of Neural Networks for Solving
Forward and Inverse Problems in Physics Involving PDEs
- Authors: Jie Bu, Anuj Karpatne
- Abstract summary: Quadratic residual networks (QRes) are a new type of parameter-efficient neural network architecture.
We show that QRes is especially powerful for solving forward and inverse physics problems involving partial differential equations (PDEs).
We empirically show that QRes converges faster, in terms of the number of training epochs, especially when learning complex patterns.
- Score: 2.8718027848324317
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We propose quadratic residual networks (QRes) as a new type of
parameter-efficient neural network architecture, formed by adding a quadratic
residual term to the weighted sum of inputs before applying activation
functions. We show that, given sufficiently high functional capacity (or
expressive power), QRes is especially powerful for solving forward and inverse
physics problems involving partial differential equations (PDEs). Using tools
from algebraic geometry, we theoretically demonstrate that, in contrast to
plain neural networks, QRes achieves better parameter efficiency in terms of
network width and depth thanks to the higher non-linearity in every neuron.
Finally, we empirically show that QRes converges faster, in terms of the
number of training epochs, especially when learning complex patterns.
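The abstract describes the layer closely enough to sketch it: the pre-activation is the usual weighted sum of inputs plus a quadratic residual term. A minimal PyTorch sketch follows; parameterizing the quadratic term as the elementwise product of two parallel linear maps is an assumption of this sketch, not necessarily the paper's exact factorization.

```python
import torch
import torch.nn as nn


class QResLayer(nn.Module):
    """Quadratic-residual layer: the usual weighted sum of inputs is
    augmented with a quadratic residual term before the activation.
    Forming that term as the elementwise product of two linear maps
    is an assumption of this sketch."""

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.w1 = nn.Linear(in_features, out_features)               # weighted sum of inputs
        self.w2 = nn.Linear(in_features, out_features, bias=False)   # second map for the quadratic term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.w1(x)
        # quadratic residual (W2 x) * (W1 x) added back onto the weighted sum
        return torch.tanh(self.w2(x) * h + h)


# A QRes network is a stack of such layers, e.g. a PDE surrogate mapping (x, t) -> u(x, t):
model = nn.Sequential(QResLayer(2, 64), QResLayer(64, 64), nn.Linear(64, 1))
```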
Related papers
- GradINN: Gradient Informed Neural Network [2.287415292857564]
We propose a methodology inspired by Physics-Informed Neural Networks (PINNs).
GradINNs leverage prior beliefs about a system's gradient to constrain the predicted function's gradient across all input dimensions.
We demonstrate the advantages of GradINNs, particularly in low-data regimes, on diverse problems spanning non-time-dependent systems.
arXiv Detail & Related papers (2024-09-03T14:03:29Z)
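A minimal sketch of the gradient-constraint idea in the GradINN entry above, assuming the prior belief enters as a squared penalty between the network's input gradient and a user-supplied prior field (the paper's exact loss may differ):

```python
import torch

def gradient_penalty(model, x: torch.Tensor, grad_prior: torch.Tensor) -> torch.Tensor:
    """Hypothetical GradINN-style term: squared mismatch between the network's
    gradient w.r.t. its inputs and a prior belief about that gradient,
    enforced across all input dimensions."""
    x = x.requires_grad_(True)
    u = model(x)                                            # (N, 1) predictions
    du_dx, = torch.autograd.grad(u.sum(), x, create_graph=True)
    return ((du_dx - grad_prior) ** 2).mean()               # deviation from the prior
```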
- Grad-Shafranov equilibria via data-free physics informed neural networks [0.0]
We show that PINNs can accurately and effectively solve the Grad-Shafranov equation with several different boundary conditions.
We introduce a parameterized PINN framework, expanding the input space to include variables such as pressure, aspect ratio, elongation, and triangularity.
arXiv Detail & Related papers (2023-11-22T16:08:38Z)
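The parameterized-PINN idea above amounts to widening the network input from spatial coordinates alone to coordinates plus equilibrium parameters; a hypothetical sketch in which all names and layer sizes are illustrative:

```python
import torch
import torch.nn as nn

# Hypothetical parameterized-PINN input layout: the network sees the spatial
# coordinates (R, Z) together with shape/profile parameters, so one trained
# model covers a family of Grad-Shafranov equilibria.
pinn = nn.Sequential(nn.Linear(6, 64), nn.Tanh(),
                     nn.Linear(64, 64), nn.Tanh(),
                     nn.Linear(64, 1))

R, Z = torch.rand(128, 1), torch.rand(128, 1)
shape_params = torch.rand(128, 4)   # pressure, aspect ratio, elongation, triangularity
psi = pinn(torch.cat([R, Z, shape_params], dim=1))   # surrogate flux psi(R, Z; params)
```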
- HyperLoRA for PDEs [7.898728380447954]
Physics-informed neural networks (PINNs) have been widely used to develop neural surrogates for solutions of Partial Differential Equations.
A drawback of PINNs is that they have to be retrained with every change in initial-boundary conditions and PDE coefficients.
The Hypernetwork, a model-based meta-learning technique, takes a parameterized task embedding as input and predicts the weights of the PINN as output.
arXiv Detail & Related papers (2023-08-18T04:29:48Z)
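A hedged sketch of the generic hypernetwork mechanism in the entry above: a task embedding (e.g. encoded PDE coefficients and initial-boundary conditions) is mapped to the weights of a small target PINN. HyperLoRA itself predicts low-rank (LoRA-style) weight updates; the flat-weight version below only illustrates the mechanism, and all names and sizes are assumptions.

```python
import torch
import torch.nn as nn

# Shapes of a tiny 2-32-1 target PINN: (w1, b1, w2, b2). Illustrative only.
TARGET_SHAPES = [(32, 2), (32,), (1, 32), (1,)]
n_params = sum(torch.Size(s).numel() for s in TARGET_SHAPES)

# Hypernetwork: 8-dim task embedding -> flattened weights of the target PINN.
hyper = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, n_params))

def pinn_forward(x: torch.Tensor, flat: torch.Tensor) -> torch.Tensor:
    """Run the target PINN with weights sliced out of the hypernetwork output."""
    chunks, i = [], 0
    for shape in TARGET_SHAPES:
        n = torch.Size(shape).numel()
        chunks.append(flat[i:i + n].view(shape))
        i += n
    w1, b1, w2, b2 = chunks
    return torch.tanh(x @ w1.T + b1) @ w2.T + b2

task_embedding = torch.rand(8)               # one task = one PDE configuration
x = torch.rand(16, 2)                        # (x, t) collocation points
u = pinn_forward(x, hyper(task_embedding))   # per-task PINN prediction, no retraining
```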
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
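One plausible reading of the coarser-resolution decomposition above is spatial staggering: a fine grid is split into interleaved coarse sub-grids, each cheap enough for a lightweight solver, and the sub-predictions are interleaved back. A sketch under that assumption:

```python
import torch

def stagger_2d(field: torch.Tensor, s: int):
    """Split a fine (H, W) field into s*s interleaved coarse sub-fields of
    shape (H//s, W//s); each sub-field is a coarser-resolution subtask."""
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def unstagger_2d(subs, s: int) -> torch.Tensor:
    """Interleave the coarse predictions back onto the fine grid."""
    H, W = subs[0].shape
    out = torch.empty(H * s, W * s, dtype=subs[0].dtype)
    for k, sub in enumerate(subs):
        i, j = divmod(k, s)
        out[i::s, j::s] = sub
    return out

u = torch.rand(64, 64)
assert torch.equal(unstagger_2d(stagger_2d(u, 2), 2), u)   # lossless round trip
```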
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced-order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
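A brief sketch of the standard POD construction the entry above builds on: stack solution snapshots, take an SVD, and keep the leading left-singular vectors as the reduced basis (how the paper's networks are regressed onto this basis is not shown here).

```python
import torch

snapshots = torch.rand(1024, 200)   # 200 solution snapshots on 1024 grid points
U, S, Vh = torch.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :10]                   # leading 10 POD modes
coeffs = basis.T @ snapshots        # reduced-order coordinates of each snapshot
approx = basis @ coeffs             # rank-10 reconstruction of the snapshots
```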
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures of artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Scaling Properties of Deep Residual Networks [2.6763498831034043]
We investigate the properties of weights trained by gradient descent and their scaling with network depth through numerical experiments.
We observe the existence of scaling regimes markedly different from those assumed in neural ODE literature.
These findings cast doubt on the validity of the neural ODE model as an adequate description of deep ResNets.
arXiv Detail & Related papers (2021-05-25T22:31:30Z)
- Physics-informed attention-based neural network for solving non-linear partial differential equations [6.103365780339364]
Physics-Informed Neural Networks (PINNs) have enabled significant improvements in modelling physical processes.
PINNs are based on simple architectures, and learn the behavior of complex physical systems by optimizing the network parameters to minimize the residual of the underlying PDE.
Here, we address the question of which network architectures are best suited to learn the complex behavior of non-linear PDEs.
arXiv Detail & Related papers (2021-05-17T14:29:08Z)
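The entry above restates the generic PINN recipe: penalize the residual of the governing PDE at collocation points. A minimal sketch for the 1D heat equation u_t = u_xx, with a plain MLP standing in for the paper's attention-based architecture:

```python
import torch
import torch.nn as nn

# Plain MLP surrogate u(x, t); the paper studies attention-based variants.
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))

def pde_residual(xt: torch.Tensor) -> torch.Tensor:
    """Residual of the 1D heat equation u_t - u_xx = 0 at points xt = (x, t)."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    grads, = torch.autograd.grad(u.sum(), xt, create_graph=True)
    u_x, u_t = grads[:, :1], grads[:, 1:]
    grads2, = torch.autograd.grad(u_x.sum(), xt, create_graph=True)
    u_xx = grads2[:, :1]
    return u_t - u_xx

# Training minimizes the mean squared residual over collocation points
# (boundary and initial-condition terms are omitted for brevity).
loss = (pde_residual(torch.rand(256, 2)) ** 2).mean()
```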
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.