Deep neural networks for smooth approximation of physics with higher
order and continuity B-spline base functions
- URL: http://arxiv.org/abs/2201.00904v1
- Date: Mon, 3 Jan 2022 23:02:39 GMT
- Title: Deep neural networks for smooth approximation of physics with higher
order and continuity B-spline base functions
- Authors: Kamil Doległo, Anna Paszyńska, Maciej Paszyński, Leszek Demkowicz
- Abstract summary: Traditionally, the neural network employs non-linear activation functions to approximate a given physical phenomenon.
We present an alternative approach, where the physical quantity is approximated as a linear combination of smooth B-spline basis functions.
We show that our approach is cheaper and more accurate when approximating physical fields.
- Score: 0.4588028371034407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper deals with the following important research question.
Traditionally, neural networks employ non-linear activation functions
concatenated with linear operators to approximate a given physical phenomenon.
They "fill the space" with concatenations of activation functions and
linear operators and adjust their coefficients to approximate the physical
phenomena. We claim that it is better to "fill the space" with linear
combinations of smooth higher-order B-splines base functions as employed by
isogeometric analysis and utilize the neural networks to adjust the
coefficients of linear combinations. In other words, we evaluate the
possibility of using neural networks to approximate the coefficients of the
B-spline basis functions against that of approximating the solution directly.
Solving differential equations with neural networks was proposed by Maziar
Raissi et al. in 2017 with the introduction of Physics-Informed Neural Networks
(PINNs), which naturally encode underlying physical laws as prior information.
Approximating the coefficients with a function as input leverages the
well-known capability of neural networks as universal function approximators.
In essence, in the PINN approach the network approximates the value of the
given field at a given point. We present an alternative approach, where the
physical quantity is approximated as a linear combination of smooth B-spline
basis functions, and
the neural network approximates the coefficients of B-splines. This research
compares results from the DNN approximating the coefficients of the linear
combination of B-spline basis functions, with the DNN approximating the
solution directly. We show that our approach is cheaper and more accurate when
approximating smooth physical fields.
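To make the contrast with a pointwise PINN-style network concrete, below is a minimal sketch of the coefficient-prediction idea; it is not the authors' code. The parametric field u(x; a) = sin(a·pi·x), the network size, and the use of SciPy's BSpline.design_matrix together with a small PyTorch MLP are all illustrative assumptions. The network maps a problem parameter to the B-spline coefficients c_j, and the field is reconstructed as the linear combination sum_j c_j B_j(x).

```python
# Minimal sketch (not the authors' implementation): a small neural network
# predicts B-spline coefficients, and the field is rebuilt as sum_j c_j B_j(x).
# Assumes SciPy >= 1.8 (for BSpline.design_matrix) and PyTorch are available.
import numpy as np
import torch
from scipy.interpolate import BSpline

degree = 3                                   # cubic B-splines (C^2 continuity)
n_coefs = 12                                 # number of basis functions
# Clamped knot vector on [0, 1]
interior = np.linspace(0.0, 1.0, n_coefs - degree + 1)
knots = np.concatenate([[0.0] * degree, interior, [1.0] * degree])

x = np.linspace(0.0, 1.0, 200)
# Design matrix B[i, j] = B_j(x_i); it is constant, so gradients flow only
# into the predicted coefficients.
B = torch.tensor(BSpline.design_matrix(x, knots, degree).toarray(),
                 dtype=torch.float32)

# Hypothetical smooth parametric field to approximate: u(x; a) = sin(a * pi * x)
def field(a, x):
    return np.sin(a * np.pi * x)

# Small MLP mapping the scalar parameter a to the n_coefs spline coefficients
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, n_coefs),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    a = np.random.uniform(1.0, 3.0)                        # sample a parameter
    u_true = torch.tensor(field(a, x), dtype=torch.float32)
    coefs = net(torch.tensor([[a]], dtype=torch.float32))  # predicted c_j
    u_pred = B @ coefs.squeeze(0)                          # sum_j c_j B_j(x_i)
    loss = torch.mean((u_pred - u_true) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final MSE:", float(loss))
```

A PINN-style baseline would instead feed (x, a) into the network and read off the field value point by point; in the sketch above the network only has to produce a short coefficient vector, and the smoothness of the reconstruction is inherited from the higher-order B-spline basis, which is the representational advantage the paper argues for.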
Related papers
- Physics-informed neural wavefields with Gabor basis functions [4.07926531936425]
We propose an approach to enhance the efficiency and accuracy of neural network wavefield solutions.
Specifically, for the Helmholtz equation, we augment the fully connected neural network model with a Gabor layer constituting the final hidden layer.
The coefficients of the Gabor functions are learned from the previous hidden layers, which include nonlinear activation functions.
arXiv Detail & Related papers (2023-10-16T17:30:33Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Do deep neural networks have an inbuilt Occam's razor? [1.1470070927586016]
We show that structured data, combined with an intrinsic Occam's razor-like inductive bias towards (Kolmogorov) simple functions that is strong enough to counteract the exponential growth of functions with complexity, is a key to the success of DNNs.
arXiv Detail & Related papers (2023-04-13T16:58:21Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z)
- Tensor-based framework for training flexible neural networks [9.176056742068813]
We propose a new learning algorithm which solves a constrained coupled matrix-tensor factorization (CMTF) problem.
The proposed algorithm can handle different basis decompositions.
The goal of this method is to compress large pretrained NN models by replacing one or multiple layers of the original network with a new flexible layer.
arXiv Detail & Related papers (2021-06-25T10:26:48Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Measuring Model Complexity of Neural Networks with Curve Activation Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation function.
We experimentally explore the training process of neural networks and detect overfitting.
We find that the $L^1$ and $L^2$ regularizations suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z)