Physics-informed neural wavefields with Gabor basis functions
- URL: http://arxiv.org/abs/2310.10602v1
- Date: Mon, 16 Oct 2023 17:30:33 GMT
- Title: Physics-informed neural wavefields with Gabor basis functions
- Authors: Tariq Alkhalifah and Xinquan Huang
- Abstract summary: We propose an approach to enhance the efficiency and accuracy of neural network wavefield solutions.
Specifically, for the Helmholtz equation, we augment the fully connected neural network model with an adaptable Gabor layer constituting the final hidden layer.
These weights/coefficients of the Gabor functions are learned from the previous hidden layers that include nonlinear activation functions.
- Score: 4.07926531936425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, Physics-Informed Neural Networks (PINNs) have gained significant
attention for their versatile interpolation capabilities in solving partial
differential equations (PDEs). Despite their potential, the training can be
computationally demanding, especially for intricate functions like wavefields.
This is primarily due to the neural-based (learned) basis functions, biased
toward low frequencies, as they are dominated by polynomial calculations, which
are not inherently wavefield-friendly. In response, we propose an approach to
enhance the efficiency and accuracy of neural network wavefield solutions by
modeling them as linear combinations of Gabor basis functions that satisfy the
wave equation. Specifically, for the Helmholtz equation, we augment the fully
connected neural network model with an adaptable Gabor layer constituting the
final hidden layer, employing a weighted summation of these Gabor neurons to
compute the predictions (output). These weights/coefficients of the Gabor
functions are learned from the previous hidden layers that include nonlinear
activation functions. To ensure the Gabor layer's utilization across the model
space, we incorporate a smaller auxiliary network to forecast the center of
each Gabor function based on input coordinates. Realistic assessments showcase
the efficacy of this novel implementation compared to the vanilla PINN,
particularly in scenarios involving high-frequencies and realistic models that
are often challenging for PINNs.
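The architecture described above lends itself to a compact sketch. The following is a minimal, illustrative PyTorch implementation, not the authors' released code: a fully connected trunk produces the weights/coefficients, a smaller auxiliary network forecasts the Gabor centers from the input coordinates, and the prediction is a weighted summation of the Gabor neurons. The exact Gabor parameterization (Gaussian envelope times sinusoidal carrier), all layer sizes, and the names GaborWavefield and helmholtz_residual are assumptions for illustration.

```python
# Minimal sketch of the Gabor-layer PINN idea (illustrative; not the
# authors' code). Assumes a 2D Helmholtz problem with inputs (x, z).
import torch
import torch.nn as nn


class GaborWavefield(nn.Module):
    def __init__(self, n_gabor=64, hidden=128):
        super().__init__()
        self.n_gabor = n_gabor
        # Fully connected trunk with nonlinear activations: learns the
        # weights/coefficients of the Gabor functions from the coordinates.
        self.trunk = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_gabor),
        )
        # Smaller auxiliary network: forecasts the center of each Gabor
        # function based on the input coordinates.
        self.center_net = nn.Sequential(
            nn.Linear(2, 32), nn.Tanh(),
            nn.Linear(32, 2 * n_gabor),
        )
        # Per-neuron wavenumber, phase, and envelope width (assumed trainable).
        self.freq = nn.Parameter(torch.randn(n_gabor, 2))
        self.phase = nn.Parameter(torch.zeros(n_gabor))
        self.log_sigma = nn.Parameter(torch.zeros(n_gabor))

    def forward(self, xz):                       # xz: (batch, 2)
        coeff = self.trunk(xz)                   # (batch, n_gabor)
        mu = self.center_net(xz).view(-1, self.n_gabor, 2)
        d = xz.unsqueeze(1) - mu                 # offsets from the centers
        envelope = torch.exp(-(d ** 2).sum(-1) / (2 * self.log_sigma.exp() ** 2))
        carrier = torch.sin((d * self.freq).sum(-1) + self.phase)
        # Prediction: weighted summation of the Gabor neurons.
        return (coeff * envelope * carrier).sum(-1, keepdim=True)


def helmholtz_residual(model, xz, k, f):
    """PINN loss term: residual of the Helmholtz equation
    u_xx + u_zz + k^2 u = f, computed with automatic differentiation."""
    xz = xz.clone().requires_grad_(True)
    u = model(xz)
    g = torch.autograd.grad(u.sum(), xz, create_graph=True)[0]
    u_xx = torch.autograd.grad(g[:, 0].sum(), xz, create_graph=True)[0][:, 0]
    u_zz = torch.autograd.grad(g[:, 1].sum(), xz, create_graph=True)[0][:, 1]
    return u_xx + u_zz + (k ** 2) * u.squeeze(-1) - f
```

Training would then minimize the mean squared residual over sampled collocation points. For simplicity the sketch treats a real-valued wavefield, whereas Helmholtz solutions are typically complex-valued; the key design choice it illustrates is that predicting the centers from the coordinates, rather than fixing them, keeps the Gabor neurons utilized across the whole model space.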
Related papers
- GaborPINN: Efficient physics informed neural networks using
multiplicative filtered networks [0.0]
Physics-informed neural networks (PINNs) provide functional wavefield solutions represented by neural networks (NNs).
We propose a modified PINN using multiplicative filtered networks, which embeds some of the known characteristics of the wavefield in training.
The proposed method achieves up to a two-order-of-magnitude increase in the speed of convergence compared with conventional PINNs.
arXiv Detail & Related papers (2023-08-10T19:51:00Z) - Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z) - Towards a Foundation Model for Neural Network Wavefunctions [5.145741425164946]
We propose a novel neural network ansatz, which maps uncorrelated, computationally cheap Hartree-Fock orbitals to correlated, high-accuracy neural network orbitals.
This ansatz is inherently capable of learning a single wavefunction across multiple compounds and geometries.
arXiv Detail & Related papers (2023-03-17T16:03:10Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - Hierarchical Spherical CNNs with Lifting-based Adaptive Wavelets for
Pooling and Unpooling [101.72318949104627]
We propose a novel framework of hierarchical convolutional neural networks (HS-CNNs) with a lifting structure to learn adaptive spherical wavelets for pooling and unpooling.
LiftHS-CNN ensures more efficient hierarchical feature learning for both image- and pixel-level tasks.
arXiv Detail & Related papers (2022-05-31T07:23:42Z) - Deep Neural Network Classifier for Multi-dimensional Functional Data [4.340040784481499]
We propose a new approach, called functional deep neural network (FDNN), for classifying multi-dimensional functional data.
Specifically, a deep neural network is trained on the principal components of the training data and is then used to predict the class label of a future data function.
arXiv Detail & Related papers (2022-05-17T19:22:48Z) - Deep neural networks for smooth approximation of physics with higher
order and continuity B-spline base functions [0.4588028371034407]
Traditionally, neural networks employ non-linear activation functions to approximate a given physical phenomenon.
We present an alternative approach, where the physical quantity is approximated as a linear combination of smooth B-spline basis functions.
We show that our approach is cheaper and more accurate when approximating physical fields.
arXiv Detail & Related papers (2022-01-03T23:02:39Z) - Going Beyond Linear RL: Sample Efficient Neural Function Approximation [76.57464214864756]
We study function approximation with two-layer neural networks.
Our results significantly improve upon what can be attained with linear (or eluder dimension) methods.
arXiv Detail & Related papers (2021-07-14T03:03:56Z) - On the eigenvector bias of Fourier feature networks: From regression to
solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
arXiv Detail & Related papers (2020-12-18T04:19:30Z) - Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with
an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A \leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z) - Modeling from Features: a Mean-field Framework for Over-parameterized
Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.