Multi-scale Deep Neural Network (MscaleDNN) for Solving
Poisson-Boltzmann Equation in Complex Domains
- URL: http://arxiv.org/abs/2007.11207v3
- Date: Mon, 28 Sep 2020 07:50:51 GMT
- Title: Multi-scale Deep Neural Network (MscaleDNN) for Solving
Poisson-Boltzmann Equation in Complex Domains
- Authors: Ziqi Liu, Wei Cai, Zhi-Qin John Xu
- Abstract summary: We propose multi-scale deep neural networks (MscaleDNNs) using the idea of radial scaling in frequency domain and activation functions with compact support.
As a result, the MscaleDNNs achieve fast uniform convergence over multiple scales.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose multi-scale deep neural networks (MscaleDNNs) using
the idea of radial scaling in frequency domain and activation functions with
compact support. The radial scaling converts the problem of approximation of
high frequency contents of PDEs' solutions to a problem of learning about lower
frequency functions, and the compact support activation functions facilitate
the separation of frequency contents of the target function to be approximated
by corresponding DNNs. As a result, the MscaleDNNs achieve fast uniform
convergence over multiple scales. The proposed MscaleDNNs are shown to be
superior to traditional fully connected DNNs and be an effective mesh-less
numerical method for Poisson-Boltzmann equations with ample frequency contents
over complex and singular domains.
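The two ingredients above can be sketched in a few lines: each scale gets its own small subnetwork whose input is the original coordinate multiplied by a radial scale factor, and a compact-support activation keeps each subnetwork's frequency content localized. A minimal forward-pass sketch; the specific scale factors, widths, Gaussian initialization, and the sReLU-style activation are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def srelu(x):
    # Compact-support activation: nonzero only on (0, 1).
    return np.maximum(x, 0.0) * np.maximum(1.0 - x, 0.0)

class MscaleNet:
    """Sketch of an MscaleDNN forward pass: one small subnetwork per
    scale, each seeing the input multiplied by a radial scale factor,
    outputs summed at the end."""
    def __init__(self, dim, width=16, scales=(1, 2, 4, 8), seed=0):
        rng = np.random.default_rng(seed)
        self.scales = scales
        # One (W1, b1, w2) triple per scale (single hidden layer each).
        self.params = [
            (rng.normal(0.0, 1.0, (width, dim)),
             rng.normal(0.0, 1.0, width),
             rng.normal(0.0, 1.0, width) / width)
            for _ in scales
        ]

    def forward(self, x):
        # x: (n, dim).  Sum the per-scale subnetwork outputs.
        out = np.zeros(len(x))
        for a, (W1, b1, w2) in zip(self.scales, self.params):
            h = srelu((a * x) @ W1.T + b1)  # scale the input by a, then hidden layer
            out += h @ w2
        return out
```

In training, the larger scale factors shift high-frequency components of the target down into the low-frequency range that each subnetwork learns quickly, which is what yields the uniform convergence across scales claimed in the abstract.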
Related papers
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
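NWoS trains a network on stochastic estimates produced by the classical walk-on-spheres recursion. The underlying estimator, shown here for the 2-D Laplace equation on the unit disk with hypothetical helper names, and omitting both the Poisson source term and the neural network, is:

```python
import numpy as np

def walk_on_spheres(x0, boundary_g, dist_to_boundary, eps=1e-3, rng=None):
    """One walk-on-spheres sample of u(x0) for the Laplace equation:
    repeatedly jump to a uniform point on the largest sphere inside the
    domain until within eps of the boundary, then read the boundary data."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.array(x0, dtype=float)
    while True:
        r = dist_to_boundary(x)
        if r < eps:                      # close enough: evaluate boundary data
            return boundary_g(x)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        # Jump uniformly to the circle of radius r around x (2-D case).
        x = x + r * np.array([np.cos(theta), np.sin(theta)])

# Example domain: unit disk; u(x, y) = x is harmonic, so g is the first coordinate.
dist = lambda x: 1.0 - np.linalg.norm(x)
g = lambda x: x[0]
```

Averaging many such walks gives an unbiased Monte Carlo estimate of the solution at a point; NWoS replaces the per-point averaging with a trained network.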
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
- Deep Neural Network Solutions for Oscillatory Fredholm Integral Equations [12.102640617194025]
We develop a numerical method for solving the equation with DNNs as an approximate solution.
We then propose a multi-grade deep learning (MGDL) model to overcome the spectral bias issue of neural networks.
arXiv Detail & Related papers (2024-01-13T07:26:47Z)
- Multi-Grid Tensorized Fourier Neural Operator for High-Resolution PDEs [93.82811501035569]
We introduce a new data efficient and highly parallelizable operator learning approach with reduced memory requirement and better generalization.
MG-TFNO scales to large resolutions by leveraging local and global structures of full-scale, real-world phenomena.
We demonstrate superior performance on the turbulent Navier-Stokes equations where we achieve less than half the error with over 150x compression.
arXiv Detail & Related papers (2023-09-29T20:18:52Z)
- Sparse Deep Neural Network for Nonlinear Partial Differential Equations [3.0069322256338906]
This paper is devoted to a numerical study of adaptive approximation of solutions of nonlinear partial differential equations.
We develop deep neural networks (DNNs) with a sparse regularization with multiple parameters to represent functions having certain singularities.
Numerical examples confirm that solutions generated by the proposed SDNN are sparse and accurate.
arXiv Detail & Related papers (2022-07-27T03:12:16Z)
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- Subspace Decomposition based DNN algorithm for elliptic-type multi-scale PDEs [19.500646313633446]
We construct a subspace decomposition based DNN (dubbed SD$2$NN) architecture for a class of multi-scale problems.
A novel trigonometric activation function is incorporated in the SD$2$NN model.
Numerical results show that the SD$2$NN model is superior to existing models such as MscaleDNN.
arXiv Detail & Related papers (2021-12-10T08:26:27Z)
- PINNup: Robust neural network wavefield solutions using frequency upscaling and neuron splitting [0.0]
We propose a novel implementation of PINN using frequency upscaling and neuron splitting.
The proposed PINN exhibits notable superiority in terms of convergence and accuracy.
It can achieve neuron based high-frequency wavefield solutions with a two-hidden-layer model.
arXiv Detail & Related papers (2021-09-29T16:35:50Z)
- On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
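The coordinate embedding in question is a random Fourier feature map; sampling the frequency matrix at several bandwidths yields the multi-scale variant. A minimal sketch, assuming illustrative bandwidths and feature counts (the paper's exact architecture is not reproduced here):

```python
import numpy as np

def fourier_features(x, B):
    """Random Fourier feature embedding gamma(x) = [cos(2*pi*B x), sin(2*pi*B x)].
    The network then operates on gamma(x) instead of the raw coordinate x."""
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
# One frequency matrix per scale; the bandwidths (1, 10, 100) are illustrative.
Bs = [rng.normal(0.0, sigma, (64, 1)) for sigma in (1.0, 10.0, 100.0)]
x = np.linspace(0.0, 1.0, 8)[:, None]
embeddings = [fourier_features(x, B) for B in Bs]  # each fed to its own subnetwork
```

Larger bandwidths bias the corresponding subnetwork toward higher-frequency content, which is how such embeddings counteract the spectral bias discussed above.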
arXiv Detail & Related papers (2020-12-18T04:19:30Z)
- Learning to Beamform in Heterogeneous Massive MIMO Networks [48.62625893368218]
Finding the optimal beamformers in massive multiple-input multiple-output (MIMO) networks is a well-known problem.
We propose a novel deep learning based algorithm to address this problem.
arXiv Detail & Related papers (2020-11-08T12:48:06Z)
- Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A\leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating the physics-based data required for training.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
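The linear-complexity, all-ranges claim above is reminiscent of multigrid and fast-multipole schemes: pool to coarser levels, mix there, and interpolate back. A toy 1-D cartoon of that pattern, with illustrative restriction/prolongation operators, and not the paper's graph-kernel architecture:

```python
import numpy as np

def multilevel_aggregate(x, levels=3):
    """Toy multi-level aggregation on a 1-D chain: long-range
    interactions are captured by averaging down to coarser levels,
    then interpolating back and combining with the fine-level signal.
    Each level halves the work, so the total cost stays O(n)."""
    pyramid = [np.asarray(x, dtype=float)]
    for _ in range(levels - 1):
        c = pyramid[-1]
        pyramid.append(0.5 * (c[0::2] + c[1::2]))  # restrict: average pairs
    out = pyramid[-1]
    for fine in reversed(pyramid[:-1]):
        up = np.repeat(out, 2)                     # prolong to the finer level
        out = fine + up                            # combine local and coarse info
    return out
```

In the actual operator, the per-level mixing is a learned graph kernel rather than this fixed averaging, but the V-cycle structure that delivers linear complexity is the same.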
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.