Multi-scale Deep Neural Network (MscaleDNN) for Solving
Poisson-Boltzmann Equation in Complex Domains
- URL: http://arxiv.org/abs/2007.11207v3
- Date: Mon, 28 Sep 2020 07:50:51 GMT
- Title: Multi-scale Deep Neural Network (MscaleDNN) for Solving
Poisson-Boltzmann Equation in Complex Domains
- Authors: Ziqi Liu, Wei Cai, Zhi-Qin John Xu
- Abstract summary: We propose multi-scale deep neural networks (MscaleDNNs) using the idea of radial scaling in frequency domain and activation functions with compact support.
As a result, the MscaleDNNs achieve fast uniform convergence over multiple scales.
- Score: 12.09637784919702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose multi-scale deep neural networks (MscaleDNNs) using
the idea of radial scaling in frequency domain and activation functions with
compact support. The radial scaling converts the problem of approximation of
high frequency contents of PDEs' solutions to a problem of learning about lower
frequency functions, and the compact support activation functions facilitate
the separation of frequency contents of the target function to be approximated
by corresponding DNNs. As a result, the MscaleDNNs achieve fast uniform
convergence over multiple scales. The proposed MscaleDNNs are shown to be
superior to traditional fully connected DNNs and to be an effective mesh-less
numerical method for Poisson-Boltzmann equations with ample frequency contents
over complex and singular domains.
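The two ingredients of the abstract (radial scaling of the input and compact-support activations) can be sketched in a minimal forward pass. This is an illustrative NumPy sketch in the spirit of the paper, not the authors' implementation; the `srelu` activation, the powers-of-two scales, and the layer sizes are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def srelu(x):
    # Compact-support activation: relu(x) * relu(1 - x),
    # nonzero only on the interval (0, 1).
    return np.maximum(x, 0.0) * np.maximum(1.0 - x, 0.0)

def init_subnet(d_in, d_hidden, d_out):
    # One small fully connected sub-network (weights, biases).
    return (rng.standard_normal((d_in, d_hidden)),
            np.zeros(d_hidden),
            rng.standard_normal((d_hidden, d_out)),
            np.zeros(d_out))

def mscale_forward(x, subnets, scales):
    # Each sub-network sees the input multiplied by its own scale a_i,
    # which shifts high-frequency content down to frequencies a plain
    # DNN learns quickly; the sub-network outputs are summed.
    out = 0.0
    for a, (W1, b1, W2, b2) in zip(scales, subnets):
        h = srelu((a * x) @ W1 + b1)
        out = out + h @ W2 + b2
    return out

scales = [2.0 ** i for i in range(4)]            # radial scales 1, 2, 4, 8
subnets = [init_subnet(1, 16, 1) for _ in scales]
x = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
y = mscale_forward(x, subnets, scales)
```

In a real solver the summed output would be trained against a PDE residual loss; here only the multi-scale forward structure is shown.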
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - Frequency-adaptive Multi-scale Deep Neural Networks [6.338572283139395]
We propose frequency-adaptive MscaleDNNs for approximating high frequency functions.
These MscaleDNNs improve accuracy by two to three orders of magnitude compared to standard MscaleDNNs.
arXiv Detail & Related papers (2024-09-28T14:49:23Z) - Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
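NWoS builds on the classical Walk-on-Spheres estimator, which is worth seeing in isolation: from each point, jump to a uniformly random point on the largest sphere inside the domain, and stop once within a tolerance of the boundary. The sketch below is the classical Monte Carlo algorithm for the Laplace equation on the unit disk, not the paper's neural variant; the domain, boundary data, and tolerances are illustrative choices.

```python
import math
import random

def walk_on_spheres(x0, y0, g, eps=1e-3, n_walks=2000, seed=0):
    # Estimate u(x0, y0) for the Laplace equation on the unit disk
    # with boundary data u = g on the unit circle.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walks):
        x, y = x0, y0
        while True:
            r = 1.0 - math.hypot(x, y)   # distance to the unit circle
            if r < eps:
                break
            # Jump to a uniform point on the largest inscribed circle.
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += r * math.cos(theta)
            y += r * math.sin(theta)
        # Project onto the boundary and record the boundary value.
        n = math.hypot(x, y)
        total += g(x / n, y / n)
    return total / n_walks

# u(x, y) = x is harmonic, so the estimate at (0.3, 0) should be near 0.3.
est = walk_on_spheres(0.3, 0.0, g=lambda x, y: x)
```

The NWoS idea, as summarized above, is to replace such per-point Monte Carlo averages with a trained network, amortizing the cost over many evaluation points.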
arXiv Detail & Related papers (2024-06-05T17:59:22Z) - Deep Neural Network Solutions for Oscillatory Fredholm Integral
Equations [12.102640617194025]
We develop a numerical method for solving the equation with DNNs as an approximate solution.
We then propose a multi-grade deep learning (MGDL) model to overcome the spectral bias issue of neural networks.
arXiv Detail & Related papers (2024-01-13T07:26:47Z) - Blending Neural Operators and Relaxation Methods in PDE Numerical Solvers [3.2712166248850685]
HINTS is a hybrid, iterative, numerical, and transferable solver for partial differential equations.
It balances the convergence behavior across the spectrum of eigenmodes by utilizing the spectral bias of DeepONet.
It is flexible with regard to discretizations, computational domain, and boundary conditions.
arXiv Detail & Related papers (2022-08-28T19:07:54Z) - Momentum Diminishes the Effect of Spectral Bias in Physics-Informed
Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
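The effect momentum has on slowly converging modes can be illustrated on a toy quadratic whose eigenvalues stand in for NTK eigenvalues. This is a hedged sketch of heavy-ball momentum versus plain gradient descent, not the paper's PINN/NTK analysis; the matrix, step sizes, and momentum coefficient are illustrative constants.

```python
import numpy as np

# Toy quadratic f(w) = 0.5 * w^T A w with a stiff spectrum; the small
# eigenvalue plays the role of a slowly learned NTK eigenmode.
A = np.diag([1.0, 100.0])

def loss(w):
    return 0.5 * w @ A @ w

def run_gd(steps=300, lr=0.0198):
    # Plain gradient descent: the lambda = 1 mode only contracts
    # by a factor (1 - lr) per step, so it dominates the error.
    w = np.array([1.0, 1.0])
    for _ in range(steps):
        w = w - lr * (A @ w)
    return loss(w)

def run_momentum(steps=300, lr=0.03, beta=0.7):
    # Heavy-ball momentum: both modes contract at roughly sqrt(beta)
    # per step, so the slow mode no longer dominates convergence.
    w = np.array([1.0, 1.0])
    v = np.zeros(2)
    for _ in range(steps):
        v = beta * v - lr * (A @ w)
        w = w + v
    return loss(w)

gd_loss, mom_loss = run_gd(), run_momentum()
```

With these constants the momentum run drives the loss many orders of magnitude below plain gradient descent after the same number of steps, which is the intuition behind momentum diminishing the impact of a stiff eigenvalue spectrum.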
arXiv Detail & Related papers (2022-06-29T19:03:10Z) - Subspace Decomposition based DNN algorithm for elliptic-type multi-scale
PDEs [19.500646313633446]
We construct a subspace decomposition based DNN (dubbed SD$^2$NN) architecture for a class of multi-scale problems.
A novel trigonometric activation function is incorporated in the SD$^2$NN model.
Numerical results show that the SD$^2$NN model is superior to existing models such as MscaleDNN.
arXiv Detail & Related papers (2021-12-10T08:26:27Z) - PINNup: Robust neural network wavefield solutions using frequency
upscaling and neuron splitting [0.0]
We propose a novel implementation of PINN using frequency upscaling and neuron splitting.
The proposed PINN exhibits notable superiority in terms of convergence and accuracy.
It can achieve neuron-based high-frequency wavefield solutions with a two-hidden-layer model.
arXiv Detail & Related papers (2021-09-29T16:35:50Z) - Learning to Beamform in Heterogeneous Massive MIMO Networks [48.62625893368218]
Finding the optimal beamformers in massive multiple-input multiple-output (MIMO) networks is a well-known problem.
We propose a novel deep learning based algorithm to address this problem.
arXiv Detail & Related papers (2020-11-08T12:48:06Z) - Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with
an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A\leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.