Subspace Decomposition based DNN algorithm for elliptic-type multi-scale
PDEs
- URL: http://arxiv.org/abs/2112.06660v1
- Date: Fri, 10 Dec 2021 08:26:27 GMT
- Title: Subspace Decomposition based DNN algorithm for elliptic-type multi-scale
PDEs
- Authors: Xi-An Li, Zhi-Qin John Xu and Lei Zhang
- Abstract summary: We construct a subspace decomposition based DNN (dubbed SD$^2$NN) architecture for a class of multi-scale problems.
A novel trigonometric activation function is incorporated in the SD$^2$NN model.
Numerical results show that the SD$^2$NN model is superior to existing models such as MscaleDNN.
- Score: 19.500646313633446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While deep learning algorithms demonstrate great potential in scientific
computing, their application to multi-scale problems remains a major
challenge. This is manifested by the "frequency principle": neural networks
tend to learn low-frequency components first. Novel architectures such as the
multi-scale deep neural network (MscaleDNN) have been proposed to alleviate this
problem to some extent. In this paper, we construct a subspace decomposition
based DNN (dubbed SD$^2$NN) architecture for a class of multi-scale problems by
combining traditional numerical analysis ideas and MscaleDNN algorithms. The
proposed architecture includes one low frequency normal DNN submodule, and one
(or a few) high frequency MscaleDNN submodule(s), which are designed to capture
the smooth part and the oscillatory part of the multi-scale solutions,
respectively. In addition, a novel trigonometric activation function is
incorporated in the SD$^2$NN model. We demonstrate the performance of the
SD$^2$NN architecture through several benchmark multi-scale problems in regular
or irregular geometric domains. Numerical results show that the SD$^2$NN model
is superior to existing models such as MscaleDNN.
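To make the decomposition concrete, here is a minimal sketch assuming PyTorch; the layer sizes, the single high-frequency scale factor, the plain sine activation (standing in for the paper's trigonometric activation), and the summation of the submodule outputs are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class SubNet(nn.Module):
    # fully connected subnetwork; torch.sin is a stand-in for the paper's
    # trigonometric activation, whose exact form is not reproduced here
    def __init__(self, in_dim=2, width=40, depth=4, out_dim=1):
        super().__init__()
        sizes = [in_dim] + [width] * depth + [out_dim]
        self.linears = nn.ModuleList(
            nn.Linear(sizes[i], sizes[i + 1]) for i in range(len(sizes) - 1)
        )

    def forward(self, x):
        for lin in self.linears[:-1]:
            x = torch.sin(lin(x))
        return self.linears[-1](x)

class SD2NN(nn.Module):
    # one normal (low-frequency) subnetwork for the smooth part of the solution,
    # plus one or more MscaleDNN-style subnetworks fed radially scaled inputs
    # for the oscillatory part; their outputs are summed
    def __init__(self, in_dim=2, high_scales=(20.0,)):
        super().__init__()
        self.low = SubNet(in_dim)
        self.high = nn.ModuleList(SubNet(in_dim) for _ in high_scales)
        self.high_scales = high_scales

    def forward(self, x):
        u = self.low(x)
        for a, net in zip(self.high_scales, self.high):
            u = u + net(a * x)
        return u

# usage: u = SD2NN()(torch.rand(128, 2))  # 128 sample points in a 2-D domain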
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - Deep Neural Network Solutions for Oscillatory Fredholm Integral
Equations [12.102640617194025]
We develop a numerical method for solving the equation in which a DNN serves as the approximate solution.
We then propose a multi-grade deep learning (MGDL) model to overcome the spectral bias issue of neural networks.
arXiv Detail & Related papers (2024-01-13T07:26:47Z) - Stacked tensorial neural networks for reduced-order modeling of a
parametric partial differential equation [0.0]
I describe a deep neural network architecture that fuses multiple TNNs into a larger network.
I evaluate this architecture on a parametric PDE with three independent variables and three parameters.
arXiv Detail & Related papers (2023-12-21T21:44:50Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Multilevel Bayesian Deep Neural Networks [0.5892638927736115]
We consider inference associated with deep neural networks (DNNs) and in particular, trace-class neural network (TNN) priors.
TNN priors are defined on functions with infinitely many hidden units, and have strongly convergent approximations with finitely many hidden units.
In this paper, we leverage the strong convergence of TNN in order to apply Multilevel Monte Carlo (MLMC) to these models.
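For reference, the multilevel Monte Carlo idea mentioned above rests on the standard telescoping identity (generic notation, not taken from the paper): writing $P_\ell$ for the quantity of interest computed with the level-$\ell$ approximation (e.g., a TNN truncated to more hidden units as $\ell$ grows),
$$\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L} \mathbb{E}[P_\ell - P_{\ell-1}],$$
where each term on the right is estimated with an independent sampler, so most samples are drawn at the cheap coarse levels while the corrections $P_\ell - P_{\ell-1}$ shrink as the approximations converge.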
arXiv Detail & Related papers (2022-03-24T09:49:27Z) - Sub-bit Neural Networks: Learning to Compress and Accelerate Binary
Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z) - dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z) - Joint Deep Reinforcement Learning and Unfolding: Beam Selection and
Precoding for mmWave Multiuser MIMO with Lens Arrays [54.43962058166702]
Millimeter wave (mmWave) multiuser multiple-input multiple-output (MU-MIMO) systems with discrete lens arrays (DLA) have received great attention.
In this work, we investigate the joint design of a beam precoding matrix for mmWave MU-MIMO systems with DLA.
arXiv Detail & Related papers (2021-01-05T03:55:04Z) - Multi-scale Deep Neural Network (MscaleDNN) for Solving
Poisson-Boltzmann Equation in Complex Domains [12.09637784919702]
We propose multi-scale deep neural networks (MscaleDNNs) using the idea of radial scaling in the frequency domain and activation functions with compact support.
As a result, the MscaleDNNs achieve fast uniform convergence over multiple scales.
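A minimal sketch of the radial-scaling idea, assuming PyTorch; the particular compact-support activation (ReLU(z) * ReLU(1 - z)) and the scale factor are illustrative assumptions rather than the paper's exact choices.

import torch
import torch.nn as nn

def compact_act(z):
    # compactly supported activation: nonzero only for z in (0, 1)
    return torch.relu(z) * torch.relu(1.0 - z)

class MscaleBranch(nn.Module):
    # one branch of an MscaleDNN: inputs are radially scaled by a factor a,
    # so frequency content near a is shifted toward O(1) frequencies
    def __init__(self, a, in_dim=2, width=32, out_dim=1):
        super().__init__()
        self.a = a
        self.fc1 = nn.Linear(in_dim, width)
        self.fc2 = nn.Linear(width, width)
        self.out = nn.Linear(width, out_dim)

    def forward(self, x):
        h = compact_act(self.fc1(self.a * x))
        h = compact_act(self.fc2(h))
        return self.out(h)

# a full MscaleDNN would combine several such branches with scales 1, 2, 4, ...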
arXiv Detail & Related papers (2020-07-22T05:28:03Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Fractional Deep Neural Network via Constrained Optimization [0.0]
This paper introduces a novel algorithmic framework for deep neural networks (DNNs).
Fractional-DNN can be viewed as a time discretization of a nonlinear ordinary differential equation (ODE) with a fractional time derivative.
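To make "fractional in time" concrete (the paper's exact formulation may differ), models of this kind typically replace the ordinary time derivative in the ODE view of a DNN with a Caputo fractional derivative of order $\gamma \in (0,1)$,
$$\partial_t^{\gamma} u(t) = \frac{1}{\Gamma(1-\gamma)} \int_0^t (t-s)^{-\gamma}\, u'(s)\, ds,$$
whose nonlocality means that each layer of the resulting time discretization depends on the history of all earlier layers rather than only the immediately preceding one.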
arXiv Detail & Related papers (2020-04-01T21:58:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.