PINNup: Robust neural network wavefield solutions using frequency
upscaling and neuron splitting
- URL: http://arxiv.org/abs/2109.14536v1
- Date: Wed, 29 Sep 2021 16:35:50 GMT
- Title: PINNup: Robust neural network wavefield solutions using frequency
upscaling and neuron splitting
- Authors: Xinquan Huang, Tariq Alkhalifah
- Abstract summary: We propose a novel implementation of PINN using frequency upscaling and neuron splitting.
The proposed PINN exhibits notable superiority in terms of convergence and accuracy.
It can achieve neuron-based high-frequency wavefield solutions with a two-hidden-layer model.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving for the frequency-domain scattered wavefield via physics-informed
neural network (PINN) has great potential in seismic modeling and inversion.
However, when dealing with high-frequency wavefields, its accuracy and training
cost limits its applications. Thus, we propose a novel implementation of PINN
using frequency upscaling and neuron splitting, which allows the neural network
model to grow in size as we increase the frequency while leveraging the
information from the pre-trained model for lower-frequency wavefields,
resulting in fast convergence to high-accuracy solutions. Numerical results
show that, compared to the commonly used PINN with random initialization, the
proposed PINN exhibits notable superiority in terms of convergence and accuracy
and can achieve neuron-based high-frequency wavefield solutions with a
two-hidden-layer model.
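The summary above does not spell out the exact splitting rule, but the stated goal (growing the network while reusing the lower-frequency pre-trained weights) matches Net2Net-style function-preserving widening. Below is a minimal numpy sketch of that idea, under the assumption that each hidden neuron is duplicated and its outgoing weights halved so the represented wavefield is unchanged at the start of the higher-frequency training stage; `split_neurons` and `forward` are illustrative names, not the paper's API.

```python
import numpy as np

def split_neurons(W1, b1, W2):
    """Double the hidden width of y = W2 @ tanh(W1 @ x + b1).

    Each hidden neuron is duplicated; the duplicates' outgoing weights
    are halved, so the network computes the same function after splitting
    (a Net2Net-style widening, assumed here for illustration).
    """
    W1s = np.concatenate([W1, W1], axis=0)      # duplicate incoming weights
    b1s = np.concatenate([b1, b1], axis=0)      # duplicate biases
    W2s = np.concatenate([W2, W2], axis=1) / 2  # halve outgoing weights
    return W1s, b1s, W2s

def forward(x, W1, b1, W2):
    return W2 @ np.tanh(W1 @ x + b1)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 2))   # 8 hidden neurons, 2-D spatial input
b1 = rng.normal(size=8)
W2 = rng.normal(size=(1, 8))
x = rng.normal(size=2)

y_small = forward(x, W1, b1, W2)
y_big = forward(x, *split_neurons(W1, b1, W2))
assert np.allclose(y_small, y_big)  # function preserved after splitting
```

The widened network then serves as the initialization for training at the next, higher frequency, which is where the fast convergence reported in the abstract would come from.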
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - Physics-informed neural wavefields with Gabor basis functions [4.07926531936425]
We propose an approach to enhance the efficiency and accuracy of neural network wavefield solutions.
Specifically, for the Helmholtz equation, we augment the fully connected neural network model with a Gabor layer as the final hidden layer.
The coefficients of the Gabor functions are learned from the previous hidden layers, which include nonlinear activation functions.
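A Gabor atom is a Gaussian envelope multiplied by a sinusoid, which makes it a natural basis for oscillatory wavefields. A minimal 1-D numpy sketch of such a final layer follows; the parameter values and the mixing weights `w_out` are hypothetical stand-ins for quantities that, per the summary, would be produced by the preceding hidden layers.

```python
import numpy as np

def gabor_features(x, mu, gamma, omega, phi):
    """A bank of 1-D Gabor atoms: Gaussian envelope times a sinusoid."""
    return np.exp(-gamma * (x - mu) ** 2) * np.sin(omega * x + phi)

# hypothetical parameters for four atoms (centers, widths, frequencies, phases)
mu    = np.array([0.0, 0.5, 1.0, 1.5])
gamma = np.array([2.0, 2.0, 4.0, 4.0])
omega = np.array([10.0, 20.0, 10.0, 20.0])
phi   = np.zeros(4)
w_out = np.array([0.3, -0.1, 0.5, 0.2])  # learned output weights (illustrative)

feats = gabor_features(0.7, mu, gamma, omega, phi)
u = w_out @ feats  # scalar wavefield value at x = 0.7
```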
arXiv Detail & Related papers (2023-10-16T17:30:33Z) - Data-driven localized waves and parameter discovery in the massive
Thirring model via extended physics-informed neural networks with interface
zones [3.522950356329991]
We study data-driven localized wave solutions and parameter discovery in the massive Thirring (MT) model via deep learning.
For higher-order localized wave solutions, we employ the extended PINNs (XPINNs) with domain decomposition.
Experimental results show that this improved version of XPINNs reduces computational complexity and converges faster.
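In XPINN-style domain decomposition, each subdomain gets its own network, and the networks are stitched together by penalizing disagreement at shared interface points. A minimal numpy sketch of that interface term follows; `interface_loss` is an illustrative name, and the full method also matches PDE residuals across the interface zones.

```python
import numpy as np

def interface_loss(u_left, u_right, x_iface):
    """XPINN-style interface penalty: the two subdomain networks must agree
    on shared points along the interface (continuity term only; a minimal
    sketch of one ingredient of the full XPINN loss)."""
    return np.mean((u_left(x_iface) - u_right(x_iface)) ** 2)

x_iface = np.linspace(0.0, 1.0, 5)
# two hypothetical subdomain solutions standing in for trained networks
loss_match = interface_loss(np.sin, np.sin, x_iface)  # identical: zero penalty
loss_gap = interface_loss(np.sin, np.cos, x_iface)    # mismatch: positive penalty
```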
arXiv Detail & Related papers (2023-09-29T13:50:32Z) - GaborPINN: Efficient physics informed neural networks using
multiplicative filtered networks [0.0]
Physics-informed neural networks (PINNs) provide functional wavefield solutions represented by neural networks (NNs).
We propose a modified PINN using multiplicative filtered networks, which embeds some of the known characteristics of the wavefield in training.
The proposed method achieves up to a two-order-of-magnitude speedup in convergence compared with conventional PINNs.
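In a multiplicative filter network, each layer multiplies a linear map of the previous features by a fresh sinusoidal filter of the raw input, so the output is effectively a sum of sinusoids, a structure well matched to oscillatory wavefields. The numpy sketch below is an assumed minimal form of such a network (sine filters; `mfn_forward` and its parameters are illustrative, not the paper's exact architecture).

```python
import numpy as np

def mfn_forward(x, omegas, phis, Ws, bs):
    """Multiplicative filter network sketch: z_{i+1} = g_{i+1}(x) * (W_i z_i + b_i),
    where each filter g(x) = sin(omega @ x + phi) acts on the raw input x."""
    z = np.sin(omegas[0] @ x + phis[0])  # first filter applied directly to input
    for omega, phi, W, b in zip(omegas[1:], phis[1:], Ws, bs):
        z = np.sin(omega @ x + phi) * (W @ z + b)
    return z

rng = np.random.default_rng(1)
d, h = 2, 16  # input dimension, hidden width
omegas = [rng.normal(size=(h, d)) for _ in range(3)]
phis   = [rng.normal(size=h) for _ in range(3)]
Ws     = [rng.normal(size=(h, h)) for _ in range(2)]
bs     = [rng.normal(size=h) for _ in range(2)]
z = mfn_forward(rng.normal(size=d), omegas, phis, Ws, bs)
```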
arXiv Detail & Related papers (2023-08-10T19:51:00Z) - Speed Limits for Deep Learning [67.69149326107103]
Recent advances in thermodynamics allow bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z) - NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
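The summary mentions a hash-coding encoder for capturing high-frequency detail. One widely used form of this idea (an assumption here; NAF's exact encoder may differ) is an Instant-NGP-style spatial hash: each 3-D point's voxel index is hashed into a small learned feature table. A minimal single-resolution numpy sketch:

```python
import numpy as np

# large primes commonly used for spatial hashing (illustrative choice)
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def hash_encode(coords, table, resolution):
    """Map each 3-D point to a learned feature vector by hashing its voxel
    index into a fixed-size table (single-resolution sketch; multiresolution
    variants concatenate features from several grids)."""
    idx = np.floor(coords * resolution).astype(np.uint64)
    h = np.bitwise_xor.reduce(idx * PRIMES, axis=-1) % np.uint64(table.shape[0])
    return table[h]

rng = np.random.default_rng(2)
table = rng.normal(size=(64, 2))  # 64 hash buckets, 2 learned features each
feats = hash_encode(rng.random((5, 3)), table, resolution=32)
```

The table entries would be trained jointly with the downstream network, letting nearby points share features while keeping memory fixed.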
arXiv Detail & Related papers (2022-09-29T04:06:00Z) - Solving Seismic Wave Equations on Variable Velocity Models with Fourier
Neural Operator [3.2307366446033945]
We propose a new framework, the paralleled Fourier neural operator (PFNO), for efficiently training the FNO-based solver.
Numerical experiments demonstrate the high accuracy of both FNO and PFNO with complicated velocity models.
PFNO admits higher computational efficiency on large-scale testing datasets, compared with the traditional finite-difference method.
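The core FNO operation is a spectral convolution: transform the input to the Fourier domain, scale a truncated set of low frequencies by learned complex weights, and transform back. A 1-D numpy sketch of that layer follows (the weights `R` are random stand-ins for learned parameters; real FNOs stack several such layers with pointwise nonlinearities in between).

```python
import numpy as np

def spectral_conv_1d(u, R, modes):
    """FFT the signal, multiply the lowest `modes` frequencies by learned
    complex weights R, zero the remaining modes, and inverse-FFT."""
    U = np.fft.rfft(u)
    out = np.zeros_like(U)           # complex spectrum, high modes truncated
    out[:modes] = R * U[:modes]
    return np.fft.irfft(out, n=len(u))

rng = np.random.default_rng(3)
u = rng.normal(size=32)                              # input signal on a grid
R = rng.normal(size=4) + 1j * rng.normal(size=4)     # "learned" weights
v = spectral_conv_1d(u, R, modes=4)
```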
arXiv Detail & Related papers (2022-09-25T22:25:57Z) - Wave simulation in non-smooth media by PINN with quadratic neural
network and PML condition [2.7651063843287718]
The recently proposed physics-informed neural network (PINN) has achieved successful applications in solving a wide range of partial differential equations (PDEs).
In this paper, we solve the acoustic and visco-acoustic scattered-field wave equations in the frequency domain with PINN, instead of the full wave equation, to remove the source perturbation.
We show that the PML condition and quadratic neurons, as well as attenuation, improve the results, and we discuss the reasons for this improvement.
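Quadratic neurons replace the usual single inner product with a second-order response, giving each unit more expressive power for oscillatory fields. One common formulation is sketched below in numpy; this exact form is an assumption for illustration and may differ from the paper's definition.

```python
import numpy as np

def quadratic_neuron(x, w1, b1, w2, b2, w3, b3):
    """One common quadratic-neuron form (assumed here): the product of two
    linear responses plus a linear map of the squared inputs, passed through
    a nonlinearity."""
    return np.tanh((w1 @ x + b1) * (w2 @ x + b2) + w3 @ (x * x) + b3)

rng = np.random.default_rng(4)
x = rng.normal(size=3)
w1, w2, w3 = (rng.normal(size=3) for _ in range(3))
y = quadratic_neuron(x, w1, 0.1, w2, -0.2, w3, 0.0)
```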
arXiv Detail & Related papers (2022-08-16T13:29:01Z) - The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the
arXiv Detail & Related papers (2022-02-27T23:12:43Z) - Multi-Tones' Phase Coding (MTPC) of Interaural Time Difference by
Spiking Neural Network [68.43026108936029]
We propose a pure spiking neural network (SNN) based computational model for precise sound localization in the noisy real-world environment.
We implement this algorithm in a real-time robotic system with a microphone array.
The experiment results show a mean azimuth error of 13 degrees, which surpasses the accuracy of the other biologically plausible neuromorphic approach for sound source localization.
arXiv Detail & Related papers (2020-07-07T08:22:56Z) - Beyond Dropout: Feature Map Distortion to Regularize Deep Neural
Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.