Deep learning soliton dynamics and complex potentials recognition for 1D
and 2D PT-symmetric saturable nonlinear Schr\"odinger equations
- URL: http://arxiv.org/abs/2310.02276v1
- Date: Fri, 29 Sep 2023 14:49:24 GMT
- Title: Deep learning soliton dynamics and complex potentials recognition for 1D
and 2D PT-symmetric saturable nonlinear Schr\"odinger equations
- Authors: Jin Song, Zhenya Yan
- Abstract summary: We extend physics-informed neural networks (PINNs) to learn data-driven stationary and non-stationary solitons of 1D and 2D saturable nonlinear Schr\"odinger equations.
- Score: 0.43512163406552
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we first extend physics-informed neural networks
(PINNs) to learn data-driven stationary and non-stationary solitons of 1D and
2D saturable nonlinear Schr\"odinger equations (SNLSEs) with two fundamental
PT-symmetric potentials (Scarf-II and periodic) in optical fibers. Second, we
study data-driven inverse problems that discover the PT-symmetric potential
functions themselves, rather than just potential parameters, in the 1D and 2D
SNLSEs. In particular, we propose a modified PINNs (mPINNs) scheme to identify
the PT potential functions of the 1D and 2D SNLSEs directly from solution
data. Inverse problems for 1D and 2D PT-symmetric potentials depending on the
propagation distance z are also investigated with the mPINNs method, and we
further identify the potential functions by applying PINNs to the stationary
equation of the SNLSE. Furthermore, two network structures are compared under
different parameter conditions such that the predicted PT potentials achieve
similarly high accuracy. These results illustrate that the established deep
neural networks can be successfully applied to 1D and 2D SNLSEs with high
accuracy. Moreover, the main factors affecting network performance are
discussed for the 1D and 2D PT Scarf-II and periodic potentials, including
activation functions, network structures, and sizes of the training data. In
particular, twelve different nonlinear activation functions, both periodic and
non-periodic, are analyzed in detail, leading to the conclusion that choosing
the activation function according to the form of the solution and the equation
usually achieves better results.
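As a rough illustration of the physics loss behind such a PINN, the sketch below evaluates the residual of a saturable NLS equation with a PT-symmetric Scarf-II potential on gridded data. The equation form, the saturation parameter `S`, and the potential coefficients `V0`, `W0` are our assumptions for illustration (a standard PT-symmetric Scarf-II form), not values from the paper, and a real PINN would differentiate the network output by automatic differentiation rather than finite differences.

```python
import numpy as np

def scarf2_potential(x, V0=1.0, W0=0.5):
    """Standard PT-symmetric Scarf-II potential V(x) + i W(x) (assumed form)."""
    sech = 1.0 / np.cosh(x)
    return V0 * sech**2 + 1j * W0 * sech * np.tanh(x)

def snlse_residual(psi, x, z, S=0.5):
    """Residual of i*psi_z + psi_xx + [V+iW]*psi + psi*|psi|^2/(1+S*|psi|^2),
    evaluated by finite differences on a (z, x) grid."""
    dx, dz = x[1] - x[0], z[1] - z[0]
    psi_z = np.gradient(psi, dz, axis=0)
    psi_xx = np.gradient(np.gradient(psi, dx, axis=1), dx, axis=1)
    V = scarf2_potential(x)[None, :]
    sat = np.abs(psi)**2 / (1.0 + S * np.abs(psi)**2)
    return 1j * psi_z + psi_xx + V * psi + sat * psi

x = np.linspace(-10.0, 10.0, 201)
z = np.linspace(0.0, 1.0, 51)
# Trial field: a sech-shaped beam (not an exact solution; residual is nonzero).
psi = (1.0 / np.cosh(x))[None, :] * np.exp(1j * 0.3 * z)[:, None]
res = snlse_residual(psi, x, z)
print(res.shape, np.mean(np.abs(res)**2))  # physics loss = mean squared residual
```

A PINN replaces the trial field with a network output and minimizes this mean squared residual (plus data and boundary terms) over training points.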
Related papers
- Physics-informed machine learning of redox flow battery based on a two-dimensional unit cell model [1.8147447763965252]
We present a physics-informed neural network (PINN) approach for predicting the performance of an all-vanadium redox flow battery.
Our numerical results show that the PINN is able to predict cell voltage correctly, but the prediction of potentials shows a constant-like shift.
arXiv Detail & Related papers (2023-05-31T22:06:30Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
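A toy comparison (our illustration, not the paper's algorithm) shows why implicit steps stabilize training. For the quadratic loss L(theta) = 0.5*lam*theta^2, the implicit update theta_{k+1} = theta_k - lr*grad(theta_{k+1}) has the closed form theta_k / (1 + lr*lam) and is stable for any lr > 0, whereas the explicit update theta_k * (1 - lr*lam) diverges once lr*lam > 2:

```python
# Implicit vs. explicit gradient steps on L(theta) = 0.5 * lam * theta**2.
# The implicit step is solved in closed form here; in general it requires an
# inner solve, which is the price paid for the extra stability.
lam, lr = 1.0, 5.0               # lr * lam > 2: explicit SGD is unstable
theta_imp = theta_exp = 2.0
for _ in range(20):
    theta_imp = theta_imp / (1.0 + lr * lam)   # implicit (ISGD-style) step
    theta_exp = theta_exp * (1.0 - lr * lam)   # explicit step
print(abs(theta_imp) < 1e-12, abs(theta_exp) > 1e10)  # → True True
```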
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Exploring Linear Feature Disentanglement For Neural Networks [63.20827189693117]
Non-linear activation functions, e.g., Sigmoid, ReLU, and Tanh, have achieved great success in neural networks (NNs).
Due to the complex non-linear characteristics of samples, the objective of these activation functions is to project samples from their original feature space to a linearly separable feature space.
This phenomenon ignites our interest in exploring whether all features need to be transformed by all non-linear functions in current typical NNs.
arXiv Detail & Related papers (2022-03-22T13:09:17Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
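The identity behind this trick can be checked numerically. The sketch below is our illustration, not the paper's code: for the Gaussian-smoothed function f_s(x) = E[f(x + s*eps)] with eps ~ N(0,1), Stein's identity gives f_s''(x) = E[f(x + s*eps) * (eps^2 - 1)] / s^2, so second derivatives need only forward evaluations of f, no back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_second_derivative(f, x, s=0.5, n=200_000):
    """Monte Carlo estimate of f_s''(x) via Stein's identity."""
    eps = rng.standard_normal(n)
    return np.mean(f(x + s * eps) * (eps**2 - 1.0)) / s**2

f = lambda x: x**3   # here f_s(x) = x^3 + 3*s^2*x, so f_s''(x) = 6x exactly
est = smoothed_second_derivative(f, 1.0)
print(est)           # close to 6.0
```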
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions, and it learns the mapping from the trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can approximate the ground-truth derivatives well by properly tuning the complexity of the function library.
arXiv Detail & Related papers (2021-06-08T08:04:47Z)
- Learning Functional Priors and Posteriors from Data and Physics [3.537267195871802]
We develop a new framework based on deep neural networks to be able to extrapolate in space-time using historical data.
We employ physics-informed generative adversarial networks (PI-GANs) to learn a functional prior.
In the second stage, we employ the Hamiltonian Monte Carlo (HMC) method to estimate the posterior in the latent space of the PI-GANs.
arXiv Detail & Related papers (2021-06-08T03:03:24Z) - Symmetry-via-Duality: Invariant Neural Network Densities from
Parameter-Space Correlators [0.0]
Symmetries of network densities may be determined via dual computations of network correlation functions.
We demonstrate that the amount of symmetry in the initial density affects the accuracy of networks trained on Fashion-MNIST.
arXiv Detail & Related papers (2021-06-01T18:00:06Z) - On the eigenvector bias of Fourier feature networks: From regression to
solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
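A minimal sketch of a multi-scale random Fourier feature embedding of this kind (our illustration; the scales and sizes below are arbitrary choices, not values from the paper): input coordinates are mapped to [cos(2*pi*B x), sin(2*pi*B x)] with the frequency matrix B drawn at several scales, so downstream layers see both low- and high-frequency views of the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def multiscale_fourier_features(x, scales=(1.0, 10.0, 50.0), m=32):
    """x: (n, d) coordinates -> (n, 2*m*len(scales)) feature embedding."""
    feats = []
    for s in scales:
        B = rng.standard_normal((x.shape[1], m)) * s   # frequencies ~ N(0, s^2)
        proj = 2.0 * np.pi * x @ B
        feats += [np.cos(proj), np.sin(proj)]
    return np.concatenate(feats, axis=1)

x = np.linspace(0.0, 1.0, 100)[:, None]   # 1D spatial coordinates
phi = multiscale_fourier_features(x)
print(phi.shape)                          # → (100, 192)
```

The embedding phi, rather than the raw coordinate x, is fed to the network, which mitigates the spectral bias toward low frequencies.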
arXiv Detail & Related papers (2020-12-18T04:19:30Z)
- Data-driven rogue waves and parameter discovery in the defocusing NLS equation with a potential using the PINN deep learning [7.400475825464313]
We use the multi-layer PINN deep learning method to study data-driven rogue wave solutions of the defocusing nonlinear Schr\"odinger (NLS) equation with a time-dependent potential.
The results will be useful for further study of the rogue wave solutions of the defocusing NLS equation with a potential via deep learning neural networks.
arXiv Detail & Related papers (2020-12-18T00:09:21Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.