Inverse Problem of Nonlinear Schr\"odinger Equation as Learning of
Convolutional Neural Network
- URL: http://arxiv.org/abs/2107.08593v1
- Date: Mon, 19 Jul 2021 02:54:37 GMT
- Title: Inverse Problem of Nonlinear Schr\"odinger Equation as Learning of
Convolutional Neural Network
- Authors: Yiran Wang, Zhen Li
- Abstract summary: It is shown that one can obtain a relatively accurate estimate of the considered parameters using the proposed method.
The approach provides a natural framework for solving inverse problems of partial differential equations with deep learning.
- Score: 5.676923179244324
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this work, we use an explainable convolutional neural network (NLS-Net) to
solve an inverse problem of the nonlinear Schr\"odinger equation, which is
widely used in fiber-optic communications. The landscape and minimizers of the
non-convex loss function of the learning problem are studied empirically. This
provides guidance for choosing the hyper-parameters of the method. The
estimation error of the optimal solution is discussed in terms of the
expressive power of the NLS-Net and the data. We also compare the performance
of several training algorithms that are popular in deep learning. It is shown
that one can obtain a relatively accurate estimate of the considered
parameters using the proposed method. The study provides a natural framework
for solving inverse problems of nonlinear partial differential equations with
deep learning.
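To make the connection concrete: in the lossless case, the fiber-optic nonlinear Schr\"odinger equation reads $\partial_z A = -\frac{i\beta_2}{2}\partial_t^2 A + i\gamma |A|^2 A$, and the split-step Fourier method alternates a dispersion step (a convolution, diagonal in the Fourier domain) with a pointwise Kerr nonlinearity, mirroring the convolution-plus-activation structure of a CNN. The following is a minimal sketch of this idea, not the paper's exact NLS-Net: the unknown fiber parameters $\beta_2$ and $\gamma$ are treated as trainable weights of a differentiable split-step model and recovered by gradient descent; all names, values, and the training setup are illustrative.

```python
import torch

n_t, n_steps, dz = 256, 20, 0.05            # time samples, split-step layers, step size
t = torch.linspace(-10.0, 10.0, n_t)
omega = 2 * torch.pi * torch.fft.fftfreq(n_t, d=float(t[1] - t[0]))

def propagate(a0, beta2, gamma):
    """Split-step Fourier propagation; each step acts like one CNN layer:
    a linear 'convolution' (dispersion) followed by a pointwise nonlinearity (Kerr)."""
    a = a0
    for _ in range(n_steps):
        a = torch.fft.ifft(torch.fft.fft(a) * torch.exp(0.5j * beta2 * omega**2 * dz))
        a = a * torch.exp(1j * gamma * a.abs() ** 2 * dz)
    return a

# synthetic "measured" output from ground-truth parameters (illustrative values)
a_in = torch.exp(-t**2 / 2).to(torch.complex64)
a_out = propagate(a_in, torch.tensor(-1.0), torch.tensor(1.5))

# inverse problem: recover beta2 and gamma as trainable weights
beta2 = torch.nn.Parameter(torch.tensor(-0.5))
gamma = torch.nn.Parameter(torch.tensor(1.0))
opt = torch.optim.Adam([beta2, gamma], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = (propagate(a_in, beta2, gamma) - a_out).abs().pow(2).mean()
    loss.backward()
    opt.step()
print(float(beta2), float(gamma))           # ideally approaches -1.0 and 1.5
```

Because the forward model is differentiable end to end, different optimizers from deep learning (SGD, Adam, and so on) can be compared on the same loss, which is the kind of study the abstract describes.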
Related papers
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model trained with error backpropagation, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through tests on elliptic partial differential equations and compared with the well-known Physics-Informed Neural Network (PINN) method.
arXiv Detail & Related papers (2024-06-06T05:31:45Z)
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
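For orientation, a minimal sketch of the classical walk-on-spheres estimator that NWoS builds on, here as plain Monte Carlo for the Laplace equation on the unit disk with illustrative boundary data; the domain, data, and sample counts are assumptions, and no neural network is involved:

```python
import numpy as np

rng = np.random.default_rng(0)

def dist_to_boundary(x):
    return 1.0 - np.linalg.norm(x)        # unit disk: distance to the circle

def g(x):
    return x[0] * x[1]                    # harmonic boundary data, u(x, y) = x*y

def wos(x, eps=1e-4):
    """One walk: jump uniformly on the largest inscribed circle until
    within eps of the boundary, then read off the boundary value."""
    x = np.array(x, dtype=float)
    while (r := dist_to_boundary(x)) > eps:
        theta = rng.uniform(0.0, 2 * np.pi)
        x = x + r * np.array([np.cos(theta), np.sin(theta)])
    return g(x)

estimate = np.mean([wos([0.3, 0.2]) for _ in range(5000)])
print(estimate)   # approx u(0.3, 0.2) = 0.06 for this harmonic boundary data
```

- Newton Informed Neural Operator for Computing Multiple Solutions of Nonlinear Partial Differential Equations [3.8916312075738273]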
We propose a novel approach called the Newton Informed Neural Operator to tackle nonlinearities.
Our method builds on classical Newton methods, which address well-posed problems, and efficiently learns multiple solutions in a single learning process.
arXiv Detail & Related papers (2024-05-23T01:52:54Z)
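For context, a sketch of the classical Newton step the method builds on, in our notation rather than the paper's: writing the PDE as $F(u)=0$, each iteration solves a linearized problem,

\[
F'(u_k)\,\delta u_k = -F(u_k), \qquad u_{k+1} = u_k + \delta u_k,
\]

and the neural operator is trained to approximate this update map, so that different initial guesses can converge to different solutions.

- Physics-informed Neural Networks approach to solve the Blasius function [0.0]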
This paper presents a physics-informed neural network (PINN) approach to solve the Blasius function.
It is seen that this method produces results that are on par with numerical and conventional methods.
arXiv Detail & Related papers (2022-12-31T03:14:42Z)
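A minimal PINN sketch for the Blasius problem $f''' + \tfrac{1}{2} f f'' = 0$ with $f(0)=0$, $f'(0)=0$, $f'(\eta_{\max})\approx 1$; the architecture, sampling, and loss weights below are illustrative assumptions, not the paper's exact setup:

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def derivatives(eta):
    """f and its first three derivatives via automatic differentiation."""
    f = net(eta)
    f1 = torch.autograd.grad(f, eta, torch.ones_like(f), create_graph=True)[0]
    f2 = torch.autograd.grad(f1, eta, torch.ones_like(f1), create_graph=True)[0]
    f3 = torch.autograd.grad(f2, eta, torch.ones_like(f2), create_graph=True)[0]
    return f, f1, f2, f3

eta_max = 8.0
for step in range(2000):
    # interior residual of f''' + 0.5 * f * f'' at random collocation points
    eta = (torch.rand(128, 1) * eta_max).requires_grad_(True)
    f, f1, f2, f3 = derivatives(eta)
    residual = (f3 + 0.5 * f * f2).pow(2).mean()

    # boundary conditions at eta = 0 and at the truncated far field
    zero = torch.zeros(1, 1, requires_grad=True)
    far = torch.full((1, 1), eta_max, requires_grad=True)
    f0, f0p, _, _ = derivatives(zero)
    _, fmp, _, _ = derivatives(far)
    bc = f0.pow(2).mean() + f0p.pow(2).mean() + (fmp - 1).pow(2).mean()

    opt.zero_grad()
    (residual + bc).backward()
    opt.step()
```

- NeuralEF: Deconstructing Kernels by Deep Neural Networks [47.54733625351363]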
Traditional nonparametric solutions based on the Nystr\"om formula suffer from scalability issues.
Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions.
We show that these problems can be fixed by using a new series of objective functions that generalizes to the space of supervised and unsupervised learning problems.
arXiv Detail & Related papers (2022-04-30T05:31:07Z)
- A deep branching solver for fully nonlinear partial differential equations [0.1474723404975345]
We present a multidimensional deep learning implementation of a branching algorithm for the numerical solution of fully nonlinear PDEs.
This approach is designed to tackle functional nonlinearities involving gradient terms of any order.
arXiv Detail & Related papers (2022-03-07T09:46:46Z)
- Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-26T11:31:08Z)
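For background, a sketch of the LISTA-style unrolled iteration that NLISTA extends to nonlinear measurements; this shows the linear variant only, and all dimensions, data, and the per-step threshold parameterization are illustrative assumptions:

```python
import torch

m, n, K = 20, 50, 8                        # measurements, signal size, unrolled steps

def soft(x, theta):
    """Soft-thresholding (shrinkage) nonlinearity."""
    return torch.sign(x) * torch.relu(x.abs() - theta)

class LISTA(torch.nn.Module):
    """Unrolled iterative shrinkage: x <- soft(y @ W1.T + x @ W2.T, theta_k)."""
    def __init__(self):
        super().__init__()
        self.W1 = torch.nn.Parameter(0.1 * torch.randn(n, m))
        self.W2 = torch.nn.Parameter(0.1 * torch.randn(n, n))
        self.theta = torch.nn.Parameter(torch.full((K,), 0.1))

    def forward(self, y):
        x = torch.zeros(y.shape[0], n)
        for k in range(K):
            x = soft(y @ self.W1.T + x @ self.W2.T, self.theta[k])
        return x

# toy training data: sparse signals, linear measurements with small noise
A = torch.randn(m, n) / m**0.5
x_true = torch.randn(256, n) * (torch.rand(256, n) < 0.1)
y = x_true @ A.T + 0.01 * torch.randn(256, m)

model = LISTA()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss = (model(y) - x_true).pow(2).mean()
    loss.backward()
    opt.step()
```

- Deep neural network for solving differential equations motivated by Legendre-Galerkin approximation [16.64525769134209]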
We explore the performance and accuracy of various neural architectures on both linear and nonlinear differential equations.
We implement a novel Legendre-Galerkin Deep Neural Network (LGNet) algorithm to predict solutions to differential equations.
arXiv Detail & Related papers (2020-10-24T20:25:09Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
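For context, a minimal statement of the control-variate identity the paper builds on, in standard notation rather than the paper's parameterization: if $g$ is a learned approximation of the integrand $f$ with a known integral $G = \int g(x)\,dx$, then

\[
\int f(x)\,dx = G + \int \big(f(x) - g(x)\big)\,dx \approx G + \frac{1}{N}\sum_{i=1}^{N}\frac{f(X_i)-g(X_i)}{p(X_i)}, \qquad X_i \sim p,
\]

so the Monte Carlo noise is driven by the residual $f - g$ rather than by $f$ itself, which is why a good learned $g$ reduces variance.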
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.