RAR-PINN algorithm for the data-driven vector-soliton solutions and
parameter discovery of coupled nonlinear equations
- URL: http://arxiv.org/abs/2205.10230v1
- Date: Fri, 29 Apr 2022 12:34:33 GMT
- Title: RAR-PINN algorithm for the data-driven vector-soliton solutions and
parameter discovery of coupled nonlinear equations
- Authors: Shu-Mei Qin, Min Li, Tao Xu, Shao-Qun Dong
- Abstract summary: This work aims to provide an effective deep learning framework to predict the vector-soliton solutions of coupled nonlinear equations and their interactions.
The method we propose is a physics-informed neural network (PINN) combined with a residual-based adaptive refinement (RAR-PINN) algorithm.
- Score: 6.340205794719235
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work aims to provide an effective deep learning framework to
predict the vector-soliton solutions of coupled nonlinear equations and their
interactions. The method we propose is a physics-informed neural network (PINN)
combined with a residual-based adaptive refinement (RAR-PINN) algorithm. Unlike
the traditional PINN algorithm, which samples collocation points randomly, the
RAR-PINN algorithm uses an adaptive point-fetching approach to improve training
efficiency for solutions with steep gradients. A series of comparative
experiments between the RAR-PINN and traditional PINN algorithms is carried out
on a coupled generalized nonlinear Schrödinger (CGNLS) equation as an example.
The results indicate that the RAR-PINN algorithm has a faster convergence rate
and better approximation ability, especially in modeling the shape-changing
vector-soliton interactions in coupled systems. Finally, the RAR-PINN method is
applied to the data-driven discovery of the CGNLS equation, showing that the
dispersion and nonlinearity coefficients can be well approximated.
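The adaptive point-fetching idea in the abstract can be sketched as a simple loop: draw a pool of candidate collocation points, evaluate the PDE residual of the current network on them, and add the points where the residual is largest to the training set. This is a minimal stdlib sketch, not the authors' implementation; `rar_refine`, the domain bounds, and the toy residual (which mimics a steep soliton front near x = 0) are illustrative assumptions.

```python
import math
import random

def rar_refine(train_pts, residual_fn, n_candidates=1000, n_add=10, seed=0):
    """One residual-based adaptive refinement step: keep the candidate
    points where the (absolute) PDE residual is largest."""
    rng = random.Random(seed)
    # Candidate collocation points drawn uniformly from a toy (x, t) domain.
    candidates = [(rng.uniform(-1, 1), rng.uniform(-1, 1))
                  for _ in range(n_candidates)]
    # Rank candidates by residual magnitude, largest first.
    ranked = sorted(candidates, key=lambda p: abs(residual_fn(p)), reverse=True)
    return train_pts + ranked[:n_add]

def toy_residual(p):
    # Stand-in for a trained PINN's PDE residual: large near x = 0,
    # as it would be along a steep soliton front.
    return math.exp(-50.0 * p[0] ** 2)

pts = rar_refine([], toy_residual)
print(len(pts), max(abs(x) for x, _ in pts))
```

The selected points cluster where the residual is steep, which is why the method helps precisely for the sharp-gradient soliton profiles the abstract mentions.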
Related papers
- Physics-informed neural networks for high-dimensional solutions and snaking bifurcations in nonlinear lattices [0.0]
This paper introduces a framework based on physics-informed neural networks (PINNs) for addressing key challenges in nonlinear lattices. We first employ PINNs to approximate solutions of nonlinear systems arising from lattice models, using the Levenberg-Marquardt algorithm. We then extend the method by coupling PINNs with a continuation approach to compute snaking bifurcation diagrams. For linear stability analysis, we adapt PINNs to compute eigenvectors, introducing output constraints to enforce positivity, in line with Sturm-Liouville theory.
arXiv Detail & Related papers (2025-07-13T20:41:55Z) - Accelerating Natural Gradient Descent for PINNs with Randomized Numerical Linear Algebra [0.0]
Natural Gradient Descent (NGD) has emerged as a promising optimization algorithm for training neural network-based solvers for partial differential equations (PDEs). We extend matrix-free NGD to broader classes of problems than previously considered and propose the use of randomized Nyström preconditioning to accelerate convergence of the inner CG solver. The resulting algorithm demonstrates substantial performance improvements over existing NGD-based methods on a range of PDE problems discretized using neural networks.
arXiv Detail & Related papers (2025-05-16T19:00:40Z) - Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z) - RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z) - Learning solutions of parametric Navier-Stokes with physics-informed
neural networks [0.3989223013441816]
We leverage Physics-Informed Neural Networks (PINNs) to learn solution functions of the parametric Navier-Stokes equations (NSE).
We consider the parameter(s) of interest as inputs of the PINNs along with the coordinates, and train the PINNs on numerical solutions of the parametric PDEs for instances of the parameters.
We show that our proposed approach yields optimized PINN models that learn the solution functions while ensuring that flow predictions comply with the conservation laws of mass and momentum.
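Treating the PDE parameter as an extra network input, as this summary describes, can be illustrated with a toy forward pass. This is a hypothetical sketch, not the paper's architecture: the 3-8-2 layer sizes, random placeholder weights, and the (u, p) output naming are all assumptions for illustration.

```python
import math
import random

random.seed(1)
# Hypothetical 3->8->2 MLP standing in for the PINN; input = (x, t, nu),
# output = (u, p). Weight values are random placeholders.
W1 = [[random.gauss(0, 1) for _ in range(8)] for _ in range(3)]
W2 = [[random.gauss(0, 1) for _ in range(2)] for _ in range(8)]

def pinn_forward(x, t, nu):
    # The PDE parameter nu enters as an ordinary network input, so one
    # trained model covers a whole family of equation instances.
    inp = [x, t, nu]
    h = [math.tanh(sum(inp[i] * W1[i][j] for i in range(3))) for j in range(8)]
    return [sum(h[j] * W2[j][k] for j in range(8)) for k in range(2)]

# The same network evaluated at two parameter values gives two different
# flow predictions, without retraining.
out_a = pinn_forward(0.5, 0.1, 0.01)
out_b = pinn_forward(0.5, 0.1, 0.1)
print(out_a, out_b)
```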
arXiv Detail & Related papers (2024-02-05T16:19:53Z) - HNS: An Efficient Hermite Neural Solver for Solving Time-Fractional
Partial Differential Equations [12.520882780496738]
We present the high-precision Hermite Neural Solver (HNS) for solving time-fractional partial differential equations.
The experimental results show that HNS has significantly improved accuracy and flexibility compared to existing L1-based methods.
arXiv Detail & Related papers (2023-10-07T12:44:47Z) - Efficient Interpretable Nonlinear Modeling for Multiple Time Series [5.448070998907116]
This paper proposes an efficient nonlinear modeling approach for multiple time series.
It incorporates nonlinear interactions among different time-series variables.
Experimental results show that the proposed algorithm improves the identification of the support of the VAR coefficients in a parsimonious manner.
arXiv Detail & Related papers (2023-09-29T11:42:59Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs in order to improve the stability of the training process.
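The stability benefit of implicit updates can be seen on a toy problem. This is a minimal sketch under stated assumptions, not the paper's method: a stiff 1-D quadratic loss stands in for an ill-conditioned PINN objective, and `lam` and `lr` are illustrative values only.

```python
# Toy loss L(w) = 0.5 * lam * w**2 with gradient lam * w.
lam, lr, steps = 100.0, 0.05, 50

w_exp = 1.0
w_imp = 1.0
for _ in range(steps):
    # Explicit SGD evaluates the gradient at the current iterate and
    # diverges here because lr * lam = 5 > 2.
    w_exp = w_exp - lr * (lam * w_exp)
    # Implicit SGD evaluates the gradient at the *new* iterate:
    # w_new = w - lr * lam * w_new, which for this quadratic has the
    # closed form below; in general a few inner fixed-point iterations
    # (or an inner solve) are used instead.
    w_imp = w_imp / (1.0 + lr * lam)

print(abs(w_exp), abs(w_imp))
```

The explicit iterate blows up while the implicit one contracts at every step regardless of the step size, which is the stability property motivating ISGD for stiff training problems.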
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler
Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - A deep branching solver for fully nonlinear partial differential
equations [0.1474723404975345]
We present a multidimensional deep learning implementation of a branching algorithm for the numerical solution of fully nonlinear PDEs.
This approach is designed to tackle functional nonlinearities involving gradient terms of any orders.
arXiv Detail & Related papers (2022-03-07T09:46:46Z) - Inverse Problem of Nonlinear Schrödinger Equation as Learning of
Convolutional Neural Network [5.676923179244324]
It is shown that one can obtain a relatively accurate estimate of the considered parameters using the proposed method.
It provides a natural framework for inverse problems of partial differential equations with deep learning.
arXiv Detail & Related papers (2021-07-19T02:54:37Z) - Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage-Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
arXiv Detail & Related papers (2020-10-26T11:31:08Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.