Physics-informed Neural Network for Nonlinear Dynamics in Fiber Optics
- URL: http://arxiv.org/abs/2109.00526v1
- Date: Wed, 1 Sep 2021 12:19:32 GMT
- Title: Physics-informed Neural Network for Nonlinear Dynamics in Fiber Optics
- Authors: Xiaotian Jiang, Danshi Wang, Qirui Fan, Min Zhang, Chao Lu, and Alan
Pak Tao Lau
- Abstract summary: A physics-informed neural network (PINN) that combines deep learning with physics is studied to solve the nonlinear Schrödinger equation for learning nonlinear dynamics in fiber optics.
The PINN is not only an effective partial differential equation solver, but also a promising technique for advancing scientific computing and automatic modeling in fiber optics.
- Score: 10.335960060544904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A physics-informed neural network (PINN) that combines deep learning with
physics is studied to solve the nonlinear Schrödinger equation for learning
nonlinear dynamics in fiber optics. We carry out a systematic investigation and
comprehensive verification on PINN for multiple physical effects in optical
fibers, including dispersion, self-phase modulation, and higher-order nonlinear
effects. Moreover, both a special case (soliton propagation) and a general
case (multi-pulse propagation) are investigated and realized with the PINN. In
previous studies, the PINN was mainly effective for a single scenario. To
overcome this limitation, the physical parameters (pulse peak power and the
amplitudes of sub-pulses) are embedded as additional input parameter
controllers, which allow the PINN to learn the physical constraints of
different scenarios and generalize well across them. Furthermore, the PINN
achieves better performance than a purely data-driven neural network while
using much less data, and its computational complexity (in terms of the number
of multiplications) is much lower than that of the split-step Fourier method.
The results reported here show that the PINN is not only an effective partial
differential equation solver, but also a promising technique for advancing
scientific computing and automatic modeling in fiber optics.
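As a rough illustration of the ingredients described above (not the authors' code), the sketch below shows a minimal PyTorch PINN for a lossless scalar nonlinear Schrödinger equation, i A_z - (beta2/2) A_tt + gamma |A|^2 A = 0 under one common sign convention, with the pulse peak power P0 embedded as an extra network input in the spirit of the parameter controllers above. The PyTorch choice, all names, and the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class PINN(nn.Module):
    """MLP mapping (z, t, P0) -> (u, v), the real and imaginary parts
    of the field envelope A(z, t); the peak power P0 is the extra
    'parameter controller' input."""
    def __init__(self, width=64, depth=5):
        super().__init__()
        layers, d_in = [], 3  # inputs: z, t, P0
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.Tanh()]
            d_in = width
        layers.append(nn.Linear(width, 2))
        self.net = nn.Sequential(*layers)

    def forward(self, z, t, p0):
        return self.net(torch.cat([z, t, p0], dim=-1))

def nlse_residual(model, z, t, p0, beta2=-1.0, gamma=1.0):
    """PDE residual of i*A_z - (beta2/2)*A_tt + gamma*|A|^2*A = 0,
    split into real (f_u) and imaginary (f_v) parts via autograd."""
    z, t = z.requires_grad_(True), t.requires_grad_(True)
    uv = model(z, t, p0)
    u, v = uv[:, :1], uv[:, 1:]
    g = lambda y, x: torch.autograd.grad(
        y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]
    u_tt, v_tt = g(g(u, t), t), g(g(v, t), t)
    mod2 = u ** 2 + v ** 2
    f_u = -g(v, z) - 0.5 * beta2 * u_tt + gamma * mod2 * u  # real part
    f_v =  g(u, z) - 0.5 * beta2 * v_tt + gamma * mod2 * v  # imaginary part
    return f_u, f_v
```

A full training loop would add data terms pinning the network output at z = 0 to the known launch pulse and sample P0 over a range of scenarios, which is what would give the parameter-controller generalization the abstract describes.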
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gains on PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Spectral Informed Neural Network: An Efficient and Low-Memory PINN [3.8534287291074354]
We propose a spectral-based neural network that substitutes the differential operator with a multiplication.
Compared to PINNs, our approach requires less memory and shorter training time.
We provide two strategies for training networks using their spectral information.
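The "multiplication" here is the classical pseudo-spectral idea: on a periodic grid, differentiation becomes pointwise multiplication in Fourier space. A minimal PyTorch sketch of that substitution (illustrative, not the authors' implementation):

```python
import torch

def second_derivative_spectral(a, dt):
    """d^2 a / dt^2 on a periodic grid: in Fourier space the second
    derivative is just pointwise multiplication by -(omega^2)."""
    n = a.shape[-1]
    omega = 2 * torch.pi * torch.fft.fftfreq(n, d=dt)  # angular frequencies
    return torch.fft.ifft(-(omega ** 2) * torch.fft.fft(a))
```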
arXiv Detail & Related papers (2024-08-29T10:21:00Z)
- Nonlinear Schrödinger Network [0.8249694498830558]
Deep neural networks (DNNs) have achieved exceptional performance across various fields by learning complex nonlinear mappings from large-scale datasets.
However, such models are computationally costly and offer limited interpretability; to address these issues, hybrid approaches that integrate physics with AI are gaining interest.
This paper introduces a novel physics-based AI model called the "Nonlinear Schrödinger Network".
arXiv Detail & Related papers (2024-07-19T17:58:00Z)
- A Dimension-Augmented Physics-Informed Neural Network (DaPINN) with High Level Accuracy and Efficiency [0.20391237204597357]
Physics-informed neural networks (PINNs) have been widely applied in different fields.
We propose a novel dimension-augmented physics-informed neural network (DaPINN), which simultaneously and significantly improves the accuracy and efficiency of the PINN.
arXiv Detail & Related papers (2022-10-19T15:54:37Z)
- Physics-aware Differentiable Discrete Codesign for Diffractive Optical Neural Networks [12.952987240366781]
This work proposes a novel device-to-system hardware-software codesign framework that enables efficient training of diffractive optical neural networks (DONNs).
Gumbel-Softmax is employed to enable differentiable discrete mapping from real-world device parameters into the forward function of DONNs.
The results have demonstrated that our proposed framework offers significant advantages over conventional quantization-based methods.
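Gumbel-Softmax is a standard reparameterization for differentiably sampling from a categorical distribution; a minimal PyTorch sketch of how it can map trainable scores to one of a few quantized device levels (the levels and all names are illustrative assumptions, not the paper's API):

```python
import torch
import torch.nn.functional as F

def select_device_level(logits, levels, tau=1.0):
    """Differentiable pick from a discrete set of device levels:
    straight-through Gumbel-Softmax returns a (nearly) one-hot weight
    vector in the forward pass while gradients flow through the soft
    relaxation, so discrete hardware parameters stay trainable."""
    w = F.gumbel_softmax(logits, tau=tau, hard=True)
    return w @ levels

# hypothetical usage: 8 quantized phase levels for one DONN pixel
levels = torch.linspace(0.0, 2.0 * torch.pi, 8)
logits = torch.zeros(8, requires_grad=True)
phase = select_device_level(logits, levels)  # differentiable w.r.t. logits
```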
arXiv Detail & Related papers (2022-09-28T17:13:28Z)
- Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z)
- Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial physics-informed neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the AdaBoost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
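The paper's exact update rule is not reproduced here, but an AdaBoost-flavored point-weighting step for PINN collocation points might look like the following sketch (the exponential rule and all names are assumptions):

```python
import torch

def update_point_weights(weights, residuals, eta=0.5):
    """AdaBoost-style reweighting of collocation points: points where
    the PDE residual is large get exponentially boosted weights, so
    subsequent training focuses on the regions the PINN fits worst."""
    err = residuals.detach().abs()
    err = err / (err.mean() + 1e-12)           # normalize residual magnitudes
    weights = weights * torch.exp(eta * err)   # boost hard points
    return weights / weights.sum()             # keep a distribution

# weighted residual loss for the next epoch:
# loss = (weights * residuals ** 2).sum()
```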
arXiv Detail & Related papers (2022-05-18T06:50:44Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher frequencies.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- On the eigenvector bias of Fourier feature networks: From regression to solving multi-scale PDEs with physics-informed neural networks [0.0]
We show that physics-informed neural networks (PINNs) struggle in cases where the target functions to be approximated exhibit high-frequency or multi-scale features.
We construct novel architectures that employ multi-scale random Fourier features and justify how such coordinate embedding layers can lead to robust and accurate PINN models.
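A minimal sketch of such a multi-scale random Fourier feature embedding in PyTorch (names and scale choices are illustrative; the paper's architectural details may differ):

```python
import torch
import torch.nn as nn

class MultiScaleFourierFeatures(nn.Module):
    """Coordinate embedding x -> [sin(2*pi*xB_s), cos(2*pi*xB_s)] with
    fixed random matrices B_s drawn at several scales sigma_s, so a
    downstream MLP can represent both slow and fast solution components."""
    def __init__(self, in_dim, n_features=64, sigmas=(1.0, 10.0, 50.0)):
        super().__init__()
        for i, s in enumerate(sigmas):
            # fixed (non-trainable) random projection at scale s
            self.register_buffer(f"B{i}", torch.randn(in_dim, n_features) * s)

    def forward(self, x):
        feats = []
        for B in self.buffers():
            proj = 2 * torch.pi * (x @ B)
            feats += [torch.sin(proj), torch.cos(proj)]
        return torch.cat(feats, dim=-1)  # fed to the PINN body
```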
arXiv Detail & Related papers (2020-12-18T04:19:30Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
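In practice such a min-max game is typically trained by gradient descent-ascent; the sketch below shows one generic GDA step, with `objective` standing in as a hypothetical placeholder for the paper's min-max SEM loss:

```python
import torch

def gda_step(f_net, u_net, batch, opt_f, opt_u, objective):
    """One gradient descent-ascent step for min_f max_u L(f, u):
    the adversarial 'critic' u ascends the objective while the model f
    descends it. `objective` is a caller-supplied placeholder, not the
    paper's exact SEM estimator."""
    loss = objective(f_net, u_net, batch)
    opt_u.zero_grad()
    (-loss).backward()        # ascent step on u
    opt_u.step()

    loss = objective(f_net, u_net, batch)  # recompute after the u update
    opt_f.zero_grad()
    loss.backward()           # descent step on f
    opt_f.step()
```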
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)