Fiber Transmission Model with Parameterized Inputs based on GPT-PINN Neural Network
- URL: http://arxiv.org/abs/2408.09947v1
- Date: Mon, 19 Aug 2024 12:37:15 GMT
- Title: Fiber Transmission Model with Parameterized Inputs based on GPT-PINN Neural Network
- Authors: Yubin Zang, Boyu Hua, Zhipeng Lin, Fangzheng Zhang, Simin Li, Zuxing Zhang, Hongwei Chen
- Abstract summary: A novel principle-driven fiber transmission model for short-distance transmission is put forward.
On-off keying signals with bit rates ranging from 2 Gbps to 50 Gbps are used to demonstrate the fidelity of the model.
- Score: 5.687110567253701
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this manuscript, a novel principle-driven fiber transmission model for short-distance transmission with parameterized inputs is put forward. By building on the previously proposed principle-driven fiber model, applying the reduced basis expansion method, and transforming the parameterized inputs into parameterized coefficients of the Nonlinear Schrödinger Equation, universal solutions for inputs corresponding to different bit rates can all be obtained without re-training the whole model. Once adopted, this model offers prominent advantages in both computational efficiency and physical interpretability. Moreover, it can still be trained effectively without transmitted signals collected in advance. On-off keying signals with bit rates ranging from 2 Gbps to 50 Gbps are used to demonstrate the fidelity of the model.
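The abstract combines two ideas: a principle-driven (physics-informed) loss built from the Nonlinear Schrödinger Equation, and a GPT-PINN-style reduced-basis expansion in which networks pre-trained at sampled transmission conditions act as fixed "eigen" solutions, so a new parameterized input only requires fitting the combination coefficients. The following is a minimal PyTorch sketch of that scheme in normalized units; the class names, network sizes, NLSE coefficients, and the coefficient-only training loop are illustrative assumptions rather than the authors' implementation, and the term that injects the actual launch signal into the loss is omitted for brevity.

```python
import torch

class BasisPINN(torch.nn.Module):
    """One full PINN trained offline at a sampled transmission condition.
    It maps (z, t) to (Re A, Im A); its weights stay frozen afterwards."""
    def __init__(self, width=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, width), torch.nn.Tanh(),
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 2),
        )

    def forward(self, z, t):
        return self.net(torch.cat([z, t], dim=-1))

class ReducedBasisModel(torch.nn.Module):
    """Reduced-basis surrogate: A(z, t) ~= sum_n c_n * A_n(z, t).
    Only the coefficients c_n are trainable for a new input / bit rate."""
    def __init__(self, basis_nets):
        super().__init__()
        self.basis = torch.nn.ModuleList(basis_nets)
        for p in self.basis.parameters():
            p.requires_grad_(False)          # eigen solutions are kept fixed
        n = len(basis_nets)
        self.coeff = torch.nn.Parameter(torch.full((n,), 1.0 / n))

    def forward(self, z, t):
        fields = torch.stack([net(z, t) for net in self.basis], dim=0)  # (N, B, 2)
        return torch.einsum("n,nbc->bc", self.coeff, fields)

def nlse_residual(model, z, t, beta2=-1.0, gamma=1.0, alpha=0.0):
    """Mean squared residual of the normalized NLSE
        dA/dz + (alpha/2) A + i (beta2/2) d^2A/dt^2 - i gamma |A|^2 A = 0,
    evaluated with autograd at collocation points (z, t)."""
    z = z.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    out = model(z, t)                        # (B, 2): real and imaginary parts
    A = torch.complex(out[:, 0], out[:, 1])

    def d(f, x):                             # first derivative df/dx via autograd
        return torch.autograd.grad(f, x, torch.ones_like(f),
                                   create_graph=True)[0].squeeze(-1)

    A_z = torch.complex(d(out[:, 0], z), d(out[:, 1], z))
    A_tt = torch.complex(d(d(out[:, 0], t), t), d(d(out[:, 1], t), t))
    res = A_z + 0.5 * alpha * A + 0.5j * beta2 * A_tt - 1j * gamma * A.abs() ** 2 * A
    return (res.abs() ** 2).mean()

# Offline stage (assumed already done): each BasisPINN would be trained as a
# full PINN at one sampled bit rate / launch signal; here they are untrained
# stand-ins so the sketch runs end to end.
basis = [BasisPINN() for _ in range(4)]
model = ReducedBasisModel(basis)

# Online stage for a new parameterized input: fit only the N coefficients
# against the physics residual -- no measured transmitted signals are needed.
# A complete model would also penalize mismatch with the launch condition
# at z = 0; that boundary term is omitted here for brevity.
optimizer = torch.optim.Adam([model.coeff], lr=1e-2)
z = torch.rand(256, 1)                      # normalized propagation distance
t = torch.rand(256, 1) * 2.0 - 1.0          # normalized retarded time
for step in range(200):
    optimizer.zero_grad()
    loss = nlse_residual(model, z, t)
    loss.backward()
    optimizer.step()
```

Because the basis networks stay frozen, the per-input optimization touches only a handful of scalars, which is where a reduced-basis scheme of this kind gains its computational efficiency over re-training a full model.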
Related papers
- Learning from Scratch: Structurally-masked Transformer for Next Generation Lib-free Simulation [3.7467132954493536]
This paper proposes a neural framework for power and timing prediction of multi-stage data paths. To the best of our knowledge, this is the first language-based, netlist-aware neural network designed explicitly for standard cells.
arXiv Detail & Related papers (2025-07-23T10:46:25Z)
- Neural Network-Based Parameter Estimation for Non-Autonomous Differential Equations with Discontinuous Signals [0.0]
We propose a novel parameter estimation method utilizing functional approximations with artificial neural networks. Our approach, termed Harmonic Approximation of Discontinuous External Signals using Neural Networks (HADES-NN), operates in two iterated stages.
arXiv Detail & Related papers (2025-07-08T00:42:42Z)
- Fusing Global and Local: Transformer-CNN Synergy for Next-Gen Current Estimation [4.945568106952893]
This paper presents a hybrid model combining Transformer and CNN for predicting the current waveform in signal lines.
It replaces the complex Newton iteration process used in traditional SPICE simulations, leveraging the powerful sequence modeling capabilities of the Transformer framework.
Experimental results demonstrate that, compared to traditional SPICE simulations, the proposed algorithm achieves an error of only 0.0098.
arXiv Detail & Related papers (2025-04-08T19:42:10Z)
- Principle Driven Parameterized Fiber Model based on GPT-PINN Neural Network [5.452279754228114]
We propose a principle-driven parameterized fiber model in this manuscript.
This model breaks down the predicted NLSE solution for one set of transmission conditions into a linear combination of several eigen solutions.
The model not only possesses strong physical interpretability but also achieves higher computational efficiency.
arXiv Detail & Related papers (2024-08-19T12:44:00Z)
- TGPT-PINN: Nonlinear model reduction with transformed GPT-PINNs [1.6093211760643649]
We introduce the Transformed Generative Pre-Trained Physics-Informed Neural Networks (TGPT-PINN).
TGPT-PINN is a network-of-networks design achieving snapshot-based model reduction.
We demonstrate this new capability for nonlinear model reduction in the PINN framework on several non-trivial partial differential equations.
arXiv Detail & Related papers (2024-03-06T04:49:18Z)
- Improving Transferability of Adversarial Examples via Bayesian Attacks [84.90830931076901]
We introduce a novel extension by incorporating the Bayesian formulation into the model input as well, enabling the joint diversification of both the model input and model parameters.
Our method achieves a new state-of-the-art on transfer-based attacks, improving the average success rate on ImageNet and CIFAR-10 by 19.14% and 2.08%, respectively.
arXiv Detail & Related papers (2023-07-21T03:43:07Z)
- Joint Bayesian Inference of Graphical Structure and Parameters with a Single Generative Flow Network [59.79008107609297]
We propose in this paper to approximate the joint posterior over the structure and parameters of a Bayesian Network.
We use a single GFlowNet whose sampling policy follows a two-phase process.
Since the parameters are included in the posterior distribution, this leaves more flexibility for the local probability models.
arXiv Detail & Related papers (2023-05-30T19:16:44Z)
- On feedforward control using physics-guided neural networks: Training cost regularization and optimized initialization [0.0]
Performance of model-based feedforward controllers is typically limited by the accuracy of the inverse system dynamics model.
This paper proposes a regularization method via identified physical parameters.
It is validated on a real-life industrial linear motor, where it delivers better tracking accuracy and extrapolation.
arXiv Detail & Related papers (2022-01-28T12:51:25Z)
- Physics-constrained deep neural network method for estimating parameters in a redox flow battery [68.8204255655161]
We present a physics-constrained deep neural network (PCDNN) method for parameter estimation in the zero-dimensional (0D) model of the vanadium redox flow battery (VRFB).
We show that the PCDNN method can estimate model parameters for a range of operating conditions and improve the 0D model prediction of voltage.
We also demonstrate that the PCDNN approach has improved generalization ability for estimating parameter values under operating conditions not used in training.
arXiv Detail & Related papers (2021-06-21T23:42:58Z)
- Adaptive conversion of real-valued input into spike trains [91.3755431537592]
This paper presents a biologically plausible method for converting real-valued input into spike trains for processing with spiking neural networks.
The proposed method mimics the adaptive behaviour of retinal ganglion cells and allows input neurons to adapt their response to changes in the statistics of the input.
arXiv Detail & Related papers (2021-04-12T12:33:52Z)
- NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity [28.201670958962453]
Normalizing flows (NFs) have become a prominent method for deep generative models, allowing for analytic probability density estimation and efficient synthesis.
We present an alternative parameterization scheme called NanoFlow, which uses a single neural density estimator to model multiple transformation stages.
arXiv Detail & Related papers (2020-06-11T09:35:00Z)
- Flexible Transmitter Network [84.90891046882213]
Current neural networks are mostly built upon the MP model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons.
We propose the Flexible Transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity.
We present the Flexible Transmitter Network (FTNet), which is built on the most common fully-connected feed-forward architecture.
arXiv Detail & Related papers (2020-04-08T06:55:12Z)
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)