Revisiting the double-well problem by deep learning with a hybrid network
- URL: http://arxiv.org/abs/2104.14657v1
- Date: Sun, 25 Apr 2021 07:51:43 GMT
- Title: Revisiting the double-well problem by deep learning with a hybrid network
- Authors: Shurui Li, Jianqin Xu and Jing Qian
- Abstract summary: We propose a novel hybrid network that integrates two different kinds of neural networks: an LSTM and a ResNet.
Such a hybrid network can be applied to solving cooperative dynamics in systems with fast spatial or temporal modulations.
- Score: 7.308730248177914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving physical problems by deep learning is accurate and efficient, owing
largely to the use of an elaborate neural network. We propose a novel hybrid
network that integrates two different kinds of neural networks, an LSTM and a
ResNet, in order to overcome the difficulty of solving the strongly oscillating
dynamics of a system's time evolution. Taking the double-well model as an
example, we show that our method benefits from pre-learning and verifying the
periodicity of the oscillation frequency with the LSTM network while
simultaneously making a high-fidelity prediction of the whole dynamics of the
system with the ResNet, which cannot be achieved with a single network. Such a
hybrid network can be applied to solving cooperative dynamics in systems with
fast spatial or temporal modulations, and is promising for realistic
oscillation calculations under experimental conditions.
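The abstract does not come with code, but the architecture lends itself to a short sketch. Below is a minimal PyTorch illustration of the kind of hybrid it describes: an LSTM branch summarizes a window of the oscillating signal (the periodicity pre-learning step), and a stack of fully connected residual blocks maps that summary plus a query time to the predicted state. All module names, sizes, and the fusion scheme are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Fully connected residual block (a 'ResNet-style' unit)."""
    def __init__(self, width: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, width),
        )

    def forward(self, x):
        return torch.relu(x + self.net(x))

class HybridNet(nn.Module):
    """Hypothetical LSTM + ResNet hybrid.

    The LSTM digests a window of the oscillating trajectory and summarizes
    its periodicity; the residual stack maps (summary, query time) to the
    predicted state. Sizes and the fusion scheme are illustrative
    assumptions, not the paper's exact architecture.
    """
    def __init__(self, hidden: int = 32, width: int = 64, depth: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.inp = nn.Linear(hidden + 1, width)   # periodicity summary + query time t
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.out = nn.Linear(width, 1)            # e.g. well-occupation probability

    def forward(self, window, t):
        # window: (batch, steps, 1) past trajectory; t: (batch, 1) query time
        _, (h, _) = self.lstm(window)             # h: (num_layers, batch, hidden)
        z = torch.cat([h[-1], t], dim=-1)         # fuse summary with query time
        return self.out(self.blocks(torch.relu(self.inp(z))))

# Smoke test with random data
net = HybridNet()
pred = net(torch.randn(8, 100, 1), torch.rand(8, 1))
print(pred.shape)  # torch.Size([8, 1])
```

A plausible training loop would regress the predicted states against trajectories from a numerical solver of the double-well model; the point of the LSTM summary is to spare the residual stack from having to infer the fast oscillation frequency from a single time input.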
Related papers
- Message Passing Variational Autoregressive Network for Solving Intractable Ising Models [6.261096199903392]
Many deep neural networks have been used to solve Ising models, including autoregressive neural networks, convolutional neural networks, recurrent neural networks, and graph neural networks.
Here we propose a variational autoregressive architecture with a message passing mechanism, which can effectively utilize the interactions between spin variables.
The new network trained under an annealing framework outperforms existing methods in solving several prototypical Ising spin Hamiltonians, especially for larger spin systems at low temperatures.
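As a concrete anchor, here is a minimal variational autoregressive network (VAN) over N Ising spins, in the style of the family this paper builds on. The message passing mechanism that is the paper's actual contribution is omitted, and the single masked layer is an illustrative simplification.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Linear):
    """Linear layer with a strictly lower-triangular mask, so output i
    depends only on inputs 0..i-1 (the autoregressive property)."""
    def __init__(self, n):
        super().__init__(n, n)
        self.register_buffer("mask", torch.tril(torch.ones(n, n), diagonal=-1))

    def forward(self, x):
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

class TinyVAN(nn.Module):
    """Minimal variational autoregressive network over n Ising spins."""
    def __init__(self, n):
        super().__init__()
        self.layer = MaskedLinear(n)

    def conditionals(self, s):
        # s in {-1, +1}; returns p(s_i = +1 | s_<i) for every site
        return torch.sigmoid(self.layer(s))

    def sample(self, batch):
        n = self.layer.in_features
        s = torch.zeros(batch, n)
        for i in range(n):                        # sequential ancestral sampling
            p = self.conditionals(s)[:, i]
            s[:, i] = torch.bernoulli(p) * 2 - 1
        return s

    def log_prob(self, s):
        p = self.conditionals(s)
        ber = torch.where(s > 0, p, 1 - p)
        return torch.log(ber + 1e-9).sum(dim=1)
```

Training would minimize the variational free energy E_{s~q}[E(s) + T log q(s)] with a REINFORCE-style gradient while annealing the temperature T, which is the standard VAN recipe the annealing framework above refers to.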
arXiv Detail & Related papers (2024-04-09T11:27:07Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
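For orientation, a generic 1D-CNN classifier for the regular-versus-chaotic task might look like the sketch below; the paper's LKCNN has its own specific design, so every size and layer choice here is an assumption.

```python
import torch
import torch.nn as nn

# Generic 1D-CNN classifier labeling a time series as regular (0) or
# chaotic (1). A stand-in sketch, not the paper's LKCNN; kernel sizes
# and channel counts are assumptions.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=31, padding=15), nn.ReLU(),  # wide kernel to expose periodicity
    nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=15, padding=7), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                                         # logits: regular vs chaotic
)

x = torch.randn(4, 1, 512)   # batch of 4 series, 512 samples each
print(model(x).shape)        # torch.Size([4, 2])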
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Generalization and Estimation Error Bounds for Model-based Neural Networks [78.88759757988761]
We show that the generalization abilities of model-based networks for sparse recovery exceed those of regular ReLU networks.
We derive practical design rules that allow the construction of model-based networks with guaranteed high generalization.
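A canonical member of the model-based family discussed here is LISTA (Gregor & LeCun, 2010), which unrolls ISTA iterations for sparse recovery with learned weights; the sketch below shows that standard construction (layer count and sizes are arbitrary), not anything specific to this paper.

```python
import torch
import torch.nn as nn

def soft(x, theta):
    """Soft-thresholding, the proximal operator of the L1 norm."""
    return torch.sign(x) * torch.relu(x.abs() - theta)

class LISTA(nn.Module):
    """Learned ISTA: unrolls K iterations of ISTA for sparse recovery
    (min_x 0.5 * ||y - Ax||^2 + lambda * ||x||_1) and learns the
    matrices and thresholds end to end."""
    def __init__(self, m, n, K=10):
        super().__init__()
        self.We = nn.Linear(m, n, bias=False)   # maps measurement y into code space
        self.S = nn.Linear(n, n, bias=False)    # learned 'mutual inhibition' matrix
        self.theta = nn.Parameter(0.1 * torch.ones(K, n))
        self.K = K

    def forward(self, y):
        x = soft(self.We(y), self.theta[0])
        for k in range(1, self.K):
            x = soft(self.We(y) + self.S(x), self.theta[k])
        return x

net = LISTA(m=64, n=256)
x_hat = net(torch.randn(8, 64))   # recover 256-dim sparse codes from 64 measurements
```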
arXiv Detail & Related papers (2023-04-19T16:39:44Z)
- On the effectiveness of neural priors in modeling dynamical systems [28.69155113611877]
We discuss the architectural regularization that neural networks offer when learning such systems.
We show that simple coordinate networks with few layers can be used to solve multiple problems in modelling dynamical systems.
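A coordinate network in this sense can be as simple as a small MLP from the coordinate (here, time) to the system state; a minimal sketch, with depth, width, and output dimension assumed:

```python
import torch
import torch.nn as nn

# Minimal coordinate network: a small MLP mapping time t directly to the
# system state x(t). The paper's point is that even few-layer networks
# of this form act as a useful architectural prior.
coord_net = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 2),            # e.g. position and momentum of the system
)

t = torch.linspace(0, 1, 100).unsqueeze(-1)   # query coordinates
traj = coord_net(t)                           # predicted trajectory, shape (100, 2)
```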
arXiv Detail & Related papers (2023-03-10T06:21:24Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Learning Fast and Slow for Online Time Series Forecasting [76.50127663309604]
Fast and Slow learning Networks (FSNet) is a holistic framework for online time-series forecasting.
FSNet balances fast adaptation to recent changes and retrieving similar old knowledge.
Our code will be made publicly available.
arXiv Detail & Related papers (2022-02-23T18:23:07Z)
- Learn to Communicate with Neural Calibration: Scalability and Generalization [10.775558382613077]
We propose a scalable and generalizable neural calibration framework for future wireless system design.
The proposed neural calibration framework is applied to solve challenging resource management problems in massive multiple-input multiple-output (MIMO) systems.
arXiv Detail & Related papers (2021-10-01T09:00:25Z)
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to regularizing neural networks via the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) is developed from the proposed upper bound on the local Rademacher complexity.
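Reproducing the paper's bound-derived penalty here would be guesswork, so the sketch below only illustrates the generic pattern it instantiates: stochastic dropout inside the network combined with an explicit complexity penalty added to the loss, with a plain Frobenius-norm term standing in for LocalDrop's Rademacher-based regularizer.

```python
import torch
import torch.nn as nn

# Hybrid-regularization pattern: dropout inside the network plus an
# explicit complexity penalty in the loss. The weight-norm term below
# is a simple stand-in, NOT LocalDrop's actual penalty.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(), nn.Dropout(p=0.3),
    nn.Linear(256, 10),
)

def regularized_loss(logits, labels, lam=1e-4):
    ce = nn.functional.cross_entropy(logits, labels)
    penalty = sum(p.pow(2).sum() for name, p in model.named_parameters()
                  if "weight" in name)
    return ce + lam * penalty
```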
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- Hybrid Backpropagation Parallel Reservoir Networks [8.944918753413827]
We propose a novel hybrid network, which combines the effectiveness of learning random temporal features of reservoirs with the readout power of a deep neural network with batch normalization.
We demonstrate that our new network outperforms LSTMs and GRUs, including multi-layer "deep" versions of these networks.
We also show that the inclusion of a novel meta-ring structure, which we call HBP-ESN M-Ring, achieves similar performance to one large reservoir while decreasing the memory required by an order of magnitude.
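The general recipe, a fixed random reservoir feeding a trainable batch-normalized readout, fits in a few lines; in the sketch below a single reservoir stands in for the paper's parallel ensemble and meta-ring, and all sizes and the spectral radius are assumptions.

```python
import torch
import torch.nn as nn

class HybridReservoir(nn.Module):
    """Fixed random reservoir generating temporal features; only the
    batch-normalized deep readout is trained by backpropagation."""
    def __init__(self, n_in=1, n_res=200, n_out=1, spectral_radius=0.9):
        super().__init__()
        W = torch.randn(n_res, n_res)
        W *= spectral_radius / torch.linalg.eigvals(W).abs().max()  # scale for stability
        self.W = nn.Parameter(W, requires_grad=False)               # frozen recurrent weights
        self.W_in = nn.Parameter(torch.randn(n_res, n_in), requires_grad=False)
        self.readout = nn.Sequential(                               # the trained part
            nn.Linear(n_res, 64), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Linear(64, n_out),
        )

    def forward(self, u):
        # u: (batch, steps, n_in); run the reservoir, read out the last state
        x = torch.zeros(u.size(0), self.W.size(0))
        for t in range(u.size(1)):
            x = torch.tanh(x @ self.W.T + u[:, t] @ self.W_in.T)
        return self.readout(x)

net = HybridReservoir()
print(net(torch.randn(16, 50, 1)).shape)   # torch.Size([16, 1])
```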
arXiv Detail & Related papers (2020-10-27T21:03:35Z)
- A Principle of Least Action for the Training of Neural Networks [10.342408668490975]
We show the presence of a low kinetic energy displacement bias in the transport map of the network, and link this bias with generalization performance.
We propose a new learning algorithm, which automatically adapts to the complexity of the given task, and leads to networks with a high generalization ability even in low data regimes.
arXiv Detail & Related papers (2020-09-17T15:37:34Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon the recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z)