Physics-Informed Long Short-Term Memory for Forecasting and
Reconstruction of Chaos
- URL: http://arxiv.org/abs/2302.10779v1
- Date: Fri, 3 Feb 2023 18:27:59 GMT
- Title: Physics-Informed Long Short-Term Memory for Forecasting and
Reconstruction of Chaos
- Authors: Elise Özalp, Georgios Margazoglou, Luca Magri
- Abstract summary: We present the Physics-Informed Long Short-Term Memory (PI-LSTM) network to reconstruct and predict the evolution of unmeasured variables in a chaotic system.
The training is constrained by a regularization term, which penalizes solutions that violate the system's governing equations.
This work opens up new opportunities for state reconstruction and learning of the dynamics of nonlinear systems.
- Score: 5.8010446129208155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present the Physics-Informed Long Short-Term Memory (PI-LSTM) network to
reconstruct and predict the evolution of unmeasured variables in a chaotic
system. The training is constrained by a regularization term, which penalizes
solutions that violate the system's governing equations. The network is
showcased on the Lorenz-96 model, a prototypical chaotic dynamical system, for
a varying number of variables to reconstruct. First, we show the PI-LSTM
architecture and explain how to constrain the differential equations, which is
a non-trivial task in LSTMs. Second, the PI-LSTM is numerically evaluated in
the long-term autonomous evolution to study its ergodic properties. We show
that it correctly predicts the statistics of the unmeasured variables, which
cannot be achieved without the physical constraint. Third, we compute the
Lyapunov exponents of the network to infer the key stability properties of the
chaotic system. For reconstruction purposes, adding the physics-informed loss
qualitatively enhances the dynamical behaviour of the network, compared to a
data-driven only training. This is quantified by the agreement of the Lyapunov
exponents. This work opens up new opportunities for state reconstruction and
learning of the dynamics of nonlinear systems.
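The physics constraint described in the abstract can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch rendering, assuming the physics residual is evaluated with a forward difference over the predicted sequence; the names (PILSTM, pi_loss), the step sizes, and the discretization are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

F = 8.0    # standard Lorenz-96 forcing (illustrative choice)
DT = 0.01  # sampling interval of the training series (assumption)

def lorenz96_rhs(x):
    """Lorenz-96 governing equations: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F."""
    return (torch.roll(x, -1, -1) - torch.roll(x, 2, -1)) * torch.roll(x, 1, -1) - x + F

class PILSTM(nn.Module):
    """LSTM mapping the measured variables to the full state at each step."""
    def __init__(self, n_measured, n_state, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_measured, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_state)

    def forward(self, measured):            # (batch, time, n_measured)
        h, _ = self.lstm(measured)
        return self.head(h)                 # (batch, time, n_state)

def pi_loss(model, measured, measured_idx, lam=1.0):
    pred = model(measured)
    # Data loss: the network must reproduce the measured variables.
    data = ((pred[..., measured_idx] - measured) ** 2).mean()
    # Physics loss: a finite-difference estimate of d(pred)/dt must satisfy the
    # governing equations, which also constrains the unmeasured variables.
    dxdt = (pred[:, 1:] - pred[:, :-1]) / DT
    phys = ((dxdt - lorenz96_rhs(pred[:, :-1])) ** 2).mean()
    return data + lam * phys
```

Here lam plays the role of the regularization weight mentioned in the abstract: it penalizes solutions that violate the governing equations without requiring measurements of the hidden variables.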
Related papers
- Physics-Informed Machine Learning for Seismic Response Prediction of Nonlinear Steel Moment Resisting Frame Structures [6.483318568088176]
The PiML method integrates scientific principles and physical laws into deep neural networks to model the seismic responses of nonlinear structures.
Manipulating the equation of motion helps the network learn system nonlinearities and confines solutions to physically interpretable results.
The resulting model handles complex data better than existing physics-guided LSTM models and outperforms other non-physics-based data-driven networks.
arXiv Detail & Related papers (2024-02-28T02:16:03Z)
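To make the "manipulating the equation of motion" idea concrete, here is a hedged sketch of a structural-dynamics residual, M x'' + C x' + R(x) = -M ι a_g, with only the restoring force R learned by a network; the function names, shapes, and the unit influence vector are illustrative assumptions, as the paper's formulation is not given here.

```python
import torch

def eom_residual(M, C, restoring_net, disp, vel, acc, ground_acc):
    """Residual of  M x'' + C x' + R(x) = -M @ iota * a_g  for a frame structure.

    M, C: known (n_dof, n_dof) mass and damping matrices.
    disp, vel, acc: (batch, n_dof) response histories.
    ground_acc: (batch, 1) ground acceleration a_g.
    restoring_net: learned model of the nonlinear restoring force R(x).
    """
    iota = torch.ones(M.shape[0], 1)                 # influence vector (assumption)
    inertial = acc @ M.T
    damping = vel @ C.T
    restoring = restoring_net(disp)
    forcing = -ground_acc * (M @ iota).squeeze(-1)
    return inertial + damping + restoring - forcing  # ~0 for physical responses
```

Squaring and averaging this residual alongside the data loss is one way to realize the physics-guided training the summary describes.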
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore the parallels between learning dynamics and physical systems in and out of equilibrium.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
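The SGLD variant is described only at a high level above; the sketch below shows a plain SGLD epoch with without-replacement minibatching (shuffle once, then traverse disjoint batches). Step size, temperature, and the function names are assumptions, not the paper's settings.

```python
import numpy as np

def sgld_epoch(theta, grad_fn, data, lr=1e-3, temperature=1e-4, batch=32, rng=None):
    """One epoch of stochastic gradient Langevin dynamics with
    without-replacement minibatching: each sample is used exactly once
    per epoch. data is an array; grad_fn(theta, batch) returns the
    minibatch loss gradient (illustrative interface)."""
    rng = rng or np.random.default_rng()
    order = rng.permutation(len(data))           # shuffle once, no replacement
    for start in range(0, len(data), batch):
        idx = order[start:start + batch]
        grad = grad_fn(theta, data[idx])         # minibatch gradient
        noise = rng.normal(size=theta.shape)     # injected Langevin noise
        theta = theta - lr * grad + np.sqrt(2 * lr * temperature) * noise
    return theta
```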
- Reconstruction, forecasting, and stability of chaotic dynamics from partial data [4.266376725904727]
We propose data-driven methods to infer the dynamics of hidden chaotic variables from partial observations.
We show that the proposed networks can forecast the hidden variables, both time-accurately and statistically.
This work opens new opportunities for reconstructing the full state, inferring hidden variables, and computing the stability of chaotic systems from partial data.
arXiv Detail & Related papers (2023-05-24T13:01:51Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many neural-network approaches learn an end-to-end model that implicitly captures both the governing PDE and the material model.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw), which utilizes a network architecture that strictly guarantees standard constitutive priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
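A minimal sketch of the NCLaw separation of concerns: a hard-coded 1-D momentum balance integrated explicitly, with only the constitutive (strain-to-stress) map learned. The discretization, class names, and parameter values are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

class NeuralConstitutiveLaw(nn.Module):
    """Learned material model mapping strain to stress. Only this part is
    trained; the balance law below is hard-coded, following the NCLaw
    philosophy of enforcing known PDEs explicitly."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, strain):
        return self.net(strain)

def step(u, v, material, dx=0.01, dt=1e-4, rho=1.0):
    """One explicit step of the 1-D momentum equation rho * v_t = sigma_x,
    with the stress sigma supplied by the learned constitutive law."""
    strain = (u[1:] - u[:-1]) / dx                        # du/dx on cell centres
    sigma = material(strain.unsqueeze(-1)).squeeze(-1)    # learned stress
    div_sigma = (sigma[1:] - sigma[:-1]) / dx             # d(sigma)/dx, interior nodes
    v = v.clone()
    v[1:-1] = v[1:-1] + dt * div_sigma / rho              # known PDE, not learned
    return u + dt * v, v
```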
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
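The summary does not give ConCerNet's objective, so the sketch below shows one plausible contrastive criterion for conservation discovery: a learned quantity g(x) should be (near-)constant within a trajectory yet vary across trajectories. This is an assumption-laden illustration of the idea, not the paper's actual loss.

```python
import torch

def conservation_contrastive_loss(g, trajectories):
    """Contrastive-style objective for a learned conserved quantity g(x).

    trajectories: (n_traj, n_steps, dim) batch of simulated rollouts.
    Snapshots from the same trajectory should share the value of g;
    snapshots from different trajectories should not.
    """
    n_traj, n_steps, dim = trajectories.shape
    z = g(trajectories.reshape(-1, dim)).reshape(n_traj, n_steps, -1)
    within = z.var(dim=1).mean()         # spread along each trajectory -> 0
    across = z.mean(dim=1).var(dim=0).mean()  # spread across trajectories -> large
    return within / (across + 1e-6)
```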
- Time-Reversal Symmetric ODE Network [138.02741983098454]
Time-reversal symmetry is a fundamental property that frequently holds in classical and quantum mechanics.
We propose a novel loss function that measures how well our ordinary differential equation (ODE) networks comply with this time-reversal symmetry.
We show that, even for systems that do not possess full time-reversal symmetry, TRS-ODENs achieve better predictive performance than baselines.
arXiv Detail & Related papers (2020-07-22T12:19:40Z)
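A sketch of a time-reversal-symmetry loss of the kind described above, assuming phase-space states (q, p), the reversing operator R(q, p) = (q, -p), and an explicit Euler rollout; the paper's solver and exact loss may differ.

```python
import torch

def reverse(state):
    """Reversing operator R for phase-space states: R(q, p) = (q, -p)."""
    q, p = state.chunk(2, dim=-1)
    return torch.cat([q, -p], dim=-1)

def trs_loss(ode_f, x0, n_steps=50, dt=0.01):
    """If the learned dynamics are time-reversal symmetric, evolving the
    reversed final state forward should retrace the reversed forward
    trajectory; the mismatch is penalized."""
    fwd = [x0]
    for _ in range(n_steps):
        fwd.append(fwd[-1] + dt * ode_f(fwd[-1]))        # forward rollout
    fwd = torch.stack(fwd)
    bwd = [reverse(fwd[-1])]
    for _ in range(n_steps):
        bwd.append(bwd[-1] + dt * ode_f(bwd[-1]))        # rollout from reversed end state
    bwd = torch.stack(bwd)
    return ((bwd - reverse(fwd.flip(0))) ** 2).mean()    # symmetry residual
```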
- DynNet: Physics-based neural architecture design for linear and nonlinear structural response modeling and prediction [2.572404739180802]
In this study, a physics-based recurrent neural network model is designed to learn the dynamics of linear and nonlinear multiple-degree-of-freedom systems.
The model is able to estimate a complete set of responses, including displacement, velocity, acceleration, and internal forces.
arXiv Detail & Related papers (2020-07-03T17:05:35Z)
- Lipschitz Recurrent Neural Networks [100.72827570987992]
We show that our Lipschitz recurrent unit is more robust to input and parameter perturbations than other continuous-time RNNs.
Our experiments demonstrate that the Lipschitz RNN can outperform existing recurrent units on a range of benchmark tasks.
arXiv Detail & Related papers (2020-06-22T08:44:52Z)
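A sketch of a Lipschitz recurrent unit, assuming the commonly reported parameterization h' = A h + tanh(W h + U x + b) in which A and W are built from a symmetric/skew-symmetric split with a diagonal shift to control their spectrum; treat the hyperparameters and details as assumptions.

```python
import torch
import torch.nn as nn

class LipschitzRNNCell(nn.Module):
    """Continuous-time recurrent unit  h' = A h + tanh(W h + U x + b)."""
    def __init__(self, n_in, n_hidden, beta=0.75, gamma=0.9, dt=0.01):
        super().__init__()
        self.MA = nn.Parameter(torch.randn(n_hidden, n_hidden) / n_hidden**0.5)
        self.MW = nn.Parameter(torch.randn(n_hidden, n_hidden) / n_hidden**0.5)
        self.U = nn.Linear(n_in, n_hidden)
        self.beta, self.gamma, self.dt = beta, gamma, dt

    @staticmethod
    def _stable(M, beta, gamma):
        # Symmetric/skew-symmetric blend minus a diagonal shift: keeps the
        # matrix spectrum in a controlled region (stability/Lipschitz bound).
        return ((1 - beta) * (M + M.T) + beta * (M - M.T)
                - gamma * torch.eye(M.shape[0], device=M.device))

    def forward(self, x, h):
        A = self._stable(self.MA, self.beta, self.gamma)
        W = self._stable(self.MW, self.beta, self.gamma)
        # One explicit-Euler step of the continuous-time dynamics.
        return h + self.dt * (h @ A.T + torch.tanh(h @ W.T + self.U(x)))
```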
- Entanglement-Embedded Recurrent Network Architecture: Tensorized Latent State Propagation and Chaos Forecasting [0.0]
Chaotic time-series forecasting remains far less well understood.
Traditional statistical and machine-learning methods are inefficient at capturing chaos in nonlinear dynamical systems.
We introduce a new long short-term memory (LSTM)-based recurrent architecture.
arXiv Detail & Related papers (2020-06-10T23:03:33Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics through implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
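The "networks of linear first-order dynamical systems" in the entry above can be sketched with the published LTC state equation dx/dt = -x/tau + f(x, u)(A - x), whose effective time constant varies with the input, integrated with a fused semi-implicit Euler step; layer sizes and the gating network here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LTCCell(nn.Module):
    """Liquid time-constant cell: dx/dt = -x/tau + f(x, u) * (A - x)."""
    def __init__(self, n_in, n_hidden, dt=0.1):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(n_in + n_hidden, n_hidden), nn.Sigmoid())
        self.tau = nn.Parameter(torch.ones(n_hidden))   # base time constants
        self.A = nn.Parameter(torch.zeros(n_hidden))    # equilibrium targets
        self.dt = dt

    def forward(self, u, x):
        gate = self.f(torch.cat([u, x], dim=-1))
        # Fused semi-implicit Euler step: keeps the state bounded and stable,
        # matching the stability claims in the summary above.
        num = x + self.dt * gate * self.A
        den = 1 + self.dt * (1 / self.tau + gate)
        return num / den
```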
This list is automatically generated from the titles and abstracts of the papers on this site.