Deep learning of thermodynamics-aware reduced-order models from data
- URL: http://arxiv.org/abs/2007.03758v2
- Date: Tue, 9 Mar 2021 17:51:00 GMT
- Title: Deep learning of thermodynamics-aware reduced-order models from data
- Authors: Quercus Hernandez, Alberto Badias, David Gonzalez, Francisco Chinesta,
Elias Cueto
- Abstract summary: We present an algorithm to learn the relevant latent variables of a large-scale discretized physical system.
We then predict its time evolution using thermodynamically-consistent deep neural networks.
- Score: 0.08699280339422537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an algorithm to learn the relevant latent variables of a
large-scale discretized physical system and predict its time evolution using
thermodynamically-consistent deep neural networks. Our method relies on sparse
autoencoders, which reduce the dimensionality of the full order model to a set
of sparse latent variables with no prior knowledge of the coded space
dimensionality. Then, a second neural network is trained to learn the
metriplectic structure of those reduced physical variables and predict their time
evolution with a so-called structure-preserving neural network. This data-based
integrator is guaranteed to conserve the total energy of the system and the
entropy inequality, and can be applied to both conservative and dissipative
systems. The integrated paths can then be decoded to the original
full-dimensional manifold and be compared to the ground truth solution. This
method is tested with two examples applied to fluid and solid mechanics.
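For orientation, here is a minimal sketch, not the authors' released code, of the two-stage pipeline the abstract describes, assuming a PyTorch setting. The second network enforces the metriplectic (GENERIC) evolution equation dz/dt = L grad E(z) + M grad S(z), with L skew-symmetric, M symmetric positive semi-definite, and the degeneracy conditions L grad S = 0 and M grad E = 0, which together yield exact energy conservation and non-negative entropy production. Class names, layer sizes, and the constant-operator simplification are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Maps the full-order state to a low-dimensional latent code.
    An L1 penalty on the code (added to the training loss) promotes
    sparsity, so the effective latent dimensionality need not be
    fixed a priori."""
    def __init__(self, full_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(full_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, full_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

class StructurePreservingNet(nn.Module):
    """Learns a metriplectic (GENERIC) latent dynamics
    z' = L grad E(z) + M grad S(z), with L skew-symmetric and
    M symmetric PSD by construction. Constant L and M are a
    simplification; state-dependent operators are also possible."""
    def __init__(self, latent_dim, hidden=64):
        super().__init__()
        self.L_raw = nn.Parameter(0.01 * torch.randn(latent_dim, latent_dim))
        self.M_raw = nn.Parameter(0.01 * torch.randn(latent_dim, latent_dim))
        self.E = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))  # energy potential
        self.S = nn.Sequential(nn.Linear(latent_dim, hidden), nn.Tanh(),
                               nn.Linear(hidden, 1))  # entropy potential

    def _operators(self):
        L = self.L_raw - self.L_raw.T   # skew-symmetric by construction
        M = self.M_raw @ self.M_raw.T   # symmetric PSD by construction
        return L, M

    def _gradients(self, z):
        z = z.detach().requires_grad_(True)
        gE = torch.autograd.grad(self.E(z).sum(), z, create_graph=True)[0]
        gS = torch.autograd.grad(self.S(z).sum(), z, create_graph=True)[0]
        return gE, gS

    def step(self, z, dt):
        """One forward-Euler step of the GENERIC evolution equation."""
        L, M = self._operators()
        gE, gS = self._gradients(z)
        return z + dt * (gE @ L.T + gS @ M.T)

    def degeneracy_penalty(self, z):
        """Soft loss enforcing L grad S = 0 and M grad E = 0, the conditions
        behind energy conservation and the entropy inequality."""
        L, M = self._operators()
        gE, gS = self._gradients(z)
        return (gS @ L.T).pow(2).mean() + (gE @ M.T).pow(2).mean()
```

Training would then minimize a reconstruction error, a sparsity penalty on the code, a prediction error between `step(z_t, dt)` and the encoded next snapshot, and `degeneracy_penalty`, so that integrated latent paths can be decoded back to the full-dimensional manifold.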
Related papers
- Balanced Neural ODEs: nonlinear model order reduction and Koopman operator approximations [0.0]
Variational Autoencoders (VAEs) are a powerful framework for learning compact latent representations.
NeuralODEs excel in learning transient system dynamics.
This work combines the strengths of both to create fast surrogate models with adjustable complexity.
arXiv Detail & Related papers (2024-10-14T05:45:52Z)
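For the Balanced Neural ODEs entry above, a minimal sketch of the VAE-plus-NeuralODE pattern, assuming PyTorch; class names, layer sizes, and the fixed-step RK4 integrator are illustrative choices, not the paper's implementation:

```python
import torch
import torch.nn as nn

class LatentODESurrogate(nn.Module):
    def __init__(self, state_dim, latent_dim):
        super().__init__()
        self.enc = nn.Linear(state_dim, 2 * latent_dim)  # mean and log-variance
        self.dec = nn.Linear(latent_dim, state_dim)
        self.f = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                               nn.Linear(64, latent_dim))  # latent vector field

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparametrize

    def rk4_step(self, z, dt):
        k1 = self.f(z)
        k2 = self.f(z + 0.5 * dt * k1)
        k3 = self.f(z + 0.5 * dt * k2)
        k4 = self.f(z + dt * k3)
        return z + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def rollout(self, x0, dt, n_steps):
        """Encode once, integrate the ODE in latent space, decode each step."""
        z = self.encode(x0)
        traj = []
        for _ in range(n_steps):
            z = self.rk4_step(z, dt)
            traj.append(self.dec(z))
        return torch.stack(traj)
```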
- Neural Incremental Data Assimilation [8.817223931520381]
We introduce a deep learning approach where the physical system is modeled as a sequence of coarse-to-fine Gaussian prior distributions parametrized by a neural network.
This allows us to define an assimilation operator, which is trained in an end-to-end fashion to minimize the reconstruction error.
We illustrate our approach on chaotic dynamical physical systems with sparse observations, and compare it to traditional variational data assimilation methods.
arXiv Detail & Related papers (2024-06-21T11:42:55Z)
- tLaSDI: Thermodynamics-informed latent space dynamics identification [0.0]
We propose a latent space dynamics identification method, namely tLaSDI, that embeds the first and second principles of thermodynamics.
The latent variables are learned through an autoencoder as a nonlinear dimension reduction model.
An intriguing correlation is empirically observed between a quantity from tLaSDI in the latent space and the behaviors of the full-state solution.
arXiv Detail & Related papers (2024-03-09T09:17:23Z)
- Thermodynamics-informed super-resolution of scarce temporal dynamics data [4.893345190925178]
We present a method to increase the resolution of measurements of a physical system and subsequently predict its time evolution.
Our method uses adversarial autoencoders, which reduce the dimensionality of the full order model to a set of latent variables that are enforced to match a prior.
A second neural network is trained to learn the physical structure of the latent variables and predict their temporal evolution.
arXiv Detail & Related papers (2024-02-27T13:46:45Z)
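The adversarial-autoencoder ingredient of the super-resolution entry above can be sketched as follows: a discriminator learns to tell encoder outputs from samples of a fixed prior, and the encoder learns to fool it, pushing the latent distribution toward the prior. The architecture sizes and the standard-normal prior are assumptions for illustration only.

```python
import torch
import torch.nn as nn

latent_dim = 8
encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(),
                        nn.Linear(64, latent_dim))
discriminator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                              nn.Linear(64, 1))  # logit: "came from the prior"
bce = nn.BCEWithLogitsLoss()

def adversarial_losses(x):
    """Returns (discriminator loss, encoder regularization loss)."""
    z_fake = encoder(x)                    # codes produced from data
    z_real = torch.randn_like(z_fake)      # samples from the N(0, I) prior
    d_loss = bce(discriminator(z_real), torch.ones(len(x), 1)) + \
             bce(discriminator(z_fake.detach()), torch.zeros(len(x), 1))
    e_loss = bce(discriminator(z_fake), torch.ones(len(x), 1))  # fool the critic
    return d_loss, e_loss
```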
- Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z)
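A minimal sketch of the diffusion-spectral-entropy idea in the entry above, under one plausible reading: build a Gaussian affinity over the data, row-normalize it into a diffusion (random-walk) matrix, and take the Shannon entropy of its normalized eigenvalue spectrum. The bandwidth choice and exact normalization here are assumptions; see the paper for the precise estimator.

```python
import numpy as np

def diffusion_spectral_entropy(X, sigma=1.0, t=1):
    """X: (n_samples, n_features). Returns the entropy of the normalized
    eigenvalue spectrum of the t-step diffusion matrix over the data."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    K = np.exp(-d2 / (2.0 * sigma ** 2))                 # Gaussian affinities
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic diffusion
    eig = np.abs(np.linalg.eigvals(np.linalg.matrix_power(P, t)))
    p = eig / eig.sum()                                  # normalized spectrum
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())                 # Shannon entropy
```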
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore the parallels between network training and physical systems out of equilibrium.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
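For reference on the entry above, here is the standard SGLD update (the baseline, not the paper's without-replacement variant): each parameter step adds Gaussian noise scaled to a temperature so that the long-time distribution approaches a Gibbs measure. Learning rate and temperature values are placeholders.

```python
import torch

def sgld_step(params, loss, lr=1e-3, temperature=1e-4):
    """One SGLD update: gradient descent plus sqrt(2*lr*T) Gaussian noise."""
    grads = torch.autograd.grad(loss, params)
    noise_scale = (2.0 * lr * temperature) ** 0.5
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.add_(-lr * g + noise_scale * torch.randn_like(p))
```

The paper's variant would replace the usual with-replacement minibatch sampling by shuffling and partitioning the dataset each epoch, which changes the stationary distribution the dynamics converges to.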
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
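A hedged sketch of the fit-through-a-surrogate pattern described in the entry above: a network trained to map Hamiltonian parameters (plus momentum-transfer coordinates) to simulated spectra is frozen, and unknown parameters are recovered from measured data by gradient descent through the surrogate. All names and the least-squares misfit are illustrative assumptions.

```python
import torch

def recover_parameters(surrogate, q_grid, measured, n_params,
                       steps=500, lr=1e-2):
    """surrogate(theta, q) -> predicted intensity; theta is optimized so
    that the frozen, differentiable surrogate matches the measured data."""
    theta = torch.zeros(n_params, requires_grad=True)   # initial guess
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = surrogate(theta, q_grid)
        loss = ((pred - measured) ** 2).mean()          # least-squares misfit
        loss.backward()                                 # autodiff through surrogate
        opt.step()
    return theta.detach()
```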
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- Phase space learning with neural networks [0.0]
This work proposes an autoencoder neural network as a non-linear generalization of projection-based methods for solving Partial Differential Equations (PDEs).
The proposed deep learning architecture generates the dynamics of PDEs by integrating them entirely in a very reduced latent space without intermediate reconstructions, and then decodes the latent solution back to the original space.
Properly regularized neural networks are shown to reliably learn the global characteristics of a dynamical system's phase space from sample data of a single path, and to predict unseen bifurcations.
arXiv Detail & Related papers (2020-06-22T20:28:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.