Thermodynamics-informed super-resolution of scarce temporal dynamics data
- URL: http://arxiv.org/abs/2402.17506v2
- Date: Wed, 3 Jul 2024 12:13:21 GMT
- Title: Thermodynamics-informed super-resolution of scarce temporal dynamics data
- Authors: Carlos Bermejo-Barbanoj, Beatriz Moya, Alberto Badías, Francisco Chinesta, Elías Cueto
- Abstract summary: We present a method to increase the resolution of measurements of a physical system and subsequently predict its time evolution.
Our method uses adversarial autoencoders, which reduce the dimensionality of the full order model to a set of latent variables that are enforced to match a prior.
A second neural network is trained to learn the physical structure of the latent variables and predict their temporal evolution.
- Score: 4.893345190925178
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a method to increase the resolution of measurements of a physical system and subsequently predict its time evolution using thermodynamics-aware neural networks. Our method uses adversarial autoencoders, which reduce the dimensionality of the full-order model to a set of latent variables that are enforced to match a prior, for example a normal distribution. Adversarial autoencoders are generative models, and they can be trained to generate high-resolution samples from low-resolution inputs, meaning they can address the so-called super-resolution problem. A second neural network is then trained to learn the physical structure of the latent variables and predict their temporal evolution. This network is known as a structure-preserving neural network. It learns the metriplectic structure of the system and applies a physical bias to ensure that the first and second principles of thermodynamics are fulfilled. The integrated trajectories are decoded to their original dimensionality, as well as to the higher-dimensional space produced by the adversarial autoencoder, and compared to the ground-truth solution. The method is tested on two examples of flow over a cylinder with different fluid properties.
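For context, the metriplectic (GENERIC) structure that the structure-preserving network enforces is not written out in the abstract; it typically takes the standard form below (background knowledge, not quoted from the paper):

```latex
% GENERIC / metriplectic evolution of the latent state z
\dot{z} = L(z)\,\nabla E(z) + M(z)\,\nabla S(z),
\qquad L(z)\,\nabla S(z) = 0, \quad M(z)\,\nabla E(z) = 0
```

Here L is skew-symmetric (the reversible, energy-conserving part), M is symmetric positive semi-definite (the dissipative part), and the two degeneracy conditions enforce energy conservation and non-negative entropy production, i.e. the first and second principles of thermodynamics. A minimal PyTorch sketch of one integration step under this structure follows; the network names, sizes, and explicit-Euler update are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class MetriplecticStep(nn.Module):
    """One explicit-Euler step of metriplectic (GENERIC) latent dynamics.

    A minimal sketch: small MLPs predict the operators L, M and the
    gradients of energy E and entropy S from the latent state z. The
    architecture is illustrative, not the paper's exact network.
    """

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.dim = dim
        # Unconstrained outputs; the metriplectic structure is imposed in forward().
        self.l_net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim * dim))
        self.m_net = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim * dim))
        self.grad_e = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                    nn.Linear(hidden, dim))
        self.grad_s = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                    nn.Linear(hidden, dim))

    def forward(self, z: torch.Tensor, dt: float = 1e-2):
        d = self.dim
        A = self.l_net(z).view(-1, d, d)
        B = self.m_net(z).view(-1, d, d)
        L = A - A.transpose(1, 2)          # skew-symmetric: conservative part
        M = B @ B.transpose(1, 2)          # symmetric PSD: dissipative part
        dE = self.grad_e(z).unsqueeze(-1)  # gradient of energy, shape (batch, d, 1)
        dS = self.grad_s(z).unsqueeze(-1)  # gradient of entropy
        dz = (L @ dE + M @ dS).squeeze(-1)  # GENERIC right-hand side
        # Degeneracy penalty: drives L@dS and M@dE to zero so that the
        # learned dynamics conserve energy and never destroy entropy.
        deg = (L @ dS).squeeze(-1).pow(2).mean() + (M @ dE).squeeze(-1).pow(2).mean()
        return z + dt * dz, deg

# In training, `deg` would be added to the reconstruction and adversarial
# losses so the learned operators respect the thermodynamic structure.
```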
Related papers
- Neural Incremental Data Assimilation [8.817223931520381]
We introduce a deep learning approach where the physical system is modeled as a sequence of coarse-to-fine Gaussian prior distributions parametrized by a neural network.
This allows us to define an assimilation operator, which is trained in an end-to-end fashion to minimize the reconstruction error.
We illustrate our approach on chaotic dynamical physical systems with sparse observations, and compare it to traditional variational data assimilation methods.
arXiv Detail & Related papers (2024-06-21T11:42:55Z) - tLaSDI: Thermodynamics-informed latent space dynamics identification [0.0]
We propose a latent space dynamics identification method, namely tLaSDI, which embeds the first and second principles of thermodynamics.
The latent variables are learned through an autoencoder as a nonlinear dimension reduction model.
An intriguing correlation is empirically observed between a quantity from tLaSDI in the latent space and the behaviors of the full-state solution.
arXiv Detail & Related papers (2024-03-09T09:17:23Z) - Assessing Neural Network Representations During Training Using Noise-Resilient Diffusion Spectral Entropy [55.014926694758195]
Entropy and mutual information in neural networks provide rich information on the learning process.
We leverage data geometry to access the underlying manifold and reliably compute these information-theoretic measures.
We show that they form noise-resistant measures of intrinsic dimensionality and relationship strength in high-dimensional simulated data.
arXiv Detail & Related papers (2023-12-04T01:32:42Z) - ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Path sampling of recurrent neural networks by incorporating known physics [0.0]
We show a path sampling approach that allows us to include generic thermodynamic or kinetic constraints into recurrent neural networks.
We demonstrate the method for a widely used type of recurrent neural network, the long short-term memory (LSTM) network.
Our method can be easily generalized to other generative artificial intelligence models and to generic time series in different areas of physical and social sciences.
arXiv Detail & Related papers (2022-03-01T16:35:50Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Deep learning of thermodynamics-aware reduced-order models from data [0.08699280339422537]
We present an algorithm to learn the relevant latent variables of a large-scale discretized physical system.
We then predict its time evolution using thermodynamically-consistent deep neural networks.
arXiv Detail & Related papers (2020-07-03T08:49:01Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations; a minimal sketch of such a cell is given after this list.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
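As a rough illustration of the liquid time-constant idea in the last entry, one such cell combines a linear first-order system with an input-dependent gate that modulates its effective time constant. The explicit-Euler update, layer names, and dimensions below are assumptions for the sketch, not the reference implementation:

```python
import torch
import torch.nn as nn

class LiquidCell(nn.Module):
    """Minimal liquid time-constant cell (explicit-Euler sketch).

    The hidden state follows dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A,
    so the effective time constant varies with the input u. This is a
    simplified reading of liquid time-constant networks, not the authors'
    fused ODE-solver implementation.
    """

    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(in_dim + hidden_dim, hidden_dim), nn.Sigmoid())
        self.log_tau = nn.Parameter(torch.zeros(hidden_dim))  # base time constants
        self.A = nn.Parameter(torch.zeros(hidden_dim))        # target bias

    def forward(self, u: torch.Tensor, x: torch.Tensor, dt: float = 0.1) -> torch.Tensor:
        f = self.gate(torch.cat([u, x], dim=-1))  # input-dependent nonlinear gate
        tau = torch.exp(self.log_tau)             # keep time constants positive
        dx = -(1.0 / tau + f) * x + f * self.A    # liquid time-constant dynamics
        return x + dt * dx                        # one explicit-Euler step

# Unrolled over a sequence: x = cell(u_t, x) at each time step t.
```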
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.