Fixed-kinetic Neural Hamiltonian Flows for enhanced interpretability and reduced complexity
- URL: http://arxiv.org/abs/2302.01955v1
- Date: Fri, 3 Feb 2023 19:05:57 GMT
- Title: Fixed-kinetic Neural Hamiltonian Flows for enhanced interpretability and reduced complexity
- Authors: Vincent Souveton, Arnaud Guillin, Jens Jasche, Guilhem Lavaux, Manon Michel
- Abstract summary: We introduce a fixed kinetic energy version of the Neural Hamiltonian Flows (NHF) model.
Inspired by physics, our approach improves interpretability and requires fewer parameters than previously proposed architectures.
We also adapt NHF to the context of Bayesian inference and illustrate our method on sampling the posterior distribution of two cosmological parameters.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Normalizing Flows (NF) are generative models that are particularly robust
and allow for exact sampling of the learned distribution. They however require
the design of an invertible mapping whose Jacobian determinant has to be
computable. Recently introduced, Neural Hamiltonian Flows (NHF) are built on
Hamiltonian dynamics, which is continuous, volume-preserving and invertible,
making it a natural candidate for robust NF architectures. In particular, its
similarity to classical mechanics could lead to easier interpretability of the
learned mapping. However, despite being a physics-inspired architecture, the
originally introduced NHF still poses a challenge to interpretability. For this
reason, in this work, we introduce a fixed-kinetic-energy version of the NHF
model. Inspired by physics, our approach improves interpretability and requires
fewer parameters than previously proposed architectures. We then study the
robustness of the NHF
architectures to the choice of hyperparameters. We analyze the impact of the
number of leapfrog steps, the integration time and the number of neurons per
hidden layer, as well as the choice of prior distribution, on sampling a
multimodal 2D mixture. The NHF architecture is robust to these choices,
especially the fixed-kinetic energy model. Finally, we adapt NHF to the context
of Bayesian inference and illustrate our method on sampling the posterior
distribution of two cosmological parameters given type Ia supernovae
observations.
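
To make the dynamics concrete, here is a minimal sketch of one such flow, assuming the fixed kinetic energy K(p) = ||p||^2 / 2 mentioned in the abstract and a small learned potential network; all names, layer sizes and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
# Minimal, illustrative sketch of a fixed-kinetic Neural Hamiltonian Flow
# (NOT the authors' code). Only the potential V_theta(q) is learned; the
# kinetic energy is fixed to K(p) = ||p||^2 / 2.
import torch
import torch.nn as nn

class Potential(nn.Module):
    """Learned potential V_theta(q), the only trainable part of the dynamics."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, q: torch.Tensor) -> torch.Tensor:
        return self.net(q).squeeze(-1)  # one scalar energy per sample

def grad_potential(V: nn.Module, q: torch.Tensor) -> torch.Tensor:
    """dV/dq via autograd; create_graph keeps it differentiable for training."""
    if not q.requires_grad:
        q = q.detach().requires_grad_(True)
    return torch.autograd.grad(V(q).sum(), q, create_graph=True)[0]

def leapfrog(q, p, V, dt: float, n_steps: int):
    """Symplectic, volume-preserving integration of Hamilton's equations
    dq/dt = p, dp/dt = -dV/dq (the K(p) = ||p||^2 / 2 special case)."""
    p = p - 0.5 * dt * grad_potential(V, q)   # initial half kick
    for _ in range(n_steps - 1):
        q = q + dt * p                        # drift
        p = p - dt * grad_potential(V, q)     # kick
    q = q + dt * p                            # last drift
    p = p - 0.5 * dt * grad_potential(V, q)   # final half kick
    return q, p
```

Because each leapfrog step is symplectic and therefore volume-preserving, the Jacobian determinant of the full flow equals one, so the model density can be evaluated through the prior at the inverse image (leapfrog with a negated time step); fixing K(p) removes the kinetic network of the original NHF, which is where the parameter savings come from.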
Related papers
- Latent Space Energy-based Neural ODEs [73.01344439786524]
This paper introduces a novel family of deep dynamical models designed to represent continuous-time sequence data.
We train the model using maximum likelihood estimation with Markov chain Monte Carlo.
Experiments on oscillating systems, videos and real-world state sequences (MuJoCo) illustrate that ODEs with the learnable energy-based prior outperform existing counterparts.
arXiv Detail & Related papers (2024-09-05T18:14:22Z)
- Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z)
- Deep Neural Networks as Variational Solutions for Correlated Open Quantum Systems [0.0]
We show that parametrizing the density matrix directly with more powerful models can yield better variational ansatz functions.
We present results for the dissipative one-dimensional transverse-field Ising model and a two-dimensional dissipative Heisenberg model.
arXiv Detail & Related papers (2024-01-25T13:41:34Z)
- Applications of Machine Learning to Modelling and Analysing Dynamical Systems [0.0]
We propose an architecture which combines existing Hamiltonian Neural Network structures into Adaptable Symplectic Recurrent Neural Networks.
This architecture is found to significantly outperform previously proposed neural networks when predicting Hamiltonian dynamics.
We show that this method works efficiently for single parameter potentials and provides accurate predictions even over long periods of time.
arXiv Detail & Related papers (2023-07-22T19:04:17Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses minibatching without replacement; a minimal sketch follows this entry.
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
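
As a rough illustration of the two ingredients named in the summary above, SGLD updates combined with minibatching without replacement, here is a minimal sketch; the function name, signature and hyperparameters are assumptions, not the paper's algorithm.

```python
# Minimal sketch of stochastic gradient Langevin dynamics (SGLD) with
# without-replacement minibatching (illustrative only, not the paper's method).
import numpy as np

def sgld(grad_loss, theta, data, lr=1e-3, temperature=1.0,
         epochs=10, batch=32, seed=0):
    """grad_loss(theta, minibatch) must estimate the full-data loss gradient;
    data is an array with one sample per row."""
    rng = np.random.default_rng(seed)
    n = len(data)
    for _ in range(epochs):
        order = rng.permutation(n)                 # shuffle once per epoch...
        for start in range(0, n, batch):
            mb = data[order[start:start + batch]]  # ...so every sample is
            g = grad_loss(theta, mb)               # visited exactly once
            noise = rng.normal(size=theta.shape)   # injected Gaussian noise
            theta = theta - lr * g + np.sqrt(2.0 * lr * temperature) * noise
    return theta
```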
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients; a minimal POD sketch follows this entry.
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
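
For context on the reduction described in this entry, here is a minimal sketch of the standard POD construction via the singular value decomposition; the names and shapes are illustrative, and this is not the paper's code.

```python
# Minimal sketch of Proper Orthogonal Decomposition (POD) of a snapshot matrix
# (standard SVD-based construction; illustrative, not the paper's code).
import numpy as np

def pod(snapshots: np.ndarray, n_modes: int):
    """snapshots: (n_space, n_time), one flow snapshot per column.
    Returns the mean field, n_modes orthonormal spatial modes and their
    temporal coefficients, so n_time states compress to n_modes signals."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = U[:, :n_modes]                        # spatial POD modes
    coeffs = s[:n_modes, None] * Vt[:n_modes]     # a_k(t) = s_k * v_k(t)
    return mean, modes, coeffs                    # X ~ mean + modes @ coeffs
```

A small surrogate network can then be trained to advance only the few temporal coefficients in time, which is the reduction in degrees of freedom the summary refers to.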
- Positive-definite parametrization of mixed quantum states with deep neural networks [0.0]
We show how to embed an autoregressive structure in the Gram-Hadamard density operator (GHDO) to allow direct sampling of the probability distribution.
We benchmark this architecture on the steady state of the dissipative transverse-field Ising model.
arXiv Detail & Related papers (2022-06-27T17:51:38Z)
- Moser Flow: Divergence-based Generative Modeling on Manifolds [49.04974733536027]
Moser Flow (MF) is a new class of generative models within the family of continuous normalizing flows (CNF).
MF does not require invoking or backpropagating through an ODE solver during training.
We demonstrate for the first time the use of flow models for sampling from general curved surfaces.
arXiv Detail & Related papers (2021-08-18T09:00:24Z)
- A unified framework for Hamiltonian deep neural networks [3.0934684265555052]
Training deep neural networks (DNNs) can be difficult due to vanishing/exploding gradients during weight optimization.
We propose a class of DNNs stemming from the time discretization of Hamiltonian systems.
The proposed Hamiltonian framework, besides encompassing existing networks inspired by marginally stable ODEs, allows one to derive new and more expressive architectures; a minimal sketch of one such discretized Hamiltonian layer follows this entry.
arXiv Detail & Related papers (2021-04-27T13:20:24Z)
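
As a hedged illustration of a layer obtained by time discretization of a Hamiltonian system, here is one common Verlet-style block from this literature; this particular parametrization is an assumption, not necessarily the architecture proposed in the paper.

```python
# Minimal sketch of a DNN block obtained by Verlet discretization of a
# Hamiltonian system (a standard choice in this literature; illustrative only).
import torch
import torch.nn as nn

class HamiltonianBlock(nn.Module):
    """One discrete-time step of dy/dt = K^T tanh(K z + b),
    dz/dt = -K^T tanh(K y + b); the Hamiltonian structure keeps the
    underlying ODE non-dissipative, which counteracts vanishing gradients."""
    def __init__(self, dim: int, h: float = 0.1):
        super().__init__()
        self.K = nn.Parameter(0.1 * torch.randn(dim, dim))
        self.b = nn.Parameter(torch.zeros(dim))
        self.h = h

    def forward(self, y: torch.Tensor, z: torch.Tensor):
        y = y + self.h * torch.tanh(z @ self.K.T + self.b) @ self.K
        z = z - self.h * torch.tanh(y @ self.K.T + self.b) @ self.K
        return y, z
```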
This list is automatically generated from the titles and abstracts of the papers on this site.