Learning the Ising Model with Generative Neural Networks
- URL: http://arxiv.org/abs/2001.05361v2
- Date: Fri, 8 May 2020 22:37:06 GMT
- Title: Learning the Ising Model with Generative Neural Networks
- Authors: Francesco D'Angelo and Lucas Böttcher
- Abstract summary: We study the representational characteristics of restricted Boltzmann machines (RBMs) and variational autoencoders (VAEs).
Our results suggest that the considered RBMs and convolutional VAEs are able to capture the temperature dependence of magnetization, energy, and spin-spin correlations.
We also find that convolutional layers in VAEs are important to model spin correlations, whereas RBMs achieve similar or even better performance without convolutional filters.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep learning and neural networks have led to an increased
interest in the application of generative models in statistical and condensed
matter physics. In particular, restricted Boltzmann machines (RBMs) and
variational autoencoders (VAEs) as specific classes of neural networks have
been successfully applied in the context of physical feature extraction and
representation learning. Despite these successes, however, there is only
limited understanding of their representational properties and limitations. To
better understand the representational characteristics of RBMs and VAEs, we
study their ability to capture physical features of the Ising model at
different temperatures. This approach allows us to quantitatively assess
learned representations by comparing sample features with corresponding
theoretical predictions. Our results suggest that the considered RBMs and
convolutional VAEs are able to capture the temperature dependence of
magnetization, energy, and spin-spin correlations. The samples generated by
RBMs are more evenly distributed across temperature than those generated by
VAEs. We also find that convolutional layers in VAEs are important to model
spin correlations, whereas RBMs achieve similar or even better performance
without convolutional filters.
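The quantitative comparison described above rests on standard Ising observables computed from generated spin configurations. Below is a minimal NumPy sketch of those estimators, assuming a square lattice with periodic boundaries and ±1 spins; the lattice size, batch shape, and function names are illustrative and not taken from the paper's code.

```python
import numpy as np

def magnetization(spins):
    """Mean magnetization per site for a batch of L x L configurations."""
    return spins.mean(axis=(1, 2))

def energy_per_site(spins, J=1.0):
    """Nearest-neighbour Ising energy per site with periodic boundaries."""
    right = np.roll(spins, -1, axis=2)  # each bond counted once via right/down shifts
    down = np.roll(spins, -1, axis=1)
    return -J * (spins * right + spins * down).mean(axis=(1, 2))

def spin_correlation(spins, r):
    """Two-point function <s_i s_{i+r}> averaged along one lattice axis."""
    return (spins * np.roll(spins, -r, axis=2)).mean(axis=(1, 2))

# Stand-in for samples drawn from a trained RBM or VAE at one temperature;
# in the paper's setup these statistics would be compared, temperature by
# temperature, with the corresponding theoretical predictions.
samples = np.random.choice([-1, 1], size=(1000, 32, 32)).astype(float)
print(magnetization(samples).mean(), energy_per_site(samples).mean())
```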
Related papers
- Parameter Estimation of Long Memory Stochastic Processes with Deep Neural Networks [0.0]
We present a purely deep neural network-based approach for estimating long memory parameters of time series models.
Parameters, such as the Hurst exponent, are critical in characterizing the long-range dependence, roughness, and self-similarity of processes.
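For context, a classical non-neural baseline for this estimation task: for self-similar processes the increment variance scales as Var[X_{t+k} − X_t] ∝ k^{2H}, so H can be read off a log-log regression. A toy NumPy sketch on a Brownian path (true H = 0.5); the paper's networks replace this kind of hand-crafted estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=10_000))  # Brownian-like path, so H should be ~0.5

lags = np.arange(1, 64)
log_var = [np.log(np.var(x[k:] - x[:-k])) for k in lags]
# Slope of log-variance vs log-lag equals 2H for self-similar increments.
H = np.polyfit(np.log(lags), log_var, 1)[0] / 2.0
print(f"estimated Hurst exponent: {H:.2f}")
```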
arXiv Detail & Related papers (2024-10-03T03:14:58Z)
- The twin peaks of learning neural networks [3.382017614888546]
Recent works demonstrated the existence of a double-descent phenomenon for the generalization error of neural networks.
We explore a link between this phenomenon and the increase of complexity and sensitivity of the function represented by neural networks.
arXiv Detail & Related papers (2024-01-23T10:09:14Z)
- Investigating the generative dynamics of energy-based neural networks [0.35911228556176483]
We study the generative dynamics of Restricted Boltzmann Machines (RBMs).
We show that the capacity to produce diverse data prototypes can be increased by initiating top-down sampling from chimera states.
We also find that the model is not capable of transitioning between all possible digit states within a single generation trajectory.
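To make the sampling dynamics concrete, here is a minimal sketch of the block Gibbs updates an RBM uses to generate samples, with binary units throughout; the layer sizes, untrained weights, and the chimera-style initialization (stitching together two patterns) are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vis, n_hid = 64, 32
W = rng.normal(scale=0.1, size=(n_vis, n_hid))  # untrained placeholder weights
b, c = np.zeros(n_vis), np.zeros(n_hid)         # visible and hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(v):
    """One bottom-up / top-down sampling sweep."""
    h = (rng.random(n_hid) < sigmoid(v @ W + c)).astype(float)
    return (rng.random(n_vis) < sigmoid(W @ h + b)).astype(float)

# A "chimera"-style start: stitch together halves of two different patterns.
v = np.concatenate([np.ones(n_vis // 2), np.zeros(n_vis // 2)])
for _ in range(1000):
    v = gibbs_step(v)  # the chain relaxes toward one of the learned prototypes
```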
arXiv Detail & Related papers (2023-05-11T12:05:40Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which can then be applied in real time to multi-dimensional scattering data.
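A hedged sketch of that train-once, fit-many workflow: a closed-form toy function stands in for the trained surrogate network, and JAX automatic differentiation recovers the unknown parameters by gradient descent on a least-squares loss. The dispersion form, parameter names, and data are invented for illustration.

```python
import jax
import jax.numpy as jnp

def surrogate(params, q):
    """Stand-in for a network trained to mimic simulated scattering data."""
    J, gap = params
    return gap + 4.0 * J * jnp.sin(q / 2.0) ** 2  # toy spin-wave-like dispersion

q = jnp.linspace(0.0, jnp.pi, 50)
observed = surrogate(jnp.array([1.3, 0.2]), q)    # pretend experimental curve

def loss(params):
    return jnp.mean((surrogate(params, q) - observed) ** 2)

grad = jax.grad(loss)
params = jnp.array([0.5, 0.0])
for _ in range(500):
    params = params - 0.1 * grad(params)          # gradient descent through the model
print(params)  # converges toward the "unknown" values (1.3, 0.2)
```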
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Neural Frailty Machine: Beyond proportional hazard assumption in neural survival regressions [30.018173329118184]
We present neural frailty machine (NFM), a powerful and flexible neural modeling framework for survival regressions.
Two concrete models are derived under the framework, extending neural proportional hazard models and nonparametric hazard regression models.
We conduct experimental evaluations over six benchmark datasets of different scales, showing that the proposed NFM models outperform state-of-the-art survival models in terms of predictive performance.
arXiv Detail & Related papers (2023-03-18T08:15:15Z)
- EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Prediction of liquid fuel properties using machine learning models with Gaussian processes and probabilistic conditional generative learning [56.67751936864119]
The present work aims to construct cheap-to-compute machine learning (ML) models to act as closure equations for predicting the physical properties of alternative fuels.
Those models can be trained using the database from MD simulations and/or experimental measurements in a data-fusion-fidelity approach.
The results show that ML models can accurately predict fuel properties over a wide range of pressure and temperature conditions.
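A minimal illustration of such a closure model, assuming scikit-learn's Gaussian process regressor and made-up training points; the inputs (pressure, temperature), the target property, and all numbers are placeholders rather than data from the paper.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical MD/experimental database: (pressure [bar], temperature [K]) -> density [kg/m^3]
X = np.array([[1.0, 300.0], [5.0, 350.0], [10.0, 400.0], [20.0, 450.0], [40.0, 500.0]])
y = np.array([700.0, 682.0, 660.0, 641.0, 625.0])

kernel = ConstantKernel(1.0) * RBF(length_scale=[10.0, 60.0])  # anisotropic in (P, T)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Cheap-to-compute closure: point prediction plus uncertainty at new conditions.
mean, std = gp.predict(np.array([[8.0, 375.0]]), return_std=True)
print(mean[0], std[0])
```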
arXiv Detail & Related papers (2021-10-18T14:43:50Z)
- On Energy-Based Models with Overparametrized Shallow Neural Networks [44.74000986284978]
Energy-based models (EBMs) are a powerful framework for generative modeling.
In this work we focus on shallow neural networks.
We show that models trained in the so-called "active" regime provide a statistical advantage over their associated "lazy" or kernel regime.
arXiv Detail & Related papers (2021-04-15T15:34:58Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
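A toy linear rendering of that local update rule: each layer's state predicts the layer below, and both states and weights are adjusted from the resulting prediction errors. The layer sizes, learning rate, and linear predictions are assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=16)                       # observed activity, bottom layer
z1, z2 = rng.normal(size=8), rng.normal(size=4)  # latent states of higher layers
W1 = 0.1 * rng.normal(size=(16, 8))              # layer 1 predicts layer 0
W2 = 0.1 * rng.normal(size=(8, 4))               # layer 2 predicts layer 1

lr = 0.01
for _ in range(200):
    e0 = data - W1 @ z1                          # prediction error at each layer
    e1 = z1 - W2 @ z2
    z1 += lr * (W1.T @ e0 - e1)                  # states settle to reduce local errors
    z2 += lr * (W2.T @ e1)
    W1 += lr * np.outer(e0, z1)                  # purely local, error-driven learning
    W2 += lr * np.outer(e1, z2)
print(np.mean(e0 ** 2))                          # reconstruction error shrinks
```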
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Parsimonious neural networks learn interpretable physical laws [77.34726150561087]
We propose parsimonious neural networks (PNNs) that combine neural networks with evolutionary optimization to find models that balance accuracy with parsimony.
The power and versatility of the approach are demonstrated by developing models for classical mechanics and for predicting the melting temperature of materials from fundamental properties.
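A toy sketch of that accuracy-versus-parsimony search: a simple (mu + lambda) evolutionary loop over small networks whose fitness adds a weight-magnitude penalty to the fit error. The target law, penalty weight, and mutation scheme are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 1))
y = 2.0 * X[:, 0]                           # hidden "law" the search should recover

def predict(w):
    return np.tanh(X @ w["W1"]) @ w["W2"]   # one-hidden-layer tanh network

def fitness(w):
    mse = np.mean((predict(w) - y) ** 2)
    parsimony = 1e-3 * sum(np.abs(v).sum() for v in w.values())
    return mse + parsimony                  # accuracy traded against simplicity

def mutate(w):
    return {k: v + 0.1 * rng.normal(size=v.shape) for k, v in w.items()}

pop = [{"W1": rng.normal(size=(1, 4)), "W2": rng.normal(size=4)} for _ in range(20)]
for _ in range(200):                        # (mu + lambda) selection
    pop = sorted(pop + [mutate(p) for p in pop], key=fitness)[:20]
print(fitness(pop[0]))
```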
arXiv Detail & Related papers (2020-05-08T16:15:47Z)