Computing critical exponents in 3D Ising model via pattern recognition/deep learning approach
- URL: http://arxiv.org/abs/2411.02604v1
- Date: Mon, 04 Nov 2024 20:57:24 GMT
- Title: Computing critical exponents in 3D Ising model via pattern recognition/deep learning approach
- Authors: Timothy A. Burt
- Abstract summary: We apply a supervised Deep Learning approach to train a neural network on specific conformations of spin states.
We achieve train/test accuracies of 0.92 and 0.6875, respectively.
More work remains to quantify the feasibility of computing critical exponents with this approach.
- Score: 1.6317061277457001
- License:
- Abstract: In this study, we computed three critical exponents ($\alpha, \beta, \gamma$) for the 3D Ising model with the Metropolis algorithm, using finite-size scaling analysis on six cube length scales ($L = 20, 30, 40, 60, 80, 90$), and performed a supervised Deep Learning (DL) approach (a 3D Convolutional Neural Network, or CNN) to train a neural network on specific conformations of spin states. We find one can effectively reduce the information in the thermodynamic ensemble-averaged quantities versus reduced temperature $t$ (magnetization per spin $\langle m\rangle(t)$, specific heat per spin $\langle c\rangle(t)$, magnetic susceptibility per spin $\langle\chi\rangle(t)$) to six latent classes. We also demonstrate our CNN on a subset of $L=20$ conformations and achieve train/test accuracies of 0.92 and 0.6875, respectively. However, more work remains to quantify the feasibility of computing critical exponents from the output class labels (binned $m$, $c$, $\chi$) of this approach, and to interpret the results of DL models trained on condensed-matter systems in general.
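For reference, the standard finite-size-scaling relations (textbook forms, not quoted from the paper) extract the exponents from how the peak values of the ensemble averages grow with the cube length: near the critical temperature, $\langle m\rangle \sim L^{-\beta/\nu}$, $\langle c\rangle_{\max} \sim L^{\alpha/\nu}$, and $\langle\chi\rangle_{\max} \sim L^{\gamma/\nu}$. The paper's simulation code is not reproduced here, but a single-spin-flip Metropolis sweep for the 3D Ising model can be sketched in a few lines of NumPy; the temperature and sweep count below are illustrative choices, not the paper's settings.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over an L x L x L lattice of +/-1 spins
    (J = 1, zero field, periodic boundaries)."""
    L = spins.shape[0]
    for _ in range(spins.size):
        x, y, z = rng.integers(0, L, size=3)
        # Sum of the six nearest neighbours, wrapping with modulo.
        nn = (spins[(x + 1) % L, y, z] + spins[(x - 1) % L, y, z] +
              spins[x, (y + 1) % L, z] + spins[x, (y - 1) % L, z] +
              spins[x, y, (z + 1) % L] + spins[x, y, (z - 1) % L])
        dE = 2.0 * spins[x, y, z] * nn  # energy cost of flipping this spin
        # Accept if energy decreases, else with Boltzmann probability.
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[x, y, z] *= -1

rng = np.random.default_rng(0)
L = 20                                # smallest cube scale used in the paper
T_c = 4.5115                          # accepted 3D Ising critical temperature (J/k_B units)
spins = rng.choice(np.array([-1, 1], dtype=np.int8), size=(L, L, L))
for _ in range(200):                  # illustrative equilibration length
    metropolis_sweep(spins, 1.0 / T_c, rng)
print(f"|m| per spin near T_c: {abs(spins.mean()):.3f}")
```

Equilibrated snapshots like `spins` are the kind of spin-state conformations the paper feeds to its 3D CNN, with class labels derived from binning $m$, $c$, and $\chi$. The network architecture is not specified in the abstract; a minimal 3D CNN mapping an $L=20$ cube to six class logits could look like the following sketch (PyTorch; all layer sizes are hypothetical, not the paper's).

```python
import torch
import torch.nn as nn

# Hypothetical minimal 3D CNN: one input channel (the spin value), six classes out.
model = nn.Sequential(
    nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),                  # 20^3 -> 10^3
    nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),                  # 10^3 -> 5^3
    nn.Flatten(),
    nn.Linear(16 * 5 * 5 * 5, 6),     # logits over the six latent classes
)
x = torch.randn(4, 1, 20, 20, 20)     # a batch of four spin conformations
print(model(x).shape)                 # torch.Size([4, 6])
```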
Related papers
- Bayesian Inference with Deep Weakly Nonlinear Networks [57.95116787699412]
We show at a physics level of rigor that Bayesian inference with a fully connected neural network is solvable.
We provide techniques to compute the model evidence and posterior to arbitrary order in $1/N$ and at arbitrary temperature.
arXiv Detail & Related papers (2024-05-26T17:08:04Z)
- Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z)
- Effective Minkowski Dimension of Deep Nonparametric Regression: Function Approximation and Statistical Theories [70.90012822736988]
Existing theories on deep nonparametric regression have shown that when the input data lie on a low-dimensional manifold, deep neural networks can adapt to intrinsic data structures.
This paper introduces a relaxed assumption that input data are concentrated around a subset of $\mathbb{R}^d$ denoted by $\mathcal{S}$, and that the intrinsic dimension of $\mathcal{S}$ can be characterized by a new complexity notion, the effective Minkowski dimension.
arXiv Detail & Related papers (2023-06-26T17:13:31Z)
- 3D Molecular Geometry Analysis with 2D Graphs [79.47097907673877]
Ground-state 3D geometries of molecules are essential for many molecular analysis tasks.
Modern quantum mechanical methods can compute accurate 3D geometries but are computationally prohibitive.
We propose a novel deep learning framework to predict 3D geometries from molecular graphs.
arXiv Detail & Related papers (2023-05-01T19:00:46Z)
- Locality defeats the curse of dimensionality in convolutional teacher-student scenarios [69.2027612631023]
We show that locality is key in determining the learning curve exponent $\beta$.
We conclude by proving, using a natural assumption, that performing kernel regression with a ridge that decreases with the size of the training set leads to similar learning curve exponents to those we obtain in the ridgeless case.
arXiv Detail & Related papers (2021-06-16T08:27:31Z)
- Fundamental tradeoffs between memorization and robustness in random features and neural tangent regimes [15.76663241036412]
We prove, for a large class of activation functions, that if the model memorizes even a fraction of the training data, then its Sobolev seminorm is lower-bounded.
Experiments reveal, for the first time, a multiple-descent phenomenon in the robustness of the min-norm interpolator.
arXiv Detail & Related papers (2021-06-04T17:52:50Z)
- Entanglement scaling for $\lambda\phi^4_2$ [0.0]
We show that the order parameter $\phi$, the correlation length $\xi$, and quantities like $\phi^3$ and the entanglement entropy exhibit useful double scaling properties.
We find the value $\alpha_c = 11.09698(31)$ for the critical point, improving on previous results.
arXiv Detail & Related papers (2021-04-21T14:43:12Z)
- Deep Polynomial Neural Networks [77.70761658507507]
$\Pi$-Nets are a new class of function approximators based on polynomial expansions.
$\Pi$-Nets produce state-of-the-art results in three challenging tasks: image generation, face verification, and 3D mesh representation learning.
arXiv Detail & Related papers (2020-06-20T16:23:32Z)
- Probabilistic orientation estimation with matrix Fisher distributions [0.0]
This paper focuses on estimating probability distributions over the set of 3D rotations using deep neural networks.
Training models to regress to the set of rotations is inherently difficult due to differences in topology.
We overcome this issue by using a neural network to output the parameters for a matrix Fisher distribution.
arXiv Detail & Related papers (2020-06-17T09:28:19Z)
- A Neural Scaling Law from the Dimension of the Data Manifold [8.656787568717252]
When data is plentiful, the loss achieved by well-trained neural networks scales as a power law $L \propto N^{-\alpha}$ in the number of network parameters $N$.
The scaling law can be explained if neural models are effectively just performing regression on a data manifold of intrinsic dimension $d$.
This simple theory predicts the scaling exponent $\alpha \approx 4/d$ for both cross-entropy and mean-squared-error losses (see the fitting sketch after this list).
arXiv Detail & Related papers (2020-04-22T19:16:06Z)
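As a quick illustration of how such a scaling exponent is measured (a sketch on synthetic data, not results from the cited paper), one fits a line to log loss versus log parameter count; under the claim $\alpha \approx 4/d$, the fitted slope also implies an intrinsic dimension.

```python
import numpy as np

# Synthetic (parameter count, loss) pairs obeying L = c * N^{-alpha};
# in practice these would come from training runs at several model sizes.
N = np.array([1e5, 3e5, 1e6, 3e6, 1e7])
loss = 2.0 * N ** (-0.25)            # constructed with alpha = 0.25

# log L = log c - alpha * log N, so a linear fit in log-log space
# recovers the exponent from the slope.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
alpha = -slope
print(f"fitted alpha = {alpha:.3f}")                             # 0.250 by construction
print(f"implied intrinsic dimension 4/alpha = {4 / alpha:.1f}")  # 16.0
```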
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.