Rényi entanglement entropy of spin chain with Generative Neural Networks
- URL: http://arxiv.org/abs/2406.06193v1
- Date: Mon, 10 Jun 2024 11:44:54 GMT
- Title: Rényi entanglement entropy of spin chain with Generative Neural Networks
- Authors: Piotr Białas, Piotr Korcyl, Tomasz Stebel, Dawid Zapolski
- Abstract summary: We describe a method to estimate Rényi entanglement entropy of a spin system.
It is based on the replica trick and generative neural networks with explicit probability estimation.
We demonstrate our method on a one-dimensional quantum Ising spin chain.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe a method to estimate Rényi entanglement entropy of a spin system, which is based on the replica trick and generative neural networks with explicit probability estimation. It can be extended to any spin system or lattice field theory. We demonstrate our method on a one-dimensional quantum Ising spin chain. As the generative model, we use a hierarchy of autoregressive networks, allowing us to simulate up to 32 spins. We calculate the second Rényi entropy and its derivative and cross-check our results with the numerical evaluation of entropy and results available in the literature.
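The core of the method can be sketched in a few lines: an autoregressive model with explicit probabilities turns each partition function into an importance-sampled average, and the replica trick reduces S_2 to a ratio of partition functions. Below is a minimal, hedged sketch of that estimator; `sample`, `log_q`, and `energy` are placeholders for a trained network and the lattice energy of the (replicated) geometry, not the paper's actual code.

```python
import numpy as np

def estimate_log_z(sample, log_q, energy, beta, n=100_000):
    """Neural importance sampling: ln Z = ln E_{s~q}[exp(-beta*E(s)) / q(s)].

    sample(n) -> n configurations drawn from the autoregressive model q,
    log_q(s)  -> their exact log-probabilities (explicit in such models),
    energy(s) -> classical energy of each configuration.
    """
    s = sample(n)
    log_w = -beta * energy(s) - log_q(s)   # log importance weights
    m = log_w.max()                        # log-mean-exp for stability
    return m + np.log(np.mean(np.exp(log_w - m)))

# Replica trick: S_2 = -ln(Z_rep / Z^2), where Z_rep is the partition
# function of two copies glued along subsystem A (models and energies
# for both geometries are assumed to be trained/defined elsewhere):
# s2 = -(estimate_log_z(sample_rep, log_q_rep, energy_rep, beta)
#        - 2 * estimate_log_z(sample_one, log_q_one, energy_one, beta))
```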
Related papers
- Hierarchical autoregressive neural networks in three-dimensional statistical system [0.0]
Autoregressive Neural Networks (ANNs) have recently been proposed as a mechanism to improve the efficiency of Monte Carlo algorithms for several spin systems.
In this paper, we describe a generalization of the hierarchical algorithm to three spatial dimensions and study its performance on the example of the Ising model.
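As a reminder of the mechanism these hierarchical samplers build on, here is a minimal autoregressive sampler for spin configurations; the conditional `cond_prob` stands in for a neural network, and the toy bias rule is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_autoregressive(cond_prob, n_spins, n_samples):
    """Draw spins s_i in {-1,+1} one site at a time and accumulate the
    exact joint log-probability -- the property that makes autoregressive
    networks useful inside Monte Carlo algorithms."""
    s = np.zeros((n_samples, n_spins))
    log_p = np.zeros(n_samples)
    for i in range(n_spins):
        p_up = cond_prob(s[:, :i])                   # p(s_i=+1 | s_<i)
        up = rng.random(n_samples) < p_up
        s[:, i] = np.where(up, 1.0, -1.0)
        log_p += np.where(up, np.log(p_up), np.log1p(-p_up))
    return s, log_p

# Toy conditional: a mild ferromagnetic bias toward the previous spin.
def toy(prev):
    return 0.5 + (0.2 * prev[:, -1] if prev.shape[1] else np.zeros(len(prev)))

configs, log_probs = sample_autoregressive(toy, n_spins=16, n_samples=1000)
```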
arXiv Detail & Related papers (2025-03-11T16:51:01Z)
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- General Graph Random Features [42.75616308187867]
We propose a novel random walk-based algorithm for unbiased estimation of arbitrary functions of a weighted adjacency matrix.
Our algorithm enjoys subquadratic time complexity with respect to the number of nodes, overcoming the notoriously prohibitive cubic scaling of exact graph kernel evaluation.
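To make the flavor of such walk-based estimators concrete, here is a hedged toy version for one matrix function, the k-th power of a weighted adjacency matrix; it uses plain uniform proposals with importance weights and is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_power_entry(A, i, j, k, n_walks=200_000):
    """Unbiased random-walk estimate of (A^k)[i, j]: each step picks the
    next node uniformly and corrects by the weight n * A[u, v], so the
    weighted walks average to the sum over length-k paths from i to j."""
    n = A.shape[0]
    total = 0.0
    for _ in range(n_walks):
        u, w = i, 1.0
        for _ in range(k):
            v = rng.integers(n)
            w *= n * A[u, v]
            u = v
        if u == j:
            total += w
    return total / n_walks

A = rng.random((5, 5))
print(estimate_power_entry(A, 0, 3, 3), np.linalg.matrix_power(A, 3)[0, 3])
```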
arXiv Detail & Related papers (2023-10-07T15:47:31Z)
- The exact evaluation of hexagonal spin-networks and topological quantum neural networks [0.5919433278490629]
We introduce an algorithm for the evaluation of the physical scalar product between spin-networks.
We investigate the behavior of the evaluations on certain classes of spin-networks with the classical and quantum recoupling.
arXiv Detail & Related papers (2023-10-05T16:06:21Z)
- Mutual information of spin systems from autoregressive neural networks [0.018416014644193065]
We describe a new direct method to estimate bipartite mutual information of a classical spin system based on Monte Carlo sampling.
We demonstrate it on the Ising model for four partitionings, including a multiply-connected even-odd division.
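The direct estimator is compact when the models expose exact probabilities. A minimal sketch, with `sample_joint` and the `log_q_*` callables standing in for autoregressive networks trained on the full system and on the two marginals (illustrative names, not the paper's code):

```python
import numpy as np

def mutual_information(sample_joint, log_q_joint, log_q_a, log_q_b, n=100_000):
    """Monte Carlo estimate of I(A:B) = E[ln p(a,b) - ln p_A(a) - ln p_B(b)],
    with the expectation taken over samples from the joint model."""
    a, b = sample_joint(n)
    return np.mean(log_q_joint(a, b) - log_q_a(a) - log_q_b(b))
```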
arXiv Detail & Related papers (2023-04-26T09:51:55Z)
- The R-mAtrIx Net [0.0]
We provide a novel Neural Network architecture that can output R-matrix for a given quantum integrable spin chain.
We also explore the space of Hamiltonians around already learned models and reconstruct the family of integrable spin chains which they belong to.
arXiv Detail & Related papers (2023-04-14T16:50:42Z)
- Gradient Descent in Neural Networks as Sequential Learning in RKBS [63.011641517977644]
We construct an exact power-series representation of the neural network in a finite neighborhood of the initial weights.
We prove that, regardless of width, the training sequence produced by gradient descent can be exactly replicated by regularized sequential learning.
arXiv Detail & Related papers (2023-02-01T03:18:07Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
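The paper's embedded-network construction is more elaborate, but the basic box over-approximation step can be illustrated on a single explicit layer: splitting the weight matrix into positive and negative parts propagates an $\ell_\infty$ box soundly.

```python
import numpy as np

def interval_affine_relu(W, b, lo, hi):
    """Propagate the box {x : lo <= x <= hi} through x -> relu(W @ x + b).

    Positive weights map lower bounds to lower bounds; negative weights
    swap them. The returned box is guaranteed to contain the true
    reachable set (it may be larger -- an over-approximation)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
```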
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence of the NTKs between a fully-connected neural network and its randomly pruned version.
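For readers who have not met the NTK, the empirical kernel is just an inner product of parameter gradients; a hand-written example for a one-hidden-layer ReLU network (pruning a weight simply deletes its coordinate from both gradient vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

def ntk_entry(x1, x2, W, v):
    """Empirical NTK K(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>
    for f(x) = v . relu(W @ x), with the gradients written out by hand."""
    def grads(x):
        h = W @ x
        act = (h > 0).astype(float)
        g_v = np.maximum(h, 0.0)            # df/dv_j = relu(h_j)
        g_W = np.outer(v * act, x)          # df/dW_ji = v_j 1{h_j>0} x_i
        return np.concatenate([g_v, g_W.ravel()])
    return grads(x1) @ grads(x2)

d, m = 8, 256
W = rng.normal(size=(m, d)) / np.sqrt(d)
v = rng.normal(size=m) / np.sqrt(m)
print(ntk_entry(rng.normal(size=d), rng.normal(size=d), W, v))
```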
arXiv Detail & Related papers (2022-03-27T15:22:19Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can render two well-separated classes of data linearly separable with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
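A toy numerical version of the statement, on data that is deliberately easy: embed two classes with random ReLU features and fit a linear classifier on top. (The construction below is a generic illustration, not the paper's proof technique.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated classes in 5 dimensions.
X = np.vstack([rng.normal(-2.0, 1.0, (100, 5)), rng.normal(2.0, 1.0, (100, 5))])
y = np.hstack([-np.ones(100), np.ones(100)])

# Random two-layer ReLU embedding: Gaussian weights, uniform biases.
m = 2000
W = rng.normal(size=(5, m))
b = rng.uniform(-1.0, 1.0, m)
Phi = np.maximum(X @ W + b, 0.0)

# A linear classifier on the random features separates the embedded data,
# the event the result above guarantees with high probability.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("training accuracy:", np.mean(np.sign(Phi @ w) == y))
```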
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Machine Learning S-Wave Scattering Phase Shifts Bypassing the Radial Schrödinger Equation [77.34726150561087]
We present a proof-of-concept machine learning model built on a convolutional neural network capable of yielding accurate s-wave scattering phase shifts.
We discuss how the Hamiltonian can serve as a guiding principle in the construction of a physically-motivated descriptor.
arXiv Detail & Related papers (2021-06-25T17:25:38Z)
- Large-width functional asymptotics for deep Gaussian neural networks [2.7561479348365734]
We consider fully connected feed-forward deep neural networks where weights and biases are independent and identically distributed according to Gaussian distributions.
Our results contribute to recent theoretical studies on the interplay between infinitely wide deep neural networks and Gaussian processes.
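A quick numerical illustration of the large-width picture (a generic experiment, not the paper's construction): sample scalar outputs of random tanh networks and watch a non-Gaussianity measure, the excess kurtosis, drift toward zero as the width grows.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_net_outputs(x, width, depth=2, n_nets=4000):
    """Scalar outputs of fully connected tanh networks whose weights and
    biases are iid Gaussian (weights scaled by 1/sqrt(fan-in))."""
    outs = np.empty(n_nets)
    for k in range(n_nets):
        h = x
        for _ in range(depth):
            Wl = rng.normal(size=(len(h), width)) / np.sqrt(len(h))
            h = np.tanh(h @ Wl + rng.normal(size=width))
        outs[k] = h @ rng.normal(size=width) / np.sqrt(width)
    return outs

for n in (4, 32, 256):
    o = random_net_outputs(np.ones(3), n)
    print(n, np.mean(o**4) / np.mean(o**2) ** 2 - 3.0)  # excess kurtosis
```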
arXiv Detail & Related papers (2021-02-20T10:14:37Z)
- Calculating Rényi Entropies with Neural Autoregressive Quantum States [0.0]
Entanglement entropy is an essential metric for characterizing quantum many-body systems.
We estimate Rényi entropies of autoregressive neural quantum states with up to N=256 spins using quantum Monte Carlo methods.
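The standard two-replica ("swap") estimator behind such calculations fits in a few lines; `psi` stands in for the neural quantum state's amplitude function (a generic sketch for real wavefunctions, not the paper's implementation):

```python
import numpy as np

def renyi2_swap(batch1, batch2, psi, n_a):
    """Swap-trick estimate of S_2 = -ln Tr(rho_A^2) for subsystem A =
    the first n_a sites, using two independent batches drawn from |psi|^2:

        Tr(rho_A^2) = E[ psi(a2,b1) psi(a1,b2) / (psi(a1,b1) psi(a2,b2)) ]
    """
    a1, b1 = batch1[:, :n_a], batch1[:, n_a:]
    a2, b2 = batch2[:, :n_a], batch2[:, n_a:]
    ratio = (psi(np.hstack([a2, b1])) * psi(np.hstack([a1, b2]))
             / (psi(batch1) * psi(batch2)))
    return -np.log(np.mean(ratio))
```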
arXiv Detail & Related papers (2020-03-03T06:59:05Z)