Estimation of the reduced density matrix and entanglement entropies using autoregressive networks
- URL: http://arxiv.org/abs/2506.04170v1
- Date: Wed, 04 Jun 2025 17:08:19 GMT
- Title: Estimation of the reduced density matrix and entanglement entropies using autoregressive networks
- Authors: Piotr Białas, Piotr Korcyl, Tomasz Stebel, Dawid Zapolski
- Abstract summary: We present an application of autoregressive neural networks to Monte Carlo simulations of quantum spin chains. We use a hierarchy of neural networks capable of estimating conditional probabilities of consecutive spins to evaluate elements of reduced density matrices.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an application of autoregressive neural networks to Monte Carlo simulations of quantum spin chains using the correspondence with classical two-dimensional spin systems. We use a hierarchy of neural networks capable of estimating conditional probabilities of consecutive spins to evaluate elements of reduced density matrices directly. Using the Ising chain as an example, we calculate the continuum limit of the ground state's von Neumann and Rényi bipartite entanglement entropies of an interval built of up to 5 spins. We demonstrate that our architecture is able to estimate all the needed matrix elements with just a single training for a fixed time discretization and lattice volume. Our method can be applied to other types of spin chains, possibly with defects, as well as to estimating entanglement entropies of thermal states at non-zero temperature.
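The objects the paper estimates with neural networks can be computed exactly for a tiny system. The sketch below (an illustration, not the paper's autoregressive method) diagonalizes a short transverse-field Ising chain, builds the reduced density matrix of an interval, and evaluates the von Neumann and Rényi entanglement entropies; the Hamiltonian convention and couplings are assumptions for illustration.

```python
import numpy as np

def tfim_ground_state(n, J=1.0, h=1.0):
    """Exact ground state of the transverse-field Ising chain
    H = -J sum_i sz_i sz_{i+1} - h sum_i sx_i (open boundaries)."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    I = np.eye(2)

    def op(site_ops):
        # Tensor product of single-site operators, identity elsewhere.
        m = np.array([[1.0]])
        for k in range(n):
            m = np.kron(m, site_ops.get(k, I))
        return m

    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * op({i: sz, i + 1: sz})
    for i in range(n):
        H -= h * op({i: sx})
    w, v = np.linalg.eigh(H)
    return v[:, 0]  # ground-state vector (real)

def entropies(psi, n, na, alpha=2):
    """Reduced density matrix of the first `na` spins and the
    von Neumann / Renyi-alpha entanglement entropies."""
    M = psi.reshape(2**na, -1)        # bipartition A | B
    rho_a = M @ M.T                   # Tr_B |psi><psi| for real psi
    lam = np.linalg.eigvalsh(rho_a)
    lam = lam[lam > 1e-12]            # drop numerical zeros
    s_vn = -np.sum(lam * np.log(lam))
    s_renyi = np.log(np.sum(lam**alpha)) / (1 - alpha)
    return s_vn, s_renyi

psi = tfim_ground_state(8, J=1.0, h=1.0)  # h = J: critical point
print(entropies(psi, 8, 4))
```

In the paper, the matrix elements of `rho_a` are instead estimated by Monte Carlo with autoregressive networks providing the conditional spin probabilities, which scales far beyond exact diagonalization.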
Related papers
- Leveraging recurrence in neural network wavefunctions for large-scale simulations of Heisenberg antiferromagnets: the square lattice [1.9681634372790209]
Machine-learning-based variational Monte Carlo simulations are a promising approach for targeting quantum many-body ground states. We employ recurrent neural networks (RNNs) as variational ansätze, and leverage their recurrent nature to simulate the ground states of progressively larger systems. We show that we are able to systematically improve the accuracy of the results from our simulations by increasing the training time.
arXiv Detail & Related papers (2025-02-24T13:35:23Z) - Low-Temperature Gibbs States with Tensor Networks [0.0]
We introduce a tensor network method for approximating thermal equilibrium states of quantum many-body systems at low temperatures. We demonstrate our approach within a tree tensor network ansatz, although it can be extended to other tensor networks.
arXiv Detail & Related papers (2025-01-14T18:29:20Z) - Learning Generalized Statistical Mechanics with Matrix Product States [41.94295877935867]
We introduce a variational algorithm based on Matrix Product States that is trained by minimizing a generalized free energy defined using Tsallis entropy instead of the standard Gibbs entropy.
As a result, our model can generate the probability distributions associated with generalized statistical mechanics.
arXiv Detail & Related papers (2024-09-12T18:30:45Z) - Fourier Neural Operators for Learning Dynamics in Quantum Spin Systems [77.88054335119074]
We use FNOs to model the evolution of random quantum spin systems.
We apply FNOs to a compact set of Hamiltonian observables instead of the entire $2^n$-dimensional quantum wavefunction.
arXiv Detail & Related papers (2024-09-05T07:18:09Z) - Genus expansion for non-linear random matrix ensembles with applications to neural networks [3.801509221714223]
We present a unified approach to studying certain non-linear random matrix ensembles and associated random neural networks. We use a novel series expansion for neural networks which generalizes Faà di Bruno's formula to an arbitrary number of compositions. As an application, we prove several results about neural networks with random weights.
arXiv Detail & Related papers (2024-07-11T12:58:07Z) - Rényi entanglement entropy of spin chain with Generative Neural Networks [0.0]
We describe a method to estimate the Rényi entanglement entropy of a spin system.
It is based on the replica trick and generative neural networks with explicit probability estimation.
We demonstrate our method on a one-dimensional quantum Ising spin chain.
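The replica trick underlying this approach can be illustrated with the standard two-replica "swap" estimator of the purity $\mathrm{Tr}\,\rho_A^2$ (and hence the Rényi-2 entropy $S_2 = -\log \mathrm{Tr}\,\rho_A^2$). The sketch below is a generic illustration on a toy random state sampled exactly from $|\psi|^2$; in the actual method, a trained generative network with explicit probability estimation plays the role of the sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy real pure state on na + nb spins; the generative-network method
# replaces this exact sampler with a learned, normalized model.
na, nb = 3, 3
psi = rng.normal(size=(2**na, 2**nb))
psi /= np.linalg.norm(psi)

p = (psi**2).ravel()  # exact Born probabilities over basis states

def swap_estimator(n_samples=200_000):
    """Monte Carlo swap estimator of Tr rho_A^2 from two independent
    replicas: swap the A-parts of the two sampled configurations."""
    idx1 = rng.choice(p.size, size=n_samples, p=p)
    idx2 = rng.choice(p.size, size=n_samples, p=p)
    a1, b1 = np.divmod(idx1, 2**nb)
    a2, b2 = np.divmod(idx2, 2**nb)
    # Unbiased estimator: E[ psi(a1,b2) psi(a2,b1) / (psi(a1,b1) psi(a2,b2)) ]
    ratio = (psi[a1, b2] * psi[a2, b1]) / (psi[a1, b1] * psi[a2, b2])
    return ratio.mean()

purity_mc = swap_estimator()
rho_a = psi @ psi.T                       # exact reduced density matrix
purity_exact = np.trace(rho_a @ rho_a)    # exact Tr rho_A^2 for comparison
s2_mc = -np.log(purity_mc)                # Renyi-2 entropy estimate
```

The estimator is exact in expectation; the explicit probabilities of the generative model are what make the importance weights computable without knowing the normalization of the sampled ensemble.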
arXiv Detail & Related papers (2024-06-10T11:44:54Z) - Interacting chiral fermions on the lattice with matrix product operator norms [37.69303106863453]
We develop a Hamiltonian formalism for simulating interacting chiral fermions on the lattice.
The fermion doubling problem is circumvented by constructing a Fock space endowed with a semi-definite norm.
We demonstrate that the scaling limit of the free model recovers the chiral fermion field.
arXiv Detail & Related papers (2024-05-16T17:46:12Z) - Neutron-nucleus dynamics simulations for quantum computers [49.369935809497214]
We develop a novel quantum algorithm for neutron-nucleus simulations with general potentials.
It provides acceptable bound-state energies even in the presence of noise, through the noise-resilient training method.
We introduce a new commutativity scheme called distance-grouped commutativity (DGC) and compare its performance with the well-known qubit-commutativity scheme.
arXiv Detail & Related papers (2024-02-22T16:33:48Z) - Robust Extraction of Thermal Observables from State Sampling and Real-Time Dynamics on Quantum Computers [49.1574468325115]
We introduce a technique that imposes constraints on the density of states, most notably its non-negativity, and show that this way, we can reliably extract Boltzmann weights from noisy time series.
Our work enables the implementation of the time-series algorithm on present-day quantum computers to study finite temperature properties of many-body quantum systems.
arXiv Detail & Related papers (2023-05-30T18:00:05Z) - Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
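The idea of an $\ell_\infty$-norm box over-approximation can be sketched with interval bound propagation through a single layer (a generic technique, not the paper's embedded-network construction): given elementwise input bounds, split the weights by sign to get sound output bounds.

```python
import numpy as np

def ibp_layer(W, b, lo, hi):
    """Propagate an axis-aligned box through an affine layer plus ReLU:
    for all x with lo <= x <= hi (elementwise), the output relu(W @ x + b)
    is guaranteed to lie in the returned box."""
    W_pos = np.clip(W, 0, None)   # positive part of the weights
    W_neg = np.clip(W, None, 0)   # negative part of the weights
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    # ReLU is monotone, so applying it to the bounds stays sound.
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))
b = rng.normal(size=4)
x = np.array([0.5, -0.2, 0.1])
eps = 0.1  # l-infinity perturbation radius around x
lo, hi = ibp_layer(W, b, x - eps, x + eps)
```

Composing such layers gives a (generally loose) box containing every output reachable from the perturbed input set, which is the shape of guarantee the paper's verification framework provides with tighter, contraction-based bounds.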
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Regularized scheme of time evolution tensor network algorithms [0.0]
Regularized factorization is proposed to simulate time evolution for quantum lattice systems.
The resulting compact structure of the propagator indicates a high-order Baker-Campbell-Hausdorff series.
arXiv Detail & Related papers (2022-08-06T03:38:37Z) - Quantitative Propagation of Chaos for SGD in Wide Neural Networks [39.35545193410871]
In this paper, we investigate the limiting behavior of a continuous-time counterpart of Stochastic Gradient Descent (SGD).
We show 'propagation of chaos' for the particle system defined by this continuous-time dynamics under different scenarios.
We identify two scenarios under which different mean-field limits are obtained, one of them corresponding to an implicitly regularized version of the minimization problem at hand.
arXiv Detail & Related papers (2020-07-13T12:55:21Z) - Solving frustrated Ising models using tensor networks [0.0]
We develop a framework to study frustrated Ising models in terms of infinite tensor networks.
We show that optimizing the choice of clusters, including the weight on shared bonds, is crucial for the contractibility of the tensor networks.
We illustrate the power of the method by computing the residual entropy of a frustrated Ising spin system on the kagome lattice with next-next-nearest neighbour interactions.
arXiv Detail & Related papers (2020-06-25T12:39:42Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.