Variational Neural and Tensor Network Approximations of Thermal States
- URL: http://arxiv.org/abs/2401.14243v2
- Date: Tue, 28 Jan 2025 15:58:35 GMT
- Title: Variational Neural and Tensor Network Approximations of Thermal States
- Authors: Sirui Lu, Giacomo Giudice, J. Ignacio Cirac
- Abstract summary: We introduce a variational Monte Carlo algorithm for approximating finite-temperature quantum many-body systems.
We employ a variety of trial states -- both tensor networks and neural networks -- as variational Ansätze for our numerical optimization.
- Abstract: We introduce a variational Monte Carlo algorithm for approximating finite-temperature quantum many-body systems, based on the minimization of a modified free energy. This approach directly approximates the state at a fixed temperature, allowing for systematic improvement of the ansatz expressiveness without accumulating errors from iterative imaginary time evolution. We employ a variety of trial states -- both tensor networks and neural networks -- as variational Ansätze for our numerical optimization. We benchmark and compare different constructions in the above classes, both for one- and two-dimensional problems, with systems made of up to $N=100$ spins. Our results demonstrate that while restricted Boltzmann machines show limitations, string bond tensor network states exhibit systematic improvements with increasing bond dimensions and the number of strings.
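The core idea admits a compact illustration: minimize a free-energy functional at fixed temperature $T$ by direct optimization of a parametrized state, rather than by imaginary time evolution. The sketch below is a toy stand-in, not the paper's method: it uses the Rényi-2 free energy $F_2 = \mathrm{Tr}[\rho H] - T S_2(\rho)$ as an illustrative tractable functional, a dense purification in place of a tensor or neural network ansatz, and finite-difference gradient descent in place of Monte Carlo sampling.

```python
# Toy sketch: minimize a modified free energy at fixed temperature T.
# Illustrative assumptions: Renyi-2 functional F2 = Tr[rho H] - T * S2(rho),
# a dense purification ansatz, and finite-difference gradients.
import numpy as np

# Transverse-field Ising Hamiltonian for N = 2 spins: -ZZ - h (X1 + X2)
X = np.array([[0, 1], [1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)
I2 = np.eye(2)
H = -np.kron(Z, Z) - 1.0 * (np.kron(X, I2) + np.kron(I2, X))

T = 1.0  # temperature

def free_energy(theta):
    # Purification |psi(theta)> on system (4 dims) x ancilla (4 dims);
    # tracing out the ancilla yields the mixed state rho.
    psi = theta / np.linalg.norm(theta)
    rho = psi.reshape(4, 4) @ psi.reshape(4, 4).T
    energy = np.trace(rho @ H)
    s2 = -np.log(np.trace(rho @ rho))  # Renyi-2 entropy
    return energy - T * s2

# Plain finite-difference gradient descent (stand-in for VMC sampling).
rng = np.random.default_rng(0)
theta = rng.normal(size=16)
eps, lr = 1e-5, 0.1
for step in range(2000):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta); e[i] = eps
        grad[i] = (free_energy(theta + e) - free_energy(theta - e)) / (2 * eps)
    theta -= lr * grad

print("variational F2:", free_energy(theta))
# Reference: F2 evaluated on the exact Gibbs state (the F2 minimizer only
# approximates the Gibbs state, so the values are close but not identical).
w, V = np.linalg.eigh(H)
p = np.exp(-w / T); p /= p.sum()
rho_g = (V * p) @ V.T
print("Gibbs-state F2:", np.trace(rho_g @ H) - T * (-np.log(np.trace(rho_g @ rho_g))))
```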
Related papers
- Low-Temperature Gibbs States with Tensor Networks
We introduce a tensor network method for approximating thermal equilibrium states of quantum many-body systems at low temperatures.
We demonstrate our approach within a tree tensor network ansatz, although it can be extended to other tensor networks.
arXiv Detail & Related papers (2025-01-14T18:29:20Z)
- Optimizing Temperature Distributions for Training Neural Quantum States using Parallel Tempering
We show that temperature optimization can significantly increase the success rate of variational algorithms.
We demonstrate this using two different neural networks, a restricted Boltzmann machine and a feedforward network.
arXiv Detail & Related papers (2024-10-30T13:48:35Z)
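A minimal sketch of the parallel tempering loop referenced in this entry, on a classical Ising chain: each replica runs Metropolis updates at its own temperature, and neighbouring replicas swap configurations with the standard exchange probability. The geometric temperature ladder, chain length, and sweep counts below are illustrative assumptions; the paper's contribution is precisely to optimize the temperature distribution rather than fix it.

```python
# Sketch of parallel tempering (replica exchange) on a periodic Ising chain.
import numpy as np

rng = np.random.default_rng(1)
L, n_replicas = 32, 8
betas = np.geomspace(0.1, 2.0, n_replicas)   # fixed inverse-temperature ladder
spins = rng.choice([-1, 1], size=(n_replicas, L))

def energy(s):
    return -np.sum(s * np.roll(s, 1))        # H = -sum_i s_i s_{i+1}, periodic

for sweep in range(500):
    # Metropolis single-spin flips within each replica
    for r in range(n_replicas):
        for _ in range(L):
            i = rng.integers(L)
            dE = 2 * spins[r, i] * (spins[r, (i - 1) % L] + spins[r, (i + 1) % L])
            if dE <= 0 or rng.random() < np.exp(-betas[r] * dE):
                spins[r, i] *= -1
    # Replica-exchange move between neighbouring temperatures:
    # accept with min(1, exp((beta_r - beta_{r+1}) * (E_r - E_{r+1})))
    for r in range(n_replicas - 1):
        dEr = energy(spins[r]) - energy(spins[r + 1])
        if rng.random() < min(1.0, np.exp((betas[r] - betas[r + 1]) * dEr)):
            spins[[r, r + 1]] = spins[[r + 1, r]]

print("energy per site at lowest temperature:", energy(spins[-1]) / L)
```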
- Isometric tensor network representations of two-dimensional thermal states
We use the recently introduced class of isometric tensor network states to represent thermal states of the transverse field Ising model.
We find that this approach offers an alternative way, with low computational complexity, to represent states at finite temperatures.
arXiv Detail & Related papers (2023-02-15T19:00:11Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size
Quantum many-body problems are central to demystifying exotic quantum phenomena such as high-temperature superconductivity.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
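For context on the local-energy bottleneck that VQ-NQS targets, here is a minimal VMC loop for the 1D transverse-field Ising model with a small restricted Boltzmann machine ansatz. The RBM shapes, couplings, and sample counts are illustrative assumptions, not the VQ-NQS construction; note how every off-diagonal term requires a wavefunction ratio, which is where redundancy can be exploited.

```python
# Sketch of VMC local-energy evaluation with an RBM wavefunction (real weights).
import numpy as np

rng = np.random.default_rng(2)
N, M, h = 8, 16, 1.0                      # spins, hidden units, field strength

W = 0.05 * rng.normal(size=(N, M))        # RBM weights
a = 0.05 * rng.normal(size=N)             # visible biases
b = 0.05 * rng.normal(size=M)             # hidden biases

def log_psi(s):
    # log amplitude of an RBM wavefunction, s in {-1, +1}^N
    return a @ s + np.sum(np.log(2 * np.cosh(b + s @ W)))

def local_energy(s):
    # TFIM: H = -sum_i Z_i Z_{i+1} - h sum_i X_i (periodic chain)
    e = -np.sum(s * np.roll(s, 1))                    # diagonal part
    for i in range(N):                                # off-diagonal part
        s_flip = s.copy(); s_flip[i] *= -1
        e += -h * np.exp(log_psi(s_flip) - log_psi(s))
    return e

# Metropolis sampling from |psi|^2, then average the local energy.
s = rng.choice([-1, 1], size=N)
energies = []
for step in range(5000):
    i = rng.integers(N)
    s_new = s.copy(); s_new[i] *= -1
    if rng.random() < np.exp(2 * (log_psi(s_new) - log_psi(s))):
        s = s_new
    if step > 1000:
        energies.append(local_energy(s))
print("estimated <E> =", np.mean(energies))
```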
- Solving the nuclear pairing model with neural network quantum states
We present a variational Monte Carlo method that solves the nuclear many-body problem in the occupation number formalism.
A memory-efficient version of the stochastic reconfiguration algorithm is developed to train the network by minimizing the expectation value of the Hamiltonian.
arXiv Detail & Related papers (2022-11-09T00:18:01Z)
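The plain, memory-hungry stochastic reconfiguration update that the paper makes memory-efficient can be sketched in a few lines; the dense matrices, regularization shift, and real-parameter assumption below are illustrative, not the paper's variant.

```python
# Sketch of a plain stochastic-reconfiguration (SR) step from MC samples.
import numpy as np

def sr_update(O, E_loc, lr=0.05, shift=1e-3):
    """One SR step (real parameters assumed; complex cases need conjugates).

    O     : (n_samples, n_params) log-derivatives d log psi / d theta_k
    E_loc : (n_samples,) local energies
    """
    O_bar = O - O.mean(axis=0)
    E_bar = E_loc - E_loc.mean()
    S = O_bar.T @ O_bar / len(E_loc)      # covariance of log-derivatives
    g = O_bar.T @ E_bar / len(E_loc)      # stochastic energy gradient
    S += shift * np.eye(S.shape[0])       # regularization shift
    return -lr * np.linalg.solve(S, g)    # parameter update d_theta

# Smoke test with random samples (illustrative only).
rng = np.random.default_rng(3)
dtheta = sr_update(rng.normal(size=(500, 10)), rng.normal(size=500))
print(dtheta.shape)  # (10,)
```

Storing the full (n_samples, n_params) matrix O is what dominates memory here; a memory-efficient variant avoids materializing it.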
- Regularized scheme of time evolution tensor network algorithms
A regularized factorization is proposed to simulate time evolution for quantum lattice systems.
The resulting compact structure of the propagator corresponds to a high-order Baker-Campbell-Hausdorff series.
arXiv Detail & Related papers (2022-08-06T03:38:37Z)
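For orientation, the baseline such schemes refine is a Trotter-type factorization of the propagator, whose error terms are organized by the Baker-Campbell-Hausdorff series. The sketch below shows the first-order splitting on a two-spin example; the paper's regularized factorization itself is not reproduced.

```python
# First-order Trotter splitting of exp(-iHt) for a 2-spin TFIM.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

Hz = -np.kron(Z, Z)                           # diagonal part
Hx = -(np.kron(X, I2) + np.kron(I2, X))       # transverse-field part
H = Hz + Hx

t, n = 1.0, 100
dt = t / n
step = expm(-1j * Hz * dt) @ expm(-1j * Hx * dt)   # one Trotter step
U_trotter = np.linalg.matrix_power(step, n)
U_exact = expm(-1j * H * t)

# BCH: leading error per step is (dt^2 / 2) [Hz, Hx], so total error ~ dt.
print("operator-norm error:", np.linalg.norm(U_trotter - U_exact, 2))
```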
- ACDC: Weight Sharing in Atom-Coefficient Decomposed Convolution
We introduce a structural regularization across convolutional kernels in a CNN.
We show that CNNs maintain performance with a dramatic reduction in parameters and computations.
arXiv Detail & Related papers (2020-09-04T20:41:47Z)
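The weight-sharing idea admits a one-einsum sketch: every convolution kernel is a linear combination of a small shared dictionary of spatial atoms. The shapes and dictionary size below are assumptions for illustration; the published ACDC parametrization has additional structure.

```python
# Sketch of atom-coefficient decomposed convolution kernels.
import numpy as np

rng = np.random.default_rng(4)
c_out, c_in, k, n_atoms = 64, 32, 3, 6

atoms = rng.normal(size=(n_atoms, k, k))            # shared spatial atoms
coeffs = rng.normal(size=(c_out, c_in, n_atoms))    # per-kernel coefficients

# Reconstruct the full weight tensor: W[o, i] = sum_a coeffs[o, i, a] * atoms[a]
W = np.einsum("oia,akl->oikl", coeffs, atoms)
print(W.shape)  # (64, 32, 3, 3)

full = c_out * c_in * k * k
shared = n_atoms * k * k + c_out * c_in * n_atoms
print(f"parameters: {full} dense vs {shared} decomposed")
```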
- Continuous-in-Depth Neural Networks
We first show that ResNets fail to be meaningful dynamical systems in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)
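The underlying observation fits in a few lines: a residual block $x + f(x)$ is one explicit Euler step of $\dot{x} = f(x)$, so the depth discretization can be changed after training. The fixed residual function below is a stand-in assumption, not the ContinuousNet architecture.

```python
# A ResNet stack viewed as Euler integration of an ODE in "depth time".
import numpy as np

def f(x):
    return 0.5 * np.tanh(x)            # stand-in for a learned residual block

def forward(x, n_steps, t_final=1.0):
    h = t_final / n_steps
    for _ in range(n_steps):
        x = x + h * f(x)               # residual block == explicit Euler step
    return x

x0 = np.array([1.0, -2.0, 0.3])
for n in (4, 16, 64, 256):
    print(n, forward(x0, n))           # outputs converge as depth is refined
```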
- Solving frustrated Ising models using tensor networks
We develop a framework to study frustrated Ising models in terms of infinite tensor networks.
We show that optimizing the choice of clusters, including the weight on shared bonds, is crucial for the contractibility of the tensor networks.
We illustrate the power of the method by computing the residual entropy of a frustrated Ising spin system on the kagome lattice with next-next-nearest neighbour interactions.
arXiv Detail & Related papers (2020-06-25T12:39:42Z)
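As a toy stand-in for the tensor-network contraction, the target quantity itself, residual entropy from ground-state degeneracy, can be checked by brute force on a small frustrated cluster. The odd-length antiferromagnetic ring below is an illustrative assumption, not the kagome system of the paper; unlike kagome, a ring's residual entropy per spin vanishes as the system grows.

```python
# Residual entropy of a frustrated Ising ring by exact enumeration.
import itertools
import numpy as np

N = 11  # odd length: an odd loop cannot satisfy all antiferromagnetic bonds

def energy(s):
    return sum(s[i] * s[(i + 1) % N] for i in range(N))   # J = +1 (AFM)

energies = [energy(s) for s in itertools.product([-1, 1], repeat=N)]
E0 = min(energies)
g0 = energies.count(E0)                  # ground-state degeneracy (2N here)
print("residual entropy per spin:", np.log(g0) / N)
```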
- Multipole Graph Neural Operator for Parametric Partial Differential Equations
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure that neural networks can exploit.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm that our multi-level graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
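The linear-complexity claim follows a fast-multipole-style pattern: short-range interactions are handled on the fine graph and long-range interactions through coarser levels. The restrict/interact/prolong sketch below is a schematic assumption on a periodic 1D grid, not the MGNO operator.

```python
# Schematic two-level message passing: local pass + coarse-level mixing.
import numpy as np

rng = np.random.default_rng(5)
n_fine, n_coarse, d = 64, 8, 4
x = rng.normal(size=(n_fine, d))                  # node features on a 1D grid

def local_pass(x, radius=2):
    # Fine level: average over a local neighbourhood (linear in n_fine).
    out = np.zeros_like(x)
    for r in range(-radius, radius + 1):
        out += np.roll(x, r, axis=0)
    return out / (2 * radius + 1)

def multilevel_pass(x):
    x = local_pass(x)
    blocks = x.reshape(n_coarse, n_fine // n_coarse, d).mean(axis=1)  # restrict
    mixed = blocks + blocks.mean(axis=0, keepdims=True)               # interact
    x = x + np.repeat(mixed, n_fine // n_coarse, axis=0)              # prolong
    return x

print(multilevel_pass(x).shape)  # (64, 4): all ranges touched with O(n) work
```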
- Liquid Time-constant Networks
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
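A minimal liquid time-constant unit can be sketched as a linear first-order system whose decay rate is modulated by a learned gate, integrated with a fused semi-implicit step; the shapes and gate below are illustrative assumptions rather than the published cell.

```python
# Sketch of a liquid time-constant (LTC) unit with a fused semi-implicit step.
import numpy as np

rng = np.random.default_rng(6)
n_units, n_in = 4, 2
tau = 1.0 + rng.random(n_units)           # base time constants
A = rng.normal(size=n_units)              # bias attractors
Wx = rng.normal(size=(n_units, n_units))
Wu = rng.normal(size=(n_units, n_in))

def gate(x, u):
    # Bounded nonlinearity modulating the effective time constant
    return 1.0 / (1.0 + np.exp(-(Wx @ x + Wu @ u)))

def ltc_step(x, u, dt=0.05):
    g = gate(x, u)
    # dx/dt = -(1/tau + g) * x + g * A, solved with a semi-implicit update:
    return (x + dt * g * A) / (1.0 + dt * (1.0 / tau + g))

x = np.zeros(n_units)
for t in range(200):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    x = ltc_step(x, u)
print("state stays bounded:", x)
```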
This list is automatically generated from the titles and abstracts of the papers on this site.