Variational Neural and Tensor Network Approximations of Thermal States
- URL: http://arxiv.org/abs/2401.14243v2
- Date: Tue, 28 Jan 2025 15:58:35 GMT
- Title: Variational Neural and Tensor Network Approximations of Thermal States
- Authors: Sirui Lu, Giacomo Giudice, J. Ignacio Cirac
- Abstract summary: We introduce a variational Monte Carlo algorithm for approximating finite-temperature quantum many-body systems. We employ a variety of trial states -- both tensor networks and neural networks -- as variational Ansätze for our numerical optimization.
- Score: 0.3277163122167433
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a variational Monte Carlo algorithm for approximating finite-temperature quantum many-body systems, based on the minimization of a modified free energy. This approach directly approximates the state at a fixed temperature, allowing for systematic improvement of the ansatz expressiveness without accumulating errors from iterative imaginary time evolution. We employ a variety of trial states -- both tensor networks and neural networks -- as variational Ansätze for our numerical optimization. We benchmark and compare different constructions in the above classes, both for one- and two-dimensional problems, with systems made of up to $N=100$ spins. Our results demonstrate that while restricted Boltzmann machines show limitations, string bond tensor network states exhibit systematic improvements with increasing bond dimension and number of strings.
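For orientation, the sketch below computes the exact quantity such variational thermal-state methods target: the Gibbs free energy $F = \langle H \rangle - TS = -T \log Z$ of a small transverse-field Ising chain. The paper's modified free energy is not reproduced here; the model, system size, and couplings are illustrative assumptions, and exact diagonalization serves only as the small-system baseline a variational ansatz would be benchmarked against.

```python
# A minimal exact-diagonalization baseline (not the paper's algorithm):
# the Gibbs free energy F(T) = -T log Z of a small transverse-field Ising
# chain, the target a variational thermal-state ansatz tries to approach.
import numpy as np

def tfim_hamiltonian(n, j=1.0, g=1.0):
    """Dense H = -J sum_i Z_i Z_{i+1} - g sum_i X_i on an open chain."""
    x = np.array([[0.0, 1.0], [1.0, 0.0]])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    eye = np.eye(2)

    def site_op(op, i):
        mats = [eye] * n
        mats[i] = op
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    h = np.zeros((2 ** n, 2 ** n))
    for i in range(n - 1):
        h -= j * site_op(z, i) @ site_op(z, i + 1)
    for i in range(n):
        h -= g * site_op(x, i)
    return h

def exact_free_energy(h, temperature):
    """F = -T log Z from the spectrum, with an energy shift for stability."""
    beta = 1.0 / temperature
    evals = np.linalg.eigvalsh(h)
    shift = evals.min()
    log_z = np.log(np.exp(-beta * (evals - shift)).sum())
    return shift - temperature * log_z

h = tfim_hamiltonian(n=8)  # 8 spins is trivial to diagonalize exactly
for t in (0.5, 1.0, 2.0):
    print(f"T={t}: F={exact_free_energy(h, t):.6f}")
```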
Related papers
- Low-Temperature Gibbs States with Tensor Networks [0.0]
We introduce a tensor network method for approximating thermal equilibrium states of quantum many-body systems at low temperatures.
We demonstrate our approach within a tree tensor network ansatz, although it can be extended to other tensor networks.
arXiv Detail & Related papers (2025-01-14T18:29:20Z)
- Optimizing Temperature Distributions for Training Neural Quantum States using Parallel Tempering [0.0]
We show that temperature optimization can significantly increase the success rate of variational algorithms.
We demonstrate this using two different neural networks, a restricted Boltzmann machine and a feedforward network; the replica-swap rule that parallel tempering builds on is sketched after this entry.
arXiv Detail & Related papers (2024-10-30T13:48:35Z)
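The parallel-tempering entry above optimizes the temperature ladder used for replica exchange. As a hedged illustration (not the paper's optimization scheme), the sketch below shows the standard swap-acceptance rule between adjacent replicas; the energies and temperature ladder are placeholder assumptions.

```python
# A sketch of the standard parallel-tempering swap step that temperature
# optimization schemes build on; the energies and the temperature ladder
# here are placeholders, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

def attempt_swaps(energies, betas, order):
    """One sweep of adjacent-replica swap attempts.

    A swap between temperature slots i and i+1 is accepted with
    probability min(1, exp((beta_i - beta_{i+1}) * (E_i - E_{i+1}))),
    where E_i is the energy of the replica currently in slot i.
    """
    for i in range(len(betas) - 1):
        a, b = order[i], order[i + 1]
        log_acc = (betas[i] - betas[i + 1]) * (energies[a] - energies[b])
        if np.log(rng.random()) < log_acc:
            order[i], order[i + 1] = b, a
    return order

betas = 1.0 / np.linspace(2.0, 0.5, 6)   # assumed temperature ladder
energies = rng.normal(size=6)            # placeholder replica energies
print(attempt_swaps(energies, betas, list(range(6))))
```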
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework for learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine-learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Neural Network Solutions of Bosonic Quantum Systems in One Dimension [0.0]
We benchmark the methodology by using neural networks to study several different integrable bosonic quantum systems in one dimension.
While testing the scalability of the procedure to systems with many particles, we also introduce symmetric function inputs to the neural network to enforce the exchange symmetry of indistinguishable particles; a generic construction of such inputs is sketched after this entry.
arXiv Detail & Related papers (2023-09-05T16:08:48Z)
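The bosonic-systems entry above feeds symmetric functions of particle coordinates into the network. A generic way to build such permutation-invariant inputs -- not necessarily the paper's exact construction -- is to use power sums of the coordinates, as sketched below.

```python
# A generic permutation-invariant input construction (power sums), one
# common way to enforce bosonic exchange symmetry at the input layer; it
# is not necessarily the exact construction used in the paper above.
import numpy as np

def symmetric_features(coords, n_moments=4):
    """Map particle coordinates (shape [n_particles]) to their first
    power sums, which are invariant under any particle relabeling."""
    return np.array([np.sum(coords ** k) for k in range(1, n_moments + 1)])

x = np.array([0.3, -1.2, 0.7])
permuted = x[[2, 0, 1]]
assert np.allclose(symmetric_features(x), symmetric_features(permuted))
print(symmetric_features(x))
```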
- Absence of barren plateaus and scaling of gradients in the energy optimization of isometric tensor network states [0.0]
We consider energy minimization problems for quantum many-body systems with extensive Hamiltonians and finite-range interactions.
We prove that variational optimization problems for matrix product states, tree tensor networks, and the multiscale entanglement renormalization ansatz are free of barren plateaus.
arXiv Detail & Related papers (2023-03-31T22:49:49Z)
- Isometric tensor network representations of two-dimensional thermal states [0.0]
We use the recently introduced class of isometric tensor network states to represent thermal states of the transverse field Ising model.
We find that this approach offers an alternative way to represent states at finite temperature with low computational complexity.
arXiv Detail & Related papers (2023-02-15T19:00:11Z)
- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductivity.
The combination of neural networks (NN) for representing quantum states, and the Variational Monte Carlo (VMC) algorithm, has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
- Solving the nuclear pairing model with neural network quantum states [58.720142291102135]
We present a variational Monte Carlo method that solves the nuclear many-body problem in the occupation number formalism.
A memory-efficient version of the stochastic reconfiguration algorithm is developed to train the network by minimizing the expectation value of the Hamiltonian; the Monte Carlo energy estimator this relies on is sketched after this entry.
arXiv Detail & Related papers (2022-11-09T00:18:01Z)
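The nuclear-pairing entry above trains a neural quantum state by minimizing $\langle H \rangle$ with variational Monte Carlo. The sketch below shows the standard local-energy estimator underlying such trainings; the trial wavefunction `amplitude` and the Hamiltonian connectivity `h_row` are illustrative assumptions, and the stochastic reconfiguration update itself is not reproduced.

```python
# A sketch of the variational Monte Carlo energy estimator that such
# trainings minimize. `amplitude` (the trial wavefunction) and `h_row`
# (the Hamiltonian's sparse connectivity) are illustrative assumptions.
import numpy as np

def local_energy(x, amplitude, h_row):
    """E_loc(x) = sum_{x'} H_{x,x'} * psi(x') / psi(x)."""
    return sum(h * amplitude(xp) / amplitude(x) for xp, h in h_row(x).items())

def energy_estimate(samples, amplitude, h_row):
    """<H> ~ mean of E_loc over configurations sampled from |psi|^2."""
    return np.mean([local_energy(x, amplitude, h_row) for x in samples])

# Toy check: a single spin with H = -X and the uniform trial state
# psi(0) = psi(1) = 1, whose exact ground-state energy is -1.
amplitude = lambda x: 1.0
h_row = lambda x: {1 - x: -1.0}          # H flips the spin with weight -1
print(energy_estimate([0, 1, 0, 1], amplitude, h_row))  # -> -1.0
```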
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss; a minimal sketch of the PINN residual loss follows this entry.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
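The PINN entry above measures failure through the physics residual loss. Below is a minimal, dependency-free sketch of that loss for the toy ODE $u' = -u$, $u(0) = 1$: a central finite difference stands in for the automatic differentiation a real PINN applies to its network, and the collocation grid is an assumption.

```python
# A minimal sketch of a PINN residual loss for the toy ODE u'(t) = -u(t)
# with u(0) = 1. A central finite difference stands in for the automatic
# differentiation a real PINN would use on its network output.
import numpy as np

def pinn_loss(u, t, eps=1e-5):
    """Mean squared physics residual ||u' + u||^2 on collocation points,
    plus the initial-condition penalty (u(0) - 1)^2."""
    du = (u(t + eps) - u(t - eps)) / (2 * eps)   # stand-in for autodiff
    residual = du + u(t)                          # u' = -u  <=>  u' + u = 0
    return np.mean(residual ** 2) + float(u(np.zeros(1))[0] - 1.0) ** 2

t = np.linspace(0.0, 3.0, 64)                     # collocation points
print(pinn_loss(lambda s: np.exp(-s), t))         # exact solution -> ~0
print(pinn_loss(lambda s: 1.0 - s, t))            # poor candidate -> large
```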
- Regularized scheme of time evolution tensor network algorithms [0.0]
A regularized factorization scheme is proposed to simulate time evolution for quantum lattice systems.
The resulting compact structure of the propagator indicates a high-order Baker-Campbell-Hausdorff series.
arXiv Detail & Related papers (2022-08-06T03:38:37Z)
- Efficient Simulation of Dynamics in Two-Dimensional Quantum Spin Systems with Isometric Tensor Networks [0.0]
We investigate the computational power of the recently introduced class of isometric tensor network states (isoTNSs).
We discuss several technical details regarding the implementation of isoTNS-based algorithms and compare different disentanglers.
We compute the dynamical spin structure factor of 2D quantum spin systems for two paradigmatic models.
arXiv Detail & Related papers (2021-12-15T19:00:05Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Differentiable Programming of Isometric Tensor Networks [0.0]
Differentiable programming is a programming paradigm that enables large-scale optimization through automatic calculation of gradients, also known as auto-differentiation.
Here, we extend differentiable programming to tensor networks with isometric constraints, with applications to the multiscale entanglement renormalization ansatz (MERA) and tensor network renormalization (TNR).
We numerically test our methods on the 1D critical quantum Ising spin chain and the 2D classical Ising model.
We calculate the ground-state energy of the 1D quantum model, the internal energy of the classical model, and the scaling dimensions of scaling operators, and find that they all agree with exact results; a common way to maintain the isometric constraint during optimization is sketched after this entry.
arXiv Detail & Related papers (2021-10-08T05:29:41Z)
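The differentiable-programming entry above optimizes tensors subject to isometric constraints. One common way to maintain such a constraint during gradient descent -- which may differ from the paper's exact scheme -- is to retract each updated tensor to the nearest isometry via its polar decomposition, as sketched below.

```python
# One common way to maintain an isometric constraint during gradient
# descent (possibly different from the paper's scheme): take a step,
# then retract to the nearest isometry via the polar decomposition.
import numpy as np

def polar_retract(w):
    """Nearest matrix to w with w^T w = identity (Frobenius norm):
    keep the SVD factors and set all singular values to one."""
    u, _, vh = np.linalg.svd(w, full_matrices=False)
    return u @ vh

rng = np.random.default_rng(0)
w = polar_retract(rng.normal(size=(8, 4)))   # start from an isometry
grad = rng.normal(size=(8, 4))               # placeholder gradient
w = polar_retract(w - 0.1 * grad)            # gradient step + retraction
print(np.allclose(w.T @ w, np.eye(4)))       # stays isometric -> True
```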
- LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose LocalDrop, a new approach to regularizing neural networks based on the local Rademacher complexity.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z)
- ACDC: Weight Sharing in Atom-Coefficient Decomposed Convolution [57.635467829558664]
We introduce a structural regularization across convolutional kernels in a CNN.
We show that CNNs maintain performance with a dramatic reduction in parameters and computation.
arXiv Detail & Related papers (2020-09-04T20:41:47Z)
- Continuous-in-Depth Neural Networks [107.47887213490134]
We first show that ResNets fail to be meaningful dynamical integrators in this richer sense.
We then demonstrate that neural network models can learn to represent continuous dynamical systems.
We introduce ContinuousNet as a continuous-in-depth generalization of ResNet architectures; the Euler-step correspondence this builds on is sketched after this entry.
arXiv Detail & Related papers (2020-08-05T22:54:09Z)
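The continuous-in-depth entry above views a residual block $x \mapsto x + h f(x)$ as one forward-Euler step of $\dot{x} = f(x)$. The sketch below illustrates this correspondence with a fixed toy vector field in place of a trained residual branch: refining the step size while holding the total "depth time" constant barely changes the output.

```python
# An illustration of the continuous-in-depth view: a residual block
# x <- x + h * f(x) is one forward-Euler step of dx/dt = f(x). With
# depth * h held fixed, refining the step size barely changes the
# output. `f` is a fixed toy vector field, not a trained network.
import numpy as np

def f(x):
    return np.tanh(x) - 0.5 * x          # placeholder residual branch

def resnet_forward(x, depth, h):
    for _ in range(depth):
        x = x + h * f(x)                 # one residual block = one Euler step
    return x

x0 = np.array([1.0, -2.0])
print(resnet_forward(x0, depth=8, h=0.25))     # coarse discretization
print(resnet_forward(x0, depth=16, h=0.125))   # finer: nearly identical
```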
- Quantitative Propagation of Chaos for SGD in Wide Neural Networks [39.35545193410871]
In this paper, we investigate the limiting behavior of a continuous-time counterpart of Stochastic Gradient Descent (SGD).
We show 'propagation of chaos' for the particle system defined by this continuous-time dynamics under different scenarios.
We identify two scenarios under which different mean-field limits are obtained, one of them corresponding to an implicitly regularized version of the minimization problem at hand.
arXiv Detail & Related papers (2020-07-13T12:55:21Z)
- Solving frustrated Ising models using tensor networks [0.0]
We develop a framework to study frustrated Ising models in terms of infinite tensor networks.
We show that optimizing the choice of clusters, including the weight on shared bonds, is crucial for the contractibility of the tensor networks.
We illustrate the power of the method by computing the residual entropy of a frustrated Ising spin system on the kagome lattice with next-next-nearest neighbour interactions.
arXiv Detail & Related papers (2020-06-25T12:39:42Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm that our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)