Block belief propagation algorithm for two-dimensional tensor networks
- URL: http://arxiv.org/abs/2301.05844v3
- Date: Thu, 7 Sep 2023 00:25:54 GMT
- Title: Block belief propagation algorithm for two-dimensional tensor networks
- Authors: Chu Guo, Dario Poletti, Itai Arad
- Abstract summary: We propose a block belief propagation algorithm for contracting two-dimensional tensor networks and approximating the ground state of $2D$ systems.
As applications, we use our algorithm to study the $2D$ Heisenberg and transverse Ising models, and show that the accuracy of the method is on par with state-of-the-art results.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Belief propagation is a well-studied algorithm for approximating local
marginals of multivariate probability distributions over complex networks, while
tensor network states are powerful tools for quantum and classical many-body
problems. Building on a recent connection between the belief propagation
algorithm and the problem of tensor network contraction, we propose a block
belief propagation algorithm for contracting two-dimensional tensor networks
and approximating the ground state of $2D$ systems. The advantages of our
method are three-fold: 1) the same algorithm works for both finite and infinite
systems; 2) it allows natural and efficient parallelization; 3) its flexibility
makes it straightforward to handle different unit cells. As applications,
we use our algorithm to study the $2D$ Heisenberg and transverse Ising models,
and show that the accuracy of the method is on par with state-of-the-art
results.
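As a concrete reminder of what belief propagation computes, here is a minimal sketch of sum-product message passing on a chain of binary variables, where BP is exact (an illustrative toy, not the paper's block algorithm for tensor networks):
```python
# Minimal sum-product BP on a chain of binary variables (illustrative toy).
import numpy as np

rng = np.random.default_rng(0)
N = 6                                   # number of binary variables
psi = rng.random((N - 1, 2, 2)) + 0.1   # pairwise factors psi[i](x_i, x_{i+1})

# Forward messages m_f[i](x_i): summary of everything left of site i.
m_f = np.ones((N, 2))
for i in range(1, N):
    m_f[i] = m_f[i - 1] @ psi[i - 1]
    m_f[i] /= m_f[i].sum()              # normalize for numerical stability

# Backward messages m_b[i](x_i): summary of everything right of site i.
m_b = np.ones((N, 2))
for i in range(N - 2, -1, -1):
    m_b[i] = psi[i] @ m_b[i + 1]
    m_b[i] /= m_b[i].sum()

# Belief (local marginal) at each site: product of incoming messages.
belief = m_f * m_b
belief /= belief.sum(axis=1, keepdims=True)

# Brute-force check of the marginal at site 0: exact match on a chain.
Z, p0 = 0.0, np.zeros(2)
for x in np.ndindex(*(2,) * N):
    w = np.prod([psi[i, x[i], x[i + 1]] for i in range(N - 1)])
    Z += w
    p0[x[0]] += w
print(belief[0], p0 / Z)
```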
Related papers
- Limitations of tensor network approaches for optimization and sampling: A comparison against quantum and classical Ising machines [0.0]
We develop an algorithm to reveal the low-energy spectrum of Ising spin-glass systems on a given interaction graph.
Our deterministic approach combines a branch-and-bound search strategy with an approximate calculation of marginals.
We benchmark our approach on random problems defined on Pegasus and Zephyr graphs with up to a few thousand spins.
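As a generic illustration of bound-and-prune search for Ising ground states (not the paper's marginal-based bounds), consider:
```python
# Minimal branch-and-bound for the Ising ground state (illustrative toy).
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
n = 10
J = np.triu(rng.standard_normal((n, n)), 1)    # couplings; E(s) = -sum J_ij s_i s_j

def energy(s):
    s = np.asarray(s, dtype=float)
    return -(s @ J @ s)

best = [np.inf]                                # incumbent energy

def dfs(s):                                    # s: partial assignment of spins
    k = len(s)
    if k == n:
        best[0] = min(best[0], energy(s))
        return
    sk = np.asarray(s, dtype=float)
    decided = -(sk @ J[:k, :k] @ sk)           # decided bonds, exact
    bound = decided - np.abs(J[:, k:]).sum()   # undecided bonds at their best
    if bound >= best[0]:
        return                                 # prune: cannot improve
    for spin in (-1, 1):
        dfs(s + [spin])

dfs([])
exact = min(energy(x) for x in product([-1, 1], repeat=n))
print(best[0], exact)                          # identical ground-state energy
```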
arXiv Detail & Related papers (2024-11-25T14:35:14Z)
- Quick design of feasible tensor networks for constrained combinatorial optimization [1.8775413720750924]
In recent years, tensor networks have been applied to constrained optimization problems for practical applications.
One approach is to construct tensor networks with nilpotent-matrix manipulation.
The proposed method is expected to facilitate the discovery of feasible tensor networks for constrained optimization problems.
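A minimal example of the nilpotent-matrix idea is encoding a one-hot constraint: because $N^2 = 0$, any assignment with two or more 1s contracts to zero (an illustrative toy, not necessarily the paper's construction):
```python
# One-hot feasibility from a nilpotent matrix: N @ N == 0 (illustrative toy).
import numpy as np
from itertools import product

N = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent 2x2 shift matrix
I = np.eye(2)

def network_value(x):                     # MPO-like chain of matrices
    M = I.copy()
    for xi in x:
        M = M @ (N if xi else I)          # product is N**(number of 1s)
    return M[0, 1]                        # equals 1 exactly when sum(x) == 1

for x in product([0, 1], repeat=4):
    assert network_value(x) == (sum(x) == 1)
print("one-hot constraint encoded exactly")
```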
arXiv Detail & Related papers (2024-09-03T08:36:23Z)
- Sublinear scaling in non-Markovian open quantum systems simulations [0.0]
We introduce a numerically exact algorithm to calculate process tensors.
Our approach requires only $\mathcal{O}(n \log n)$ singular value decompositions for environments with infinite memory.
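The repeated primitive in such schemes is a truncated SVD compression step; a generic sketch (not the paper's $\mathcal{O}(n \log n)$ algorithm):
```python
# Generic truncated-SVD compression: keep singular values above a cutoff.
import numpy as np

def truncate(M, rtol=1e-8):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = max(1, int(np.sum(s > rtol * s[0])))  # retained rank
    return U[:, :k] * s[:k], Vt[:k, :]        # M ~= A @ B with rank k

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 200))  # rank 8
A, B = truncate(M)
print(A.shape[1], np.linalg.norm(M - A @ B))  # -> 8, ~1e-12
```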
arXiv Detail & Related papers (2023-04-11T15:40:33Z)
- Generative Adversarial Learning of Sinkhorn Algorithm Initializations [0.0]
We show that training a neural network to learn initializations for the algorithm via the entropic OT dual problem can significantly speed up convergence.
We show that our network can even be used as a standalone OT solver to approximate regularized transport distances to a few percent error.
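The initializations in question warm-start the standard Sinkhorn fixed-point iteration; a minimal sketch of that iteration is below (the cost matrix and parameters are illustrative assumptions, not the paper's setup):
```python
# Standard Sinkhorn iterations for entropic optimal transport (generic sketch).
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=500):
    K = np.exp(-C / eps)             # Gibbs kernel
    u = np.ones_like(a)              # a learned initialization would go here
    for _ in range(n_iter):
        v = b / (K.T @ u)            # alternate scaling updates
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # transport plan
    return P, np.sum(P * C)          # plan and transport cost

n = 50
a = np.full(n, 1.0 / n)
b = np.full(n, 1.0 / n)
x = np.linspace(0.0, 1.0, n)
C = (x[:, None] - x[None, :]) ** 2   # squared-distance cost (assumption)
P, cost = sinkhorn(a, b, C)
print(cost)
```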
arXiv Detail & Related papers (2022-11-30T21:56:09Z)
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
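A standard way to obtain an $\ell_\infty$-norm box over-approximation through one affine-plus-ReLU layer is interval arithmetic; the sketch below illustrates that generic idea, not the paper's embedded-network construction:
```python
# Box (interval) bound propagation through one affine + ReLU layer (generic).
import numpy as np

def affine_relu_box(W, b, lo, hi):
    c, r = (lo + hi) / 2.0, (hi - lo) / 2.0  # center and radius of the box
    c_out = W @ c + b
    r_out = np.abs(W) @ r                    # worst case over the input box
    return np.maximum(c_out - r_out, 0.0), np.maximum(c_out + r_out, 0.0)

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 3))
b = rng.standard_normal(4)
x = rng.standard_normal(3)
lo, hi = affine_relu_box(W, b, x - 0.1, x + 0.1)  # eps = 0.1 input box
print(lo, hi)  # every reachable ReLU output lies in [lo, hi]
```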
arXiv Detail & Related papers (2022-08-08T03:13:24Z)
- Unified Field Theory for Deep and Recurrent Neural Networks [56.735884560668985]
We present a unified and systematic derivation of the mean-field theory for both recurrent and deep networks.
We find that convergence towards the mean-field theory is typically slower for recurrent networks than for deep networks.
Our method exposes that Gaussian processes are but the lowest order of a systematic expansion in $1/n$.
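Schematically, such an expansion has the form below (the notation is an illustrative assumption, not the paper's):
```latex
% Schematic 1/n expansion: the Gaussian-process result is only the leading term.
\langle O \rangle \;=\; \langle O \rangle_{\mathrm{GP}}
  \;+\; \frac{1}{n}\,\langle O \rangle_{1}
  \;+\; \mathcal{O}\!\left(\frac{1}{n^{2}}\right)
```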
arXiv Detail & Related papers (2021-12-10T15:06:11Z)
- Unsupervised Domain-adaptive Hash for Networks [81.49184987430333]
Domain-adaptive hash learning has enjoyed considerable success in the computer vision community.
We develop an unsupervised domain-adaptive hash learning method for networks, dubbed UDAH.
arXiv Detail & Related papers (2021-08-20T12:09:38Z)
- Simulation of three-dimensional quantum systems with projected entangled-pair states [0.0]
We develop and benchmark two contraction approaches for infinite projected entangled-pair states (iPEPS) in 3D.
The first approach is based on a contraction of a finite cluster of tensors including an effective environment to approximate the full 3D network.
The second approach performs a full contraction of the network by first iteratively contracting layers of the network with a boundary iPEPS, followed by a contraction of the resulting quasi-2D network.
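The sketch below is a toy, exactly contractible analogue of the layer-by-layer approach: a small 2D classical Ising network contracted row by row with a boundary vector (real iPEPS boundary schemes must truncate the boundary object rather than store it exactly):
```python
# Layer-by-layer contraction of a small 2D classical Ising network (toy).
import numpy as np
from itertools import product

L, beta = 4, 0.4                      # open-boundary L x L lattice
states = list(product([-1, 1], repeat=L))

def row_weight(s):                    # horizontal bonds within one row
    return np.exp(beta * sum(s[i] * s[i + 1] for i in range(L - 1)))

def coupling(s, t):                   # vertical bonds between adjacent rows
    return np.exp(beta * sum(si * ti for si, ti in zip(s, t)))

# Transfer matrix T[s, t]: weight of appending row t below row s.
T = np.array([[coupling(s, t) * row_weight(t) for t in states] for s in states])

boundary = np.array([row_weight(s) for s in states])  # first row
for _ in range(L - 1):                # absorb one layer (row) at a time
    boundary = boundary @ T
Z = boundary.sum()                    # partition function of the 2D network
print(np.log(Z))
```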
arXiv Detail & Related papers (2021-02-12T19:00:03Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
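For flavor, here is the classical spectral-learning recovery of a weighted automaton from its Hankel matrix, the discrete precursor of such algorithms (a generic sketch; the paper's 2-RNN setting over continuous inputs differs):
```python
# Spectral learning of a WFA from its Hankel matrix (classical toy recovery).
import numpy as np
from itertools import product

rng = np.random.default_rng(4)
d, alphabet = 2, "ab"
alpha = rng.standard_normal(d)                  # initial weights
omega = rng.standard_normal(d)                  # final weights
A = {c: rng.standard_normal((d, d)) * 0.5 for c in alphabet}

def f(word, a0=alpha, As=A, om=omega):          # WFA function value
    v = a0.copy()
    for c in word:
        v = v @ As[c]
    return v @ om

# Hankel blocks over short prefixes/suffixes (assumed complete for rank d).
basis = [""] + ["".join(w) for n in (1, 2) for w in product(alphabet, repeat=n)]
H = np.array([[f(p + s) for s in basis] for p in basis])
Hs = {c: np.array([[f(p + c + s) for s in basis] for p in basis])
      for c in alphabet}

U, s, Vt = np.linalg.svd(H)
U, s, Vt = U[:, :d], s[:d], Vt[:d, :]           # rank-d factorization of H
A2 = {c: (U / s).T @ Hs[c] @ Vt.T for c in alphabet}  # recovered transitions
alpha2 = (U * s)[basis.index("")]               # recovered initial vector
omega2 = Vt[:, basis.index("")]                 # recovered final vector

w = "abba"
print(f(w), f(w, alpha2, A2, omega2))           # equal up to float error
```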
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
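A minimal iterative mask-discovery loop looks like the sketch below (a generic magnitude-pruning schedule, assumed for illustration; ESPN's exact procedure differs):
```python
# Iterative magnitude-pruning mask discovery (generic sketch).
import numpy as np

def iterative_mask(weights, target_sparsity=0.95, rounds=5):
    mask = np.ones_like(weights)
    for r in range(1, rounds + 1):
        keep = 1.0 - target_sparsity * r / rounds        # ramp up sparsity
        k = max(1, int(keep * weights.size))
        thresh = np.sort(np.abs(weights * mask).ravel())[-k]
        mask = (np.abs(weights * mask) >= thresh).astype(weights.dtype)
        # ... in practice the network is retrained here between rounds
    return mask

rng = np.random.default_rng(3)
W = rng.standard_normal((64, 64))
m = iterative_mask(W)
print(1.0 - m.mean())   # ~0.95 sparsity
```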
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
- Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
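The bounding principle behind such methods is that any Lagrange multiplier yields a valid bound; the toy below shows it on a small constrained problem (an illustrative assumption, not the paper's decomposition of network layers):
```python
# Lagrangian-relaxation lower bound on a toy constrained problem (generic).
import numpy as np
from itertools import product

rng = np.random.default_rng(5)
n = 12
c = rng.standard_normal(n)            # objective:  min c.x over x in {0,1}^n
a = rng.random(n)                     # constraint: a.x >= r
r = 0.5 * a.sum()

def dual_bound(lam):                  # closed-form min_x of the Lagrangian
    return lam * r + np.minimum(0.0, c - lam * a).sum()

lams = np.linspace(0.0, 5.0, 200)
bound = max(dual_bound(l) for l in lams)            # best bound found

exact = min(c @ np.array(x) for x in product([0, 1], repeat=n)
            if a @ np.array(x) >= r)                # brute-force optimum
print(bound, exact)                                 # bound <= exact
```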
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.