Beyond Belief Propagation: Cluster-Corrected Tensor Network Contraction with Exponential Convergence
- URL: http://arxiv.org/abs/2510.02290v2
- Date: Sun, 26 Oct 2025 14:50:37 GMT
- Title: Beyond Belief Propagation: Cluster-Corrected Tensor Network Contraction with Exponential Convergence
- Authors: Siddhant Midha, Yifan F. Zhang
- Abstract summary: We develop a rigorous theoretical framework for BP in tensor networks, leveraging insights from statistical mechanics. We prove that the cluster expansion converges exponentially fast if an object called the \emph{loop contribution} decays sufficiently fast with the loop size. Our work opens the door to a systematic theory of BP for tensor networks and its applications in decoding classical and quantum error-correcting codes and simulating quantum systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tensor network contraction on arbitrary graphs is a fundamental computational challenge with applications ranging from quantum simulation to error correction. While belief propagation (BP) provides a powerful approximation algorithm for this task, its accuracy limitations are poorly understood and systematic improvements remain elusive. Here, we develop a rigorous theoretical framework for BP in tensor networks, leveraging insights from statistical mechanics to devise a \emph{cluster expansion} that systematically improves the BP approximation. We prove that the cluster expansion converges exponentially fast if an object called the \emph{loop contribution} decays sufficiently fast with the loop size, giving a rigorous error bound on BP. We also provide a simple and efficient algorithm to compute the cluster expansion to arbitrary order. We demonstrate the efficacy of our method on the two-dimensional Ising model, where we find that our method significantly improves upon BP and existing corrective algorithms such as loop series expansion. Our work opens the door to a systematic theory of BP for tensor networks and its applications in decoding classical and quantum error-correcting codes and simulating quantum systems.
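The abstract builds on the standard BP estimate of a partition function, which is exact on trees and approximate on loopy graphs; the paper's cluster expansion systematically corrects that baseline. As a point of reference, here is a minimal sketch of the BP (Bethe) estimate for a scalar Ising model, not the paper's tensor-network algorithm; the function names, edge-list encoding, and parameters are illustrative choices:

```python
import itertools
import math

import numpy as np

def bp_log_Z(edges, n, beta, iters=500, tol=1e-12):
    """Bethe/BP estimate of log Z for an Ising model
    p(s) ~ exp(beta * sum_{(i,j) in edges} s_i * s_j), with s_i in {+1, -1}."""
    # psi[a, b] = exp(beta * s_a * s_b), spin order (+1, -1)
    psi = np.exp(beta * np.outer([1.0, -1.0], [1.0, -1.0]))
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    # m[(i, j)]: normalized message from spin i to spin j
    m = {}
    for i, j in edges:
        m[(i, j)] = np.full(2, 0.5)
        m[(j, i)] = np.full(2, 0.5)
    for _ in range(iters):
        new = {}
        for (i, j) in m:
            prod = np.ones(2)
            for k in nbrs[i]:
                if k != j:
                    prod = prod * m[(k, i)]
            msg = psi.T @ prod  # sum out s_i
            new[(i, j)] = msg / msg.sum()
        delta = max(np.abs(new[e] - m[e]).max() for e in m)
        m = new
        if delta < tol:
            break
    # Assemble the Bethe free energy from site and edge beliefs
    log_Z = 0.0
    for i in range(n):
        b = np.ones(2)
        for k in nbrs[i]:
            b = b * m[(k, i)]
        b = b / b.sum()
        log_Z += (len(nbrs[i]) - 1) * float(b @ np.log(b))
    for i, j in edges:
        prod_i, prod_j = np.ones(2), np.ones(2)
        for k in nbrs[i]:
            if k != j:
                prod_i = prod_i * m[(k, i)]
        for k in nbrs[j]:
            if k != i:
                prod_j = prod_j * m[(k, j)]
        b2 = psi * np.outer(prod_i, prod_j)
        b2 = b2 / b2.sum()
        log_Z -= float(np.sum(b2 * (np.log(b2) - np.log(psi))))
    return log_Z

def exact_log_Z(edges, n, beta):
    """Brute-force log Z, for checking small instances."""
    Z = sum(
        math.exp(beta * sum(s[i] * s[j] for i, j in edges))
        for s in itertools.product([1, -1], repeat=n)
    )
    return math.log(Z)
```

On a tree (e.g., a three-spin chain) this BP estimate reproduces the exact log Z; on a loopy lattice such as the 2D grid it deviates, and that residual error is precisely what the paper's loop contributions and cluster expansion quantify and correct.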
Related papers
- Continual Quantum Architecture Search with Tensor-Train Encoding: Theory and Applications to Signal Processing [68.35481158940401]
CL-QAS is a continual quantum architecture search framework. It mitigates the challenges of costly amplitude encoding and of forgetting in variational quantum circuits. It achieves controllable robustness and expressivity, sample-efficient generalization, and smooth convergence without barren plateaus.
arXiv Detail & Related papers (2026-01-10T02:36:03Z) - Tensor Network Formulation of Dequantized Algorithms for Ground State Energy Estimation [2.9436347471485558]
Dequantization algorithms play a central role in providing a clear theoretical framework for separating the complexity of quantum and classical algorithms. Existing dequantized algorithms typically rely on sampling procedures, leading to prohibitively large computational overheads. We propose a tensor network-based dequantization framework for GSEE that eliminates the sampling process while preserving the complexity of prior dequantized algorithms.
arXiv Detail & Related papers (2025-12-15T17:07:04Z) - Partitioned Expansions for Approximate Tensor Network Contractions [0.0]
We propose a method for approximating the contraction of a tensor network by partitioning the network into a sum of cheaper networks. The flexibility of our approach is demonstrated through applications to a variety of example networks. Benchmark numerical results for networks composed of Ising, AKLT, and random tensors typically show an improvement in accuracy over BP by several orders of magnitude.
arXiv Detail & Related papers (2025-12-11T18:39:44Z) - Quartic quantum speedups for community detection [84.14713515477784]
We develop a quantum algorithm for hypergraph community detection that achieves a quartic quantum speedup. Our algorithm is based on the Kikuchi method, which we extend beyond previously considered problems such as PCA and $p$XORSAT.
arXiv Detail & Related papers (2025-10-09T17:35:17Z) - Simulating quantum dynamics in two-dimensional lattices with tensor network influence functional belief propagation [1.4132765964347058]
We extend the applicability of the TN-IF method to two-dimensional lattices by demonstrating its construction on tree lattices and proposing a belief propagation (BP) algorithm for the TN-IF. We demonstrate the power of the cluster expansion of IF-BP on the quench dynamics of the 2D transverse-field Ising model, obtaining numerical results that improve on the state of the art.
arXiv Detail & Related papers (2025-04-10T00:04:32Z) - Belief propagation for general graphical models with loops [45.29832252085144]
We develop a unification framework that takes as input an arbitrary graphical model with loops. We show that our framework can achieve an accuracy improvement of more than ten orders of magnitude over tensor network BP.
arXiv Detail & Related papers (2024-11-07T18:32:42Z) - Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of training epochs grows.
We show that the threshold on the number of training samples increases with the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z) - Efficient Bound of Lipschitz Constant for Convolutional Layers by Gram Iteration [122.51142131506639]
We introduce a precise, fast, and differentiable upper bound for the spectral norm of convolutional layers using circulant matrix theory.
We show through a comprehensive set of experiments that our approach outperforms other state-of-the-art methods in terms of precision, computational cost, and scalability.
It proves highly effective for the Lipschitz regularization of convolutional neural networks, with competitive results against concurrent approaches.
arXiv Detail & Related papers (2023-05-25T15:32:21Z) - Deep learning via message passing algorithms based on belief propagation [2.931240348160871]
We present a family of BP-based message-passing algorithms with a reinforcement field that biases towards locally entropic distributions.
These algorithms are capable of training multi-layer neural networks with discrete weights and activations with performance comparable to SGD-inspired solutions.
arXiv Detail & Related papers (2021-10-27T16:52:26Z) - A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in the depth, in time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z) - Belief Propagation Neural Networks [103.97004780313105]
We introduce belief propagation neural networks (BPNNs). BPNNs operate on factor graphs and generalize belief propagation (BP).
We show that BPNNs converge 1.7x faster on Ising models while providing tighter bounds.
On challenging model counting problems, BPNNs compute estimates hundreds of times faster than state-of-the-art handcrafted methods.
arXiv Detail & Related papers (2020-07-01T07:39:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.