Loop Series Expansions for Tensor Networks
- URL: http://arxiv.org/abs/2409.03108v1
- Date: Wed, 4 Sep 2024 22:22:35 GMT
- Title: Loop Series Expansions for Tensor Networks
- Authors: Glen Evenbly, Nicola Pancotti, Ashley Milsted, Johnnie Gray, Garnet Kin-Lic Chan
- Abstract summary: We describe how a loop series expansion can be applied to improve the accuracy of a BP approximation to a tensor network contraction.
We benchmark this proposal for the contraction of iPEPS, either representing the ground state of an AKLT model or with randomly defined tensors.
- Score: 0.2796197251957244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Belief propagation (BP) can be a useful tool to approximately contract a tensor network, provided that the contributions from any closed loops in the network are sufficiently weak. In this manuscript we describe how a loop series expansion can be applied to systematically improve the accuracy of a BP approximation to a tensor network contraction, in principle converging arbitrarily close to the exact result. More generally, our result provides a framework for expanding a tensor network as a sum of component networks in a hierarchy of increasing complexity. We benchmark this proposal for the contraction of iPEPS, either representing the ground state of an AKLT model or with randomly defined tensors, where it is shown to improve in accuracy over standard BP by several orders of magnitude whilst incurring only a minor increase in computational cost. These results indicate that the proposed series expansions could be a useful tool to accurately evaluate tensor networks in cases that otherwise exceed the limits of established contraction routines.
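As a concrete illustration of the BP contraction the abstract refers to, below is a minimal, hypothetical sketch (not the authors' implementation): BP fixed-point iteration on a small closed tensor network, a ring of matrices, compared against the exact contraction. The bond dimension, ring length, and tensor values are arbitrary choices for the example; the ring's single loop is exactly the kind of contribution the paper's loop series expansion is meant to correct.

```python
import numpy as np

# Minimal, hypothetical sketch (not the authors' code): belief propagation (BP)
# on a tiny closed tensor network -- a ring of n matrices -- whose exact
# contraction is the trace of their product.  BP is exact on trees; the single
# loop of the ring makes it approximate, which is the error the loop series
# expansion described in the abstract is designed to correct.

rng = np.random.default_rng(0)
D, n = 3, 6                                    # assumed bond dimension and ring length
T = [rng.random((D, D)) for _ in range(n)]     # tensor i: rows on edge i-1, columns on edge i

Z_exact = np.trace(np.linalg.multi_dot(T))     # exact contraction of the ring

# One BP message per directed edge: f[i] flows T_i -> T_{i+1}, b[i] flows T_{i+1} -> T_i.
f = [np.ones(D) / D for _ in range(n)]
b = [np.ones(D) / D for _ in range(n)]

for _ in range(500):                           # iterate the BP fixed-point equations
    for i in range(n):
        f[i] = f[i - 1] @ T[i]                 # absorb the other message arriving at T_i
        f[i] /= np.linalg.norm(f[i])
        b[i] = T[(i + 1) % n] @ b[(i + 1) % n]
        b[i] /= np.linalg.norm(b[i])

# BP estimate of the contraction: product of local vertex terms over edge overlaps.
Z_vertex = np.prod([f[i - 1] @ T[i] @ b[i] for i in range(n)])
Z_edge = np.prod([f[i] @ b[i] for i in range(n)])
Z_bp = Z_vertex / Z_edge

print(f"exact = {Z_exact:.4f}  BP = {Z_bp:.4f}  "
      f"relative error = {abs(Z_bp - Z_exact) / Z_exact:.2e}")
```

For this nonnegative ring the BP fixed point reproduces the dominant-eigenvalue contribution to the trace, so the printed relative error quantifies exactly the loop contribution that a series expansion would restore.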
Related papers
- Partitioned Expansions for Approximate Tensor Network Contractions [0.0]
We propose a method for approximating the contraction of a tensor network by partitioning the network into a sum of cheaper networks.
The flexibility of our approach is demonstrated through applications to a variety of example networks.
Benchmark numerical results for networks composed of Ising, AKLT, and random tensors typically show an improvement in accuracy over BP by several orders of magnitude.
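The following toy sketch (not the paper's actual partitioning scheme) illustrates the generic idea of rewriting one contraction as a sum of component networks: inserting a resolution of the identity on a single bond of a closed ring turns it into a sum of open-chain contractions, one per bond value, each free of loops. All dimensions and tensor values are made up for the example.

```python
import numpy as np

# Toy sketch (not the paper's actual partitioning scheme): rewrite one closed
# tensor network contraction as a sum of simpler component networks by cutting
# a single bond.  Inserting the identity  I = sum_k |k><k|  on that bond turns
# a ring of matrices into D open chains, each contracted independently.

rng = np.random.default_rng(1)
D, n = 4, 5                                   # assumed bond dimension and ring length
T = [rng.random((D, D)) for _ in range(n)]

Z_full = np.trace(np.linalg.multi_dot(T))     # contract the whole ring at once

Z_sum = 0.0
for k in range(D):                            # one cheaper (loop-free) network per bond value
    v = T[0][k, :]                            # fix the cut bond to value k on one side ...
    for M in T[1:]:
        v = v @ M                             # ... contract the resulting open chain ...
    Z_sum += v[k]                             # ... and close it with value k on the other side

print(np.isclose(Z_full, Z_sum))              # the components sum back to the exact value
```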
arXiv Detail & Related papers (2025-12-11T18:39:44Z)
- Beyond Belief Propagation: Cluster-Corrected Tensor Network Contraction with Exponential Convergence [0.0]
We develop a rigorous theoretical framework for BP in tensor networks, leveraging insights from statistical mechanics.
We prove that the cluster expansion converges exponentially fast if an object called the loop contribution decays sufficiently fast with the loop size.
Our work opens the door to a systematic theory of BP for tensor networks and its applications in decoding classical and quantum error-correcting codes and simulating quantum systems.
arXiv Detail & Related papers (2025-10-02T17:58:30Z)
- Quantization vs Pruning: Insights from the Strong Lottery Ticket Hypothesis [5.494111035517599]
Quantization is an essential technique for making neural networks more efficient, yet our theoretical understanding of it remains limited.
Previous works demonstrated that extremely low-precision networks, such as binary networks, can be constructed by pruning large, randomly initialized networks.
We build on foundational results by Borgs et al. on the Number Partitioning Problem to derive new theoretical results for the Random Subset Sum Problem in a quantized setting.
arXiv Detail & Related papers (2025-08-14T18:51:34Z)
- Belief propagation for general graphical models with loops [45.29832252085144]
We develop a unified view to make the generalized BP proposal by Kirkley et al. explicit on arbitrary graphical models.
We derive BP schemes and provide inference equations for BP on loopy tensor networks and, more generally, loopy graphical models.
We show that the tensor network message passing approach relies essentially on the same approximation as the method by Kirkley.
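To make the "BP on loopy graphical models" setting concrete, here is a hypothetical minimal sketch (not the construction in this paper nor Kirkley et al.'s scheme): vanilla loopy belief propagation for single-site marginals on a tiny pairwise Ising model with one loop (a triangle), compared against exact enumeration. The coupling and field values are arbitrary choices for the example.

```python
import numpy as np
from itertools import product

# Hypothetical sketch: standard loopy belief propagation for single-site
# marginals on an Ising triangle, checked against brute-force enumeration.
# The triangle's single loop is what makes BP inexact here.

J, h = 0.5, 0.2                                     # assumed coupling and field
edges = [(0, 1), (1, 2), (0, 2)]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
spins = np.array([-1.0, 1.0])

pair = np.exp(J * np.outer(spins, spins))           # pairwise factor psi(s_i, s_j)
field = np.exp(h * spins)                           # local factor phi(s_i)

# message m[(i, j)] is a length-2 vector over the state of the receiving node j
m = {(i, j): np.ones(2) / 2 for i in nbrs for j in nbrs[i]}

for _ in range(100):                                # iterate the BP fixed-point equations
    for (i, j) in list(m):
        incoming = np.prod([m[(k, i)] for k in nbrs[i] if k != j], axis=0)
        msg = pair.T @ (field * incoming)           # sum out the sender's spin s_i
        m[(i, j)] = msg / msg.sum()

bp_marg = {}
for i in nbrs:                                      # beliefs = local factor x all incoming messages
    belief = field * np.prod([m[(k, i)] for k in nbrs[i]], axis=0)
    bp_marg[i] = belief / belief.sum()

# exact marginals by brute force over the 2^3 spin configurations
weights = {s: np.prod([np.exp(h * si) for si in s]) *
              np.prod([np.exp(J * s[a] * s[b]) for (a, b) in edges])
           for s in product(spins, repeat=3)}
Z = sum(weights.values())
exact_up = [sum(w for s, w in weights.items() if s[i] > 0) / Z for i in range(3)]

print("BP    P(s_i=+1):", [round(float(bp_marg[i][1]), 4) for i in range(3)])
print("exact P(s_i=+1):", [round(p, 4) for p in exact_up])
```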
arXiv Detail & Related papers (2024-11-07T18:32:42Z)
- Covering Numbers for Deep ReLU Networks with Applications to Function Approximation and Nonparametric Regression [4.297070083645049]
We develop tight (up to a multiplicative constant) lower and upper bounds on the covering numbers of fully-connected networks.
Thanks to the tightness of the bounds, a fundamental understanding of the impact of sparsity, quantization, bounded vs. unbounded weights, and network output truncation can be developed.
arXiv Detail & Related papers (2024-10-08T21:23:14Z)
- Wide Deep Neural Networks with Gaussian Weights are Very Close to Gaussian Processes [1.0878040851638]
We show that the distance between the network output and the corresponding Gaussian approximation scales inversely with the width of the network, exhibiting faster convergence than naively suggested by the central limit theorem.
We also apply our bounds to obtain theoretical approximations for the exact posterior distribution of the network, when the likelihood is a bounded Lipschitz function of the network output evaluated on a (finite) training set.
arXiv Detail & Related papers (2023-12-18T22:29:40Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence of the NTKs between a fully-connected neural network and its randomly pruned version.
arXiv Detail & Related papers (2022-03-27T15:22:19Z)
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks with widths quadratic in the sample size and linear in their depth, in a time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of an arbitrary shape, which is often seen in Neural Networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z)
- Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks.
Explicitly enumerating higher-order interactions quickly becomes intractable; to alleviate this, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
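As a hedged illustration of representing model parameters implicitly as a tensor (a generic canonical polyadic sketch under assumed dimensions and rank, not this paper's specific model), the following evaluates a trilinear feature-interaction score through a rank-R CP factorization without ever materializing the full parameter tensor.

```python
import numpy as np

# Hypothetical sketch: an order-3 weight tensor held implicitly in rank-R
# canonical polyadic (CP) form, W[i, j, k] = sum_r A[i, r] * B[j, r] * C[k, r],
# so the trilinear score <W, x (x) y (x) z> never requires the full tensor.

rng = np.random.default_rng(2)
d, R = 8, 4                                      # assumed feature dimension and CP rank
A, B, C = (rng.standard_normal((d, R)) for _ in range(3))

def score(x, y, z):
    """Trilinear interaction score evaluated factor-by-factor: O(dR), not O(d^3)."""
    return np.sum((x @ A) * (y @ B) * (z @ C))

x, y, z = (rng.standard_normal(d) for _ in range(3))
W_full = np.einsum('ir,jr,kr->ijk', A, B, C)     # materialized only to verify the shortcut
assert np.isclose(score(x, y, z), np.einsum('ijk,i,j,k->', W_full, x, y, z))
print("CP-parameterised score:", score(x, y, z))
```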
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
- Understanding Generalization in Deep Learning via Tensor Methods [53.808840694241]
We advance the understanding of the relations between the network's architecture and its generalizability from the compression perspective.
We propose a series of intuitive, data-dependent and easily-measurable properties that tightly characterize the compressibility and generalizability of neural networks.
arXiv Detail & Related papers (2020-01-14T22:26:57Z)