Partitioned Expansions for Approximate Tensor Network Contractions
- URL: http://arxiv.org/abs/2512.10910v1
- Date: Thu, 11 Dec 2025 18:39:44 GMT
- Title: Partitioned Expansions for Approximate Tensor Network Contractions
- Authors: Glen Evenbly, Johnnie Gray, Garnet Kin-Lic Chan
- Abstract summary: We propose a method for approximating the contraction of a tensor network by partitioning the network into a sum of cheaper networks. The flexibility of our approach is demonstrated through applications to a variety of example networks. Benchmark numerical results for networks composed of Ising, AKLT, and random tensors typically show an improvement in accuracy over BP by several orders of magnitude.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method for approximating the contraction of a tensor network by partitioning the network into a sum of computationally cheaper networks. This method, which we call a partitioned network expansion (PNE), builds upon recent work that systematically improves belief propagation (BP) approximations using loop corrections. However, in contrast to previous approaches, our expansion does not require a known BP fixed point to be implemented and can still yield accurate results even in cases where BP fails entirely. The flexibility of our approach is demonstrated through applications to a variety of example networks, including finite 2D and 3D networks, infinite networks, networks with open indices, and networks with degenerate BP fixed points. Benchmark numerical results for networks composed of Ising, AKLT, and random tensors typically show an improvement in accuracy over BP by several orders of magnitude (when BP solutions are obtainable) and also demonstrate improved performance over traditional network approximations based on singular value decomposition (SVD) for certain tasks.
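To make the high-level idea concrete, here is a minimal numpy sketch. It is not the paper's PNE algorithm, and the tensors, dimensions, and truncation rule are hypothetical choices for illustration only: it shows how a small ring-network contraction can be rewritten as a sum of cheaper contractions by resolving one bond index into rank-1 projectors, so that keeping every term recovers the exact value while truncating the sum gives an approximation.

```python
# Minimal sketch only: illustrates the generic idea of expanding a tensor
# network contraction into a sum of cheaper contractions.  This is NOT the
# partitioned network expansion (PNE) from the paper; tensors, dimensions,
# and the truncation rule below are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
chi = 6  # toy bond dimension

# A ring of four matrices contracted to a scalar:
#   Z = sum_{i,j,k,l} A[i,j] B[j,k] C[k,l] D[l,i]
A, B, C, D = (rng.standard_normal((chi, chi)) for _ in range(4))
Z_exact = np.einsum('ij,jk,kl,li->', A, B, C, D)

# Resolve the identity on the A-B bond, 1 = sum_p |p><p|.  Each term fixes
# that bond to a single value p, splitting the network into a sum of chi
# networks whose A-B bond has effective dimension 1 (hence cheaper).
terms = [np.einsum('i,k,kl,li->', A[:, p], B[p, :], C, D) for p in range(chi)]

Z_full = sum(terms)                                       # exact when all terms are kept
Z_trunc = sum(sorted(terms, key=abs, reverse=True)[:2])   # crude 2-term truncation

print(f"exact contraction : {Z_exact:+.6f}")
print(f"sum of all terms  : {Z_full:+.6f}")
print(f"2-term truncation : {Z_trunc:+.6f}")
```

The actual PNE of the paper partitions the network itself, building on BP and loop corrections, rather than a single bond as above; the sketch only shows why a sum of cheaper networks can reproduce, and then approximate, the original contraction.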
Related papers
- Beyond Belief Propagation: Cluster-Corrected Tensor Network Contraction with Exponential Convergence [0.0]
We develop a rigorous theoretical framework for BP in tensor networks, leveraging insights from statistical mechanics. We prove that the cluster expansion converges exponentially fast if an object called the loop contribution decays sufficiently fast with the loop size. Our work opens the door to a systematic theory of BP for tensor networks and its applications in decoding classical and quantum error-correcting codes and simulating quantum systems.
arXiv Detail & Related papers (2025-10-02T17:58:30Z)
- Belief propagation for general graphical models with loops [45.29832252085144]
We develop a unification framework for belief propagation on an arbitrary graphical model with loops. We show that our framework can achieve an accuracy improvement of more than ten orders of magnitude over tensor network BP.
arXiv Detail & Related papers (2024-11-07T18:32:42Z)
- Loop Series Expansions for Tensor Networks [0.2796197251957244]
We describe how a loop series expansion can be applied to improve the accuracy of a BP approximation to a tensor network contraction. We benchmark this proposal for the contraction of iPEPS, either representing the ground state of an AKLT model or with randomly defined tensors.
arXiv Detail & Related papers (2024-09-04T22:22:35Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- Binarizing Sparse Convolutional Networks for Efficient Point Cloud Analysis [93.55896765176414]
We propose binary sparse convolutional networks called BSC-Net for efficient point cloud analysis.
We employ differentiable search strategies to discover the optimal positions for active site matching in the shifted sparse convolution.
Our BSC-Net achieves a significant improvement over our strong baseline and outperforms state-of-the-art network binarization methods.
arXiv Detail & Related papers (2023-03-27T13:47:06Z)
- Belief propagation for supply networks: Efficient clustering of their factor graphs [0.0]
We consider belief propagation (BP) as an efficient tool for state estimation and optimization problems in supply networks.
We propose a systematic way to cluster loops of factor graphs such that the resulting factor graphs have no additional loops as compared to the original network.
arXiv Detail & Related papers (2022-03-01T14:01:35Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks with quadratic widths in the sample size and linear in their depth at a time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)