Belief propagation for general graphical models with loops
- URL: http://arxiv.org/abs/2411.04957v1
- Date: Thu, 07 Nov 2024 18:32:42 GMT
- Title: Belief propagation for general graphical models with loops
- Authors: Pedro Hack, Christian B. Mendl, Alexandru Paler
- Abstract summary: We develop a unified view to make the generalized BP proposal by Kirkley et al. explicit on arbitrary graphical models.
We derive BP schemes and provide inference equations for BP on loopy tensor networks and, more generally, loopy graphical models.
We show that the tensor network message passing approach relies essentially on the same approximation as the method by Kirkley et al.
- Score: 45.29832252085144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Belief Propagation (BP) decoders for quantum error-correcting codes are not always precise. There is a growing interest in the application of tensor networks to quantum error correction in general and, in particular, in degenerate quantum maximum likelihood decoding and the tensor network decoder. We develop a unified view to make the generalized BP proposal by Kirkley et al. explicit on arbitrary graphical models. We derive BP schemes and provide inference equations for BP on loopy tensor networks and, more generally, loopy graphical models. In doing so, we introduce a tree-equivalent approach that allows us to relate the tensor network BlockBP to a generalized BP for loopy networks. Moreover, we show that the tensor network message passing approach relies essentially on the same approximation as the method by Kirkley et al. This allows us to make tensor network message passing available for degenerate quantum maximum likelihood decoding. Our method and results are key to obtaining guidelines on how the trade-off between complexity and decoding accuracy plays out between BP and tensor network decoders. Finally, we discuss how the tree-equivalent method and the method by Kirkley et al. can justify why message scheduling improves the performance of BP.
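For readers who want the baseline made concrete: below is a minimal NumPy sketch of standard sum-product BP on a tree-shaped factor graph with hypothetical pairwise potentials, where BP is exact. It is illustrative only; the paper's generalized schemes target precisely the loopy graphical models and tensor networks that this textbook routine does not handle.

```python
import numpy as np

# Chain factor graph x0 - f01 - x1 - f12 - x2 with binary variables;
# the potentials (hypothetical) favour agreement between neighbours.
f01 = np.array([[2.0, 1.0], [1.0, 2.0]])
f12 = np.array([[2.0, 1.0], [1.0, 2.0]])
factors = {("x0", "x1"): f01, ("x1", "x2"): f12}
variables = ["x0", "x1", "x2"]

# Messages keyed by (sender, receiver), initialised uniformly.
msg = {}
for (a, b) in factors:
    for v in (a, b):
        msg[(v, (a, b))] = np.ones(2)
        msg[((a, b), v)] = np.ones(2)

for _ in range(5):  # on a tree a few sweeps suffice for convergence
    # Variable -> factor: product of messages from the *other* factors.
    for (a, b) in factors:
        for v in (a, b):
            m = np.ones(2)
            for f in factors:
                if v in f and f != (a, b):
                    m = m * msg[(f, v)]
            msg[(v, (a, b))] = m / m.sum()
    # Factor -> variable: marginalise the potential against the
    # message coming in from the opposite variable.
    for (a, b), tab in factors.items():
        msg[((a, b), a)] = tab @ msg[(b, (a, b))]
        msg[((a, b), b)] = tab.T @ msg[(a, (a, b))]

# Beliefs: normalised products of incoming factor messages (exact on trees).
for v in variables:
    belief = np.ones(2)
    for f in factors:
        if v in f:
            belief = belief * msg[(f, v)]
    print(v, belief / belief.sum())
```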
Related papers
- Loop Series Expansions for Tensor Networks [0.2796197251957244]
We describe how a loop series expansion can be applied to improve the accuracy of a BP approximation to a tensor network contraction.
We benchmark this proposal on the contraction of iPEPS that either represent the ground state of an AKLT model or are built from randomly defined tensors.
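For orientation, the Chertkov-Chernyak form of a loop series writes the exact contraction value as the BP estimate times a sum of loop corrections; the standard form is given below as a hedged reference, since the paper's tensor-network variant may differ in its details.

```latex
% Loop-series form: exact value Z versus the BP estimate Z_BP,
% with corrections r_C summed over the generalized loops C.
Z = Z_{\mathrm{BP}} \Bigl( 1 + \sum_{C \in \mathcal{C}} r_C \Bigr)
```

Here each correction term r_C is evaluated from the converged BP messages, so truncating the sum at short loops refines the plain BP answer.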
arXiv Detail & Related papers (2024-09-04T22:22:35Z)
- An almost-linear time decoding algorithm for quantum LDPC codes under circuit-level noise [0.562479170374811]
We introduce the belief propagation plus ordered Tanner forest (BP+OTF) algorithm as an almost-linear time decoder for quantum low-density parity-check codes.
We show that the BP+OTF decoder achieves logical error suppression within an order of magnitude of state-of-the-art inversion-based decoders.
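As a hedged sketch of the "ordered Tanner forest" ingredient: ranking edges by a BP-derived reliability and greedily keeping only those that close no cycle is a Kruskal-style union-find pass, which is consistent with the advertised almost-linear running time. The scoring and names below are illustrative, not the authors' exact procedure.

```python
# Hypothetical illustration: build a cycle-free subgraph (a forest)
# from edges ranked by a reliability score, using union-find.
def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def ordered_forest(num_nodes, scored_edges):
    """scored_edges: (score, u, v) triples; higher score = more reliable."""
    parent = list(range(num_nodes))
    forest = []
    for score, u, v in sorted(scored_edges, reverse=True):
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:  # keeping this edge creates no cycle
            parent[ru] = rv
            forest.append((u, v))
    return forest

# Triangle 0-1-2: the least reliable edge (0, 2) is dropped.
print(ordered_forest(3, [(0.9, 0, 1), (0.8, 1, 2), (0.1, 0, 2)]))
```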
arXiv Detail & Related papers (2024-09-02T19:50:57Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Tensor Network Computations That Capture Strict Variationality, Volume Law Behavior, and the Efficient Representation of Neural Network States [0.6148049086034199]
We introduce a change of perspective on tensor network states that is defined by the computational graph of the contraction of an amplitude.
The resulting class of states, which we refer to as tensor network functions, inherits the conceptual advantages of tensor network states while removing the computational restrictions that arise from the need to converge approximate contractions.
We use tensor network functions to compute strict variational estimates of the energy on loopy graphs, analyze their expressive power for ground states, show that we can capture aspects of volume-law time evolution, and provide a mapping of general feed-forward neural nets onto efficient tensor network functions.
arXiv Detail & Related papers (2024-05-06T19:04:13Z)
- Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
arXiv Detail & Related papers (2023-10-26T19:56:25Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
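For context, the interval-bound-propagation core of such methods pushes a box of inputs through the network layer by layer. A minimal NumPy sketch for a single affine + ReLU layer follows; quantization is only mimicked here by rounding weights to a hypothetical fixed-point grid, an assumption rather than the paper's quantizer.

```python
import numpy as np

def quantize(w, step=1.0 / 16):
    return np.round(w / step) * step  # stand-in for a real quantizer

def ibp_affine_relu(W, b, lower, upper):
    """Sound output bounds for ReLU(W x + b) over the box [lower, upper]."""
    mu, rad = (upper + lower) / 2, (upper - lower) / 2
    mu2 = W @ mu + b        # the centre goes through the affine map
    rad2 = np.abs(W) @ rad  # the radius grows by the absolute weights
    return np.maximum(mu2 - rad2, 0), np.maximum(mu2 + rad2, 0)

W = quantize(np.array([[0.7, -0.3], [0.2, 0.5]]))
l, u = ibp_affine_relu(W, np.zeros(2), np.zeros(2), 0.1 * np.ones(2))
print(l, u)  # certified output bounds for the whole input box
```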
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- A Scalable Graph Neural Network Decoder for Short Block Codes [49.25571364253986]
We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN).
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms the BP and deep-learning-based BP methods in terms of the decoding error rate.
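To make the Tanner-graph message passing concrete, here is an edge-weighted min-sum check-node update of the kind such a decoder can learn weights for; the weights shown are hypothetical, not trained values from the paper.

```python
import numpy as np

def check_node_update(llrs, weights):
    """Outgoing messages of one check node, given incoming variable LLRs."""
    llrs, weights = np.asarray(llrs, float), np.asarray(weights, float)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        others = np.delete(llrs, i)  # exclude the receiving edge
        out[i] = weights[i] * np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

print(check_node_update([2.5, -1.2, 0.8], [1.0, 0.9, 1.1]))
```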
arXiv Detail & Related papers (2022-11-13T17:13:12Z)
- Dynamic Graph Message Passing Networks for Visual Recognition [112.49513303433606]
Modelling long-range dependencies is critical for scene understanding tasks in computer vision.
A fully-connected graph is beneficial for such modelling, but its computational overhead is prohibitive.
We propose a dynamic graph message passing network that significantly reduces the computational complexity.
arXiv Detail & Related papers (2022-09-20T14:41:37Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Belief propagation for supply networks: Efficient clustering of their factor graphs [0.0]
We consider belief propagation (BP) as an efficient tool for state estimation and optimization problems in supply networks.
We propose a systematic way to cluster loops of factor graphs such that the resulting factor graphs have no additional loops as compared to the original network.
arXiv Detail & Related papers (2022-03-01T14:01:35Z)
- Quantum message-passing algorithm for optimal and efficient decoding [2.3020018305241328]
We expand the understanding, formalism, and applicability of the BPQM algorithm.
We provide the first formal description of the BPQM algorithm in full detail and without any ambiguity.
We show some promising numerical results that indicate that BPQM on factor graphs with cycles can significantly outperform the best possible classical decoder.
arXiv Detail & Related papers (2021-09-16T18:01:12Z)
- Tensor-Train Networks for Learning Predictive Modeling of Multidimensional Data [0.0]
A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications.
We show that the weights of a multidimensional regression model can be learned by means of tensor networks, with the aim of obtaining a powerful, compact representation.
An algorithm based on alternating least squares is proposed for approximating the weights in TT-format with reduced computational cost.
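As background on the TT-format itself (not the paper's ALS solver), the sketch below factors a small array into tensor-train cores via sequential truncated SVDs and checks the reconstruction:

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose an ndarray into TT cores of shape (rank_in, dim, rank_out)."""
    dims, cores, rank, mat = tensor.shape, [], 1, tensor
    for k in range(len(dims) - 1):
        U, S, Vt = np.linalg.svd(mat.reshape(rank * dims[k], -1),
                                 full_matrices=False)
        r = min(max_rank, len(S))  # truncate to the target rank
        cores.append(U[:, :r].reshape(rank, dims[k], r))
        mat, rank = np.diag(S[:r]) @ Vt[:r], r
    cores.append(mat.reshape(rank, dims[-1], 1))
    return cores

T = np.random.rand(4, 5, 6)
cores = tt_svd(T, max_rank=10)
rec = cores[0]
for core in cores[1:]:  # contract the train back together
    rec = np.tensordot(rec, core, axes=(rec.ndim - 1, 0))
print(np.linalg.norm(rec.reshape(T.shape) - T))  # ~0 at full rank
```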
arXiv Detail & Related papers (2021-01-22T16:14:38Z)
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in their depth, in time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of sets of tensors of arbitrary shapes, as often encountered in neural networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z)
- Solving frustrated Ising models using tensor networks [0.0]
We develop a framework to study frustrated Ising models in terms of infinite tensor networks.
We show that optimizing the choice of clusters, including the weight on shared bonds, is crucial for the contractibility of the tensor networks.
We illustrate the power of the method by computing the residual entropy of a frustrated Ising spin system on the kagome lattice with next-next-nearest neighbour interactions.
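As a toy point of reference for writing Ising partition functions as tensor networks: on a periodic 1-D chain the network collapses to a transfer matrix, so Z = Tr(T^N); the paper's frustrated 2-D kagome setting instead requires genuinely two-dimensional infinite tensor-network contractions.

```python
import numpy as np

beta, J, N = 1.0, 1.0, 10
# Transfer matrix of the 1-D Ising chain with coupling J.
T = np.array([[np.exp(beta * J), np.exp(-beta * J)],
              [np.exp(-beta * J), np.exp(beta * J)]])
Z = np.trace(np.linalg.matrix_power(T, N))  # partition function
lam = np.linalg.eigvalsh(T)                 # eigenvalues, ascending
print(Z, lam[0] ** N + lam[1] ** N)         # the two evaluations agree
```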
arXiv Detail & Related papers (2020-06-25T12:39:42Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable, and provides parameter-efficient and robust solutions in stereo, optical flow, and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
- Fiedler Regularization: Learning Neural Networks with Graph Sparsity [6.09170287691728]
We introduce a novel regularization approach for deep learning that incorporates and respects the underlying graphical structure of the neural network.
We propose to use the Fiedler value of the neural network's underlying graph as a tool for regularization.
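For concreteness, the Fiedler value is the second-smallest eigenvalue of the graph Laplacian (the algebraic connectivity); below is a minimal NumPy computation on a hypothetical adjacency matrix.

```python
import numpy as np

A = np.array([[0, 1, 1, 0],  # hypothetical 4-node connectivity graph
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
fiedler = np.linalg.eigvalsh(L)[1]  # eigenvalues come back ascending
print(fiedler)  # > 0 exactly when the graph is connected
```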
arXiv Detail & Related papers (2020-03-02T16:19:33Z)