Belief propagation for general graphical models with loops
- URL: http://arxiv.org/abs/2411.04957v1
- Date: Thu, 07 Nov 2024 18:32:42 GMT
- Title: Belief propagation for general graphical models with loops
- Authors: Pedro Hack, Christian B. Mendl, Alexandru Paler
- Abstract summary: We develop a unified view to make the generalized BP proposal by Kirkley et al. explicit on arbitrary graphical models.
We derive BP schemes and provide inference equations for BP on loopy tensor networks and, more generally, loopy graphical models.
We show that the tensor network message passing approach relies essentially on the same approximation as the method by Kirkley et al.
- Score: 45.29832252085144
- Abstract: Belief Propagation (BP) decoders for quantum error correcting codes are not always precise. There is a growing interest in the application of tensor networks to quantum error correction in general and, in particular, in degenerate quantum maximum likelihood decoding and the tensor network decoder. We develop a unified view to make the generalized BP proposal by Kirkley et al. explicit on arbitrary graphical models. We derive BP schemes and provide inference equations for BP on loopy tensor networks and, more generally, loopy graphical models. In doing so we introduce a tree-equivalent approach which allows us to relate the tensor network BlockBP to a generalized BP for loopy networks. Moreover, we show that the tensor network message passing approach relies essentially on the same approximation as the method by Kirkley et al. This allows us to make tensor network message passing available for degenerate quantum maximum likelihood decoding. Our method and results are key to obtaining guidelines on how the trade-off between complexity and decoding accuracy plays out between BP and tensor network decoders. Finally, we discuss how the tree-equivalent method and the method by Kirkley et al. can justify why message scheduling improves the performance of BP.
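For readers unfamiliar with the message-passing machinery the abstract builds on, the following is a minimal sketch of plain sum-product BP on a small loopy factor graph (a 3-cycle of binary variables). The toy potentials, flooding schedule, and iteration count are illustrative assumptions and are not taken from the paper; on loopy graphs the resulting beliefs are only approximations to the true marginals, which is exactly the regime the generalized schemes above target.

```python
# Minimal sketch: sum-product BP on a loopy pairwise factor graph (a 3-cycle).
# Toy model and potentials are made up for illustration only.
import numpy as np

# Each factor: name -> (variable pair, 2x2 potential table).
factors = {
    "f01": ((0, 1), np.array([[2.0, 1.0], [1.0, 3.0]])),
    "f12": ((1, 2), np.array([[2.0, 1.0], [1.0, 3.0]])),
    "f20": ((2, 0), np.array([[2.0, 1.0], [1.0, 3.0]])),
}
n_vars, n_states = 3, 2

# Messages variable->factor and factor->variable, initialised uniform.
msg_vf = {(v, f): np.ones(n_states) for f, (vs, _) in factors.items() for v in vs}
msg_fv = {(f, v): np.ones(n_states) for f, (vs, _) in factors.items() for v in vs}

for _ in range(100):
    # Factor -> variable: multiply in the incoming message and sum out the other variable.
    for f, (vs, table) in factors.items():
        for i, v in enumerate(vs):
            other = vs[1 - i]
            inc = msg_vf[(other, f)]
            new = table @ inc if i == 0 else table.T @ inc
            msg_fv[(f, v)] = new / new.sum()
    # Variable -> factor: product of messages from all *other* neighbouring factors.
    for f, (vs, _) in factors.items():
        for v in vs:
            prod = np.ones(n_states)
            for g, (ws, _) in factors.items():
                if g != f and v in ws:
                    prod *= msg_fv[(g, v)]
            msg_vf[(v, f)] = prod / prod.sum()

# Beliefs (approximate marginals): product of all incoming factor messages.
for v in range(n_vars):
    belief = np.ones(n_states)
    for f, (vs, _) in factors.items():
        if v in vs:
            belief *= msg_fv[(f, v)]
    print(f"belief for x{v}:", belief / belief.sum())
```

Because the graph contains a loop, the fixed point of these updates is only an approximation; the generalized BP and tree-equivalent constructions discussed in the abstract are designed to control exactly this kind of error.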
Related papers
- Loop Series Expansions for Tensor Networks [0.2796197251957244]
We describe how a loop series expansion can be applied to improve the accuracy of a BP approximation to a tensor network contraction.
We benchmark this proposal for the contraction of iPEPS, either representing the ground state of an AKLT model or with randomly defined tensors.
arXiv Detail & Related papers (2024-09-04T22:22:35Z)
- An almost-linear time decoding algorithm for quantum LDPC codes under circuit-level noise [0.562479170374811]
We introduce the belief propagation plus ordered Tanner forest (BP+OTF) algorithm as an almost-linear time decoder for quantum low-density parity-check codes.
We show that the BP+OTF decoder achieves logical error suppression within an order of magnitude of state-of-the-art inversion-based decoders.
arXiv Detail & Related papers (2024-09-02T19:50:57Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Graph Neural Networks for Enhanced Decoding of Quantum LDPC Codes [6.175503577352742]
We propose a differentiable iterative decoder for quantum low-density parity-check (LDPC) codes.
The proposed algorithm is composed of classical belief propagation (BP) decoding stages and intermediate graph neural network (GNN) layers.
arXiv Detail & Related papers (2023-10-26T19:56:25Z)
- A Scalable Graph Neural Network Decoder for Short Block Codes [49.25571364253986]
We propose a novel decoding algorithm for short block codes based on an edge-weighted graph neural network (EW-GNN).
The EW-GNN decoder operates on the Tanner graph with an iterative message-passing structure.
We show that the EW-GNN decoder outperforms the BP and deep-learning-based BP methods in terms of the decoding error rate.
arXiv Detail & Related papers (2022-11-13T17:13:12Z)
- Graph Neural Networks for Channel Decoding [71.15576353630667]
We showcase competitive decoding performance for various coding schemes, such as low-density parity-check (LDPC) and BCH codes.
The idea is to let a neural network (NN) learn a generalized message passing algorithm over a given graph.
We benchmark our proposed decoder against state-of-the-art in conventional channel decoding as well as against recent deep learning-based results.
arXiv Detail & Related papers (2022-07-29T15:29:18Z)
- Belief propagation for supply networks: Efficient clustering of their factor graphs [0.0]
We consider belief propagation (BP) as an efficient tool for state estimation and optimization problems in supply networks.
We propose a systematic way to cluster loops of factor graphs such that the resulting factor graphs have no additional loops as compared to the original network.
arXiv Detail & Related papers (2022-03-01T14:01:35Z)
- Quantum message-passing algorithm for optimal and efficient decoding [2.3020018305241328]
We expand the understanding, formalism, and applicability of the BPQM algorithm.
We provide the first formal description of the BPQM algorithm in full detail and without any ambiguity.
We show some promising numerical results that indicate that BPQM on factor graphs with cycles can significantly outperform the best possible classical decoder.
arXiv Detail & Related papers (2021-09-16T18:01:12Z)
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in their depth, in time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, truncated max-product belief propagation, and add what is necessary to make it a proper component of a deep learning model (see the min-sum sketch after this list).
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
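The BP-Layers entry above builds on truncated max-product belief propagation. As a rough illustration only (not the paper's differentiable layer), here is a plain-NumPy forward sweep of the equivalent min-sum updates along a 1D chain of labels, e.g. a single stereo scanline; the random cost volume, label count, truncation threshold `tau`, and weight `lam` are made-up assumptions.

```python
# Minimal sketch: one forward sweep of min-sum (log-domain max-product) BP
# on a chain with a truncated linear pairwise cost. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_labels, tau, lam = 6, 5, 2.0, 1.0

# Unary costs D[i, l]: how badly label l fits pixel i (random stand-in data).
D = rng.uniform(0.0, 1.0, size=(n_pixels, n_labels))

# Truncated linear pairwise cost V[l, l'] = lam * min(|l - l'|, tau).
labels = np.arange(n_labels)
V = lam * np.minimum(np.abs(labels[:, None] - labels[None, :]), tau)

# m[i] is the min-sum message sent from pixel i-1 to pixel i.
m = np.zeros((n_pixels, n_labels))
for i in range(1, n_pixels):
    # Candidate cost of each label at pixel i-1 combined with each label at pixel i.
    cand = D[i - 1][:, None] + m[i - 1][:, None] + V
    msg = cand.min(axis=0)
    m[i] = msg - msg.min()  # renormalise so messages stay bounded

# Greedy labelling from unaries plus forward messages (backward sweep omitted).
labels_hat = np.argmin(D + m, axis=1)
print("labels:", labels_hat)
```

A full BP-Layer would run both sweep directions and keep every operation differentiable so the unary and pairwise costs can be learned end-to-end; this sketch only shows the message recursion itself.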
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.