Tensor Networks contraction and the Belief Propagation algorithm
- URL: http://arxiv.org/abs/2008.04433v2
- Date: Mon, 24 Aug 2020 13:15:20 GMT
- Title: Tensor Networks contraction and the Belief Propagation algorithm
- Authors: Roy Alkabetz, Itai Arad
- Abstract summary: Belief propagation is a well-studied message-passing algorithm that runs over graphical models.
We show how it can be adapted to the world of PEPS tensor networks and used as an approximate contraction scheme.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Belief Propagation is a well-studied message-passing algorithm that runs over
graphical models and can be used for approximate inference and approximation of
local marginals. The resulting approximations are equivalent to the
Bethe-Peierls approximation of statistical mechanics. Here we show how this
algorithm can be adapted to the world of PEPS tensor networks and used as an
approximate contraction scheme. We further show that the resultant
approximation is equivalent to the "mean field" approximation that is used in
the Simple-Update algorithm, thereby showing that the latter is essentially
the Bethe-Peierls approximation. This shows that one of the simplest
approximate contraction algorithms for tensor networks is equivalent to one of
the simplest schemes for approximating marginals in graphical models in
general, and paves the way for using improvements of BP as tensor network
algorithms.
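As an illustration of the message-passing scheme described above, here is a minimal sketch of loopy Belief Propagation on a pairwise graphical model. The interface (dicts of potentials keyed by node and by edge) is an illustrative assumption and does not reproduce the paper's PEPS construction.
```python
# A minimal sketch of loopy Belief Propagation on a pairwise Markov random
# field. The toy interface is an assumption for illustration, not the paper's
# PEPS tensor-network formulation.
import numpy as np

def loopy_bp(unaries, pairwise, edges, n_iters=50):
    """Approximate single-node marginals (beliefs).

    unaries  : dict node -> (d,) nonnegative potential
    pairwise : dict (i, j) -> (d_i, d_j) nonnegative potential, keyed with i < j
    edges    : list of (i, j) pairs with i < j
    """
    nbrs = {v: [] for v in unaries}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    # Messages m[(s, t)] sent from node s to node t, initialised uniform.
    msgs = {}
    for i, j in edges:
        msgs[(i, j)] = np.ones(len(unaries[j])) / len(unaries[j])
        msgs[(j, i)] = np.ones(len(unaries[i])) / len(unaries[i])

    for _ in range(n_iters):
        new_msgs = {}
        for s, t in msgs:
            # Product of the local potential and all incoming messages,
            # excluding the message coming back from the target node t.
            incoming = unaries[s].astype(float)
            for k in nbrs[s]:
                if k != t:
                    incoming = incoming * msgs[(k, s)]
            # Contract through the pairwise potential on edge (s, t).
            psi = pairwise[(s, t)] if (s, t) in pairwise else pairwise[(t, s)].T
            m = psi.T @ incoming
            new_msgs[(s, t)] = m / m.sum()
        msgs = new_msgs

    # Belief at each node: local potential times all incoming messages.
    beliefs = {}
    for v in unaries:
        b = unaries[v].astype(float)
        for k in nbrs[v]:
            b = b * msgs[(k, v)]
        beliefs[v] = b / b.sum()
    return beliefs
```
On a tree these beliefs are exact marginals; on a loopy graph they realize the Bethe-Peierls approximation referred to in the abstract, which is what the paper connects to the Simple-Update scheme for PEPS.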
Related papers
- Approximate Contraction of Arbitrary Tensor Networks with a Flexible and Efficient Density Matrix Algorithm [8.329034093208826]
We introduce a method to efficiently approximate tensor network contractions using low-rank approximations.
The proposed algorithm has the flexibility to incorporate a large portion of the environment when performing low-rank approximations.
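As context for the low-rank approximations mentioned above, the sketch below shows the generic truncated-SVD bond compression that most approximate contraction schemes rely on; it is not the paper's density matrix algorithm, and the function name and interface are hypothetical.
```python
# Generic sketch of the low-rank primitive behind approximate contraction:
# compress a bond with a truncated SVD. Context only, not the paper's method.
import numpy as np

def truncate_bond(theta, chi):
    """Split a matricised two-site tensor `theta` into two factors whose
    shared bond dimension is at most `chi`, keeping the largest singular
    values and reporting the Frobenius-norm truncation error."""
    u, s, vh = np.linalg.svd(theta, full_matrices=False)
    k = min(chi, len(s))
    sqrt_s = np.sqrt(s[:k])
    left = u[:, :k] * sqrt_s             # absorb sqrt of the weights on the left
    right = sqrt_s[:, None] * vh[:k, :]  # and on the right
    error = np.sqrt(np.sum(s[k:] ** 2))  # norm of the discarded part
    return left, right, error
```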
arXiv Detail & Related papers (2024-06-14T07:13:52Z)
- Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
arXiv Detail & Related papers (2023-07-06T15:19:53Z)
- Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
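Purely as background, the sketch below shows the textbook Sinkhorn iteration for the same entropy-regularized optimal transport objective; the paper's contribution is a different (extragradient) solver, and the parameter names here are illustrative.
```python
# Textbook Sinkhorn iteration for entropy-regularised optimal transport.
# Background only; this is not the paper's extragradient algorithm.
import numpy as np

def sinkhorn(mu, nu, cost, eps=0.05, n_iters=500):
    """Approximate the transport plan between histograms `mu` and `nu`
    (each summing to 1) for cost matrix `cost`, with regularisation `eps`."""
    K = np.exp(-cost / eps)              # Gibbs kernel
    u = np.ones_like(mu, dtype=float)
    v = np.ones_like(nu, dtype=float)
    for _ in range(n_iters):
        u = mu / (K @ v)                 # enforce the row marginals
        v = nu / (K.T @ u)               # enforce the column marginals
    plan = u[:, None] * K * v[None, :]
    return plan, float(np.sum(plan * cost))  # plan and its transport cost
```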
arXiv Detail & Related papers (2023-01-30T15:46:39Z)
- Scalable computation of prediction intervals for neural networks via matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z)
- A Bayesian Approach To Graph Partitioning [0.0]
Bayesian inference for learning local graph conductance based on a Gaussian Process (GP).
The new algorithm uses advanced MCMC convergence ideas to achieve scalable and fast convergence to the stationary distribution.
arXiv Detail & Related papers (2022-04-24T05:20:19Z)
- Deep learning via message passing algorithms based on belief propagation [2.931240348160871]
We present a family of BP-based message-passing algorithms with a reinforcement field that biases towards locally entropic distributions.
These algorithms are capable of training multi-layer neural networks with discrete weights and activations with performance comparable to SGD-inspired solutions.
arXiv Detail & Related papers (2021-10-27T16:52:26Z)
- Effective Streaming Low-tubal-rank Tensor Approximation via Frequent Directions [9.43704219585568]
This paper extends a popular matrix sketching technique, namely Frequent Directions, for constructing an efficient and accurate low-tubal-rank tensor approximation.
Specifically, the new algorithm allows the tensor data to be observed slice by slice, but only needs to maintain and incrementally update a much smaller sketch.
The rigorous theoretical analysis shows that the approximation error of the new algorithm can be arbitrarily small when the sketch size grows linearly.
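The streaming primitive this work extends is Frequent Directions for matrices (Liberty, 2013); the sketch below is that classical matrix version, not the low-tubal-rank tensor variant introduced in the paper, and the interface is a hypothetical one.
```python
# Classical matrix Frequent Directions sketching. Background only; the paper's
# contribution is its extension to streaming low-tubal-rank tensors.
import numpy as np

def frequent_directions(rows, d, ell):
    """Maintain an (ell x d) sketch B of a stream of d-dimensional rows so
    that A^T A - B^T B stays provably small in spectral norm."""
    B = np.zeros((ell, d))
    for row in rows:
        empty = np.where(~B.any(axis=1))[0]
        if len(empty) == 0:
            # Sketch is full: shrink all directions by the smallest squared
            # singular value, which zeroes out at least one row of B.
            _, s, vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))
            B = np.zeros((ell, d))
            B[: len(s)] = s[:, None] * vt
            empty = np.where(~B.any(axis=1))[0]
        B[empty[0]] = row
    return B
```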
arXiv Detail & Related papers (2021-08-23T12:53:44Z)
- AIN: Fast and Accurate Sequence Labeling with Approximate Inference Network [75.44925576268052]
The linear-chain Conditional Random Field (CRF) model is one of the most widely-used neural sequence labeling approaches.
Exact probabilistic inference algorithms are typically applied in the training and prediction stages of the CRF model.
We propose to employ a parallelizable approximate variational inference algorithm for the CRF model.
arXiv Detail & Related papers (2020-09-17T12:18:43Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed; this problem is termed Sparse Principal Component Analysis (SPCA).
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem.
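The snippet below illustrates the thresholding idea only: compute the dense leading principal direction and keep its largest-magnitude loadings. The paper's algorithm and its accuracy guarantees are more involved; names and interface here are hypothetical.
```python
# Generic thresholding heuristic for sparse PCA, for illustration only.
import numpy as np

def sparse_pc_by_thresholding(A, k):
    """Return a k-sparse unit vector approximating the leading principal
    direction of the (already centred) data matrix A."""
    cov = A.T @ A                        # covariance up to scaling
    _, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    v = vecs[:, -1]                      # dense leading eigenvector
    keep = np.argsort(np.abs(v))[-k:]    # indices of the k largest loadings
    x = np.zeros_like(v)
    x[keep] = v[keep]
    return x / np.linalg.norm(x)         # re-normalised sparse direction
```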
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
- Mean-Field Approximation to Gaussian-Softmax Integral with Application to Uncertainty Estimation [23.38076756988258]
We propose a new single-model based approach to quantify uncertainty in deep neural networks.
We use a mean-field approximation formula to compute an analytically intractable integral.
Empirically, the proposed approach performs competitively when compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-06-13T07:32:38Z)
- A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step-sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
arXiv Detail & Related papers (2020-03-27T05:13:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.