Gauging tensor networks with belief propagation
- URL: http://arxiv.org/abs/2306.17837v4
- Date: Tue, 5 Dec 2023 19:12:28 GMT
- Title: Gauging tensor networks with belief propagation
- Authors: Joseph Tindall and Matthew T. Fishman
- Abstract summary: We introduce a new algorithm for gauging tensor networks using belief propagation.
We show that this method is closely related to known tensor network gauging methods.
We present numerical evidence and scaling arguments that this algorithm is faster than existing gauging algorithms.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effectively compressing and optimizing tensor networks requires reliable
methods for fixing the latent degrees of freedom of the tensors, known as the
gauge. Here we introduce a new algorithm for gauging tensor networks using
belief propagation, a method that was originally formulated for performing
statistical inference on graphical models and has recently found applications
in tensor network algorithms. We show that this method is closely related to
known tensor network gauging methods. It has the practical advantage, however,
that existing belief propagation implementations can be repurposed for tensor
network gauging, and that belief propagation is a very simple algorithm based
on just tensor contractions so it can be easier to implement, optimize, and
generalize. We present numerical evidence and scaling arguments that this
algorithm is faster than existing gauging algorithms, demonstrating its usage
on structured, unstructured, and infinite tensor networks. Additionally, we
apply this method to improve the accuracy of the widely used simple update gate
evolution algorithm.
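Since the abstract stresses that belief propagation is built from plain tensor contractions, the following is a minimal sketch (Python/NumPy, not the authors' implementation) of BP message passing on the norm network of a small open-boundary MPS, followed by a schematic gauging step. The tensor shapes, the parallel update schedule, and the square-root gauging step are illustrative assumptions based on the abstract.

```python
# Minimal sketch (not the authors' code): BP messages on the norm network of a
# 3-site open-boundary MPS. Each message is a D x D matrix on a directed bond,
# updated by contracting a site tensor with its conjugate and the incoming
# messages from all other bonds.
import numpy as np

rng = np.random.default_rng(0)
D, d = 4, 2                                   # bond / physical dimensions (assumed)
A = [rng.normal(size=(d, D)),                 # site 0: (phys, right)
     rng.normal(size=(D, d, D)),              # site 1: (left, phys, right)
     rng.normal(size=(D, d))]                 # site 2: (left, phys)

msgs = {(0, 1): np.eye(D), (1, 0): np.eye(D),
        (1, 2): np.eye(D), (2, 1): np.eye(D)}  # one message per directed bond

def bp_sweep(msgs):
    new = {}
    new[(0, 1)] = np.einsum('pa,pb->ab', A[0], A[0].conj())
    new[(2, 1)] = np.einsum('ap,bp->ab', A[2], A[2].conj())
    new[(1, 2)] = np.einsum('ac,apb,cpd->bd', msgs[(0, 1)], A[1], A[1].conj())
    new[(1, 0)] = np.einsum('bd,apb,cpd->ac', msgs[(2, 1)], A[1], A[1].conj())
    return {k: m / np.linalg.norm(m) for k, m in new.items()}  # keep messages well scaled

for _ in range(10):
    msgs = bp_sweep(msgs)

def psd_sqrt(m):
    # square root of a (numerically) positive semidefinite message
    w, u = np.linalg.eigh(0.5 * (m + m.conj().T))
    return (u * np.sqrt(np.clip(w, 0.0, None))) @ u.conj().T

# Schematic gauging step: absorbing the square roots of the converged messages
# into the bonds brings the network into a canonical (Vidal-like) gauge.
roots = {bond: psd_sqrt(m) for bond, m in msgs.items()}
print(roots[(0, 1)].shape)
```

On a tree-like network such as this chain the messages reach their fixed point after a couple of sweeps; on loopy networks the same update is simply iterated to approximate convergence.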
Related papers
- Approximate Contraction of Arbitrary Tensor Networks with a Flexible and Efficient Density Matrix Algorithm [8.329034093208826]
We introduce a method to efficiently approximate tensor network contractions using low-rank approximations.
The proposed algorithm has the flexibility to incorporate a large portion of the environment when performing low-rank approximations.
arXiv Detail & Related papers (2024-06-14T07:13:52Z)
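The entry above approximates contractions through low-rank approximations; as a generic illustration of the low-rank truncation step (a plain truncated SVD on toy data, not the paper's density matrix algorithm), consider:

```python
# Illustrative low-rank truncation of a matricised tensor via SVD; this is a
# generic building block, not the specific density-matrix algorithm of the paper.
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(64, 64))          # a matricised tensor environment (toy data)
chi = 16                               # maximum kept rank (assumed truncation parameter)

U, S, Vh = np.linalg.svd(M, full_matrices=False)
M_lowrank = (U[:, :chi] * S[:chi]) @ Vh[:chi, :]

# The truncation error is controlled by the discarded singular values.
err = np.sqrt(np.sum(S[chi:] ** 2)) / np.linalg.norm(M)
print(f"relative truncation error: {err:.3e}")
```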
- Towards Efficient Verification of Quantized Neural Networks [9.352320240912109]
Quantization replaces floating point arithmetic with integer arithmetic in deep neural network models.
We show how efficiency can be improved by utilizing gradient-based search methods and bound-propagation techniques.
arXiv Detail & Related papers (2023-12-20T00:43:13Z)
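As a toy illustration of the float-to-integer replacement mentioned in the entry above (a generic symmetric int8 scheme, not the paper's verification method):

```python
# Toy uniform quantization of a weight tensor to int8; a generic illustration of
# replacing floating point values with integers, not the paper's approach.
import numpy as np

def quantize_int8(x):
    scale = np.max(np.abs(x)) / 127.0            # symmetric per-tensor scale (assumed scheme)
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(2).normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(w)
print(np.max(np.abs(w - dequantize(q, s))))      # rounding error is bounded by scale / 2
```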
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Sinusoidal neural networks, which use sinusoidal activation functions, have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
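A minimal sinusoidal layer in the spirit of the networks described in the entry above; the uniform initialisation and the specific frequency scale w0 are illustrative assumptions, with w0 playing the role of the adjustable bandwidth mentioned in the summary:

```python
# Toy sinusoidal layer: a dense map followed by sin, with a frequency scale w0
# that controls how rapidly the features oscillate. Not the paper's
# proposed initialization, just a generic sketch.
import numpy as np

rng = np.random.default_rng(3)

def sinusoidal_layer(x, n_out, w0=6.0):
    n_in = x.shape[-1]
    W = rng.uniform(-1.0, 1.0, size=(n_in, n_out)) / n_in
    b = rng.uniform(-np.pi, np.pi, size=n_out)
    return np.sin(w0 * (x @ W) + b)

x = rng.normal(size=(5, 2))        # a batch of 5 two-dimensional inputs
h = sinusoidal_layer(x, n_out=16)  # larger w0 -> higher-frequency features
print(h.shape)
```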
- Stack operation of tensor networks [10.86105335102537]
We propose a mathematically rigorous definition for the tensor network stack approach.
We illustrate the main ideas with matrix product state based machine learning as an example.
arXiv Detail & Related papers (2022-03-28T12:45:13Z)
- Improvements to Gradient Descent Methods for Quantum Tensor Network Machine Learning [0.0]
We introduce a 'copy node' method that successfully initializes arbitrary tensor networks.
We present numerical results showing that the combination of techniques presented here produces quantum-inspired tensor network models.
arXiv Detail & Related papers (2022-03-03T19:00:40Z)
- Training Certifiably Robust Neural Networks with Efficient Local Lipschitz Bounds [99.23098204458336]
Certified robustness is a desirable property for deep neural networks in safety-critical applications.
We show that our method consistently outperforms state-of-the-art methods on the MNIST and TinyImageNet datasets.
arXiv Detail & Related papers (2021-11-02T06:44:10Z)
- DeepReduce: A Sparse-tensor Communication Framework for Distributed Deep Learning [79.89085533866071]
This paper introduces DeepReduce, a versatile framework for the compressed communication of sparse tensors.
DeepReduce decomposes tensors into two sets, values and indices, and allows both independent and combined compression of these sets.
Our experiments with large real models demonstrate that DeepReduce transmits less data and imposes lower computational overhead than existing methods.
arXiv Detail & Related papers (2021-02-05T11:31:24Z)
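A schematic of the values/indices decomposition described in the entry above (a generic sketch, not the DeepReduce implementation):

```python
# Split a sparse gradient tensor into its nonzero values and their flat indices,
# which can then be compressed separately before communication. Generic sketch only.
import numpy as np

rng = np.random.default_rng(4)
grad = rng.normal(size=(8, 8))
grad[np.abs(grad) < 1.0] = 0.0                  # sparsify (e.g. after top-k selection)

flat = grad.ravel()
indices = np.flatnonzero(flat)                  # index set, e.g. delta- or run-length encodable
values = flat[indices].astype(np.float16)       # value set, e.g. quantised independently

# Receiver-side reconstruction from the two (independently compressed) sets.
recon = np.zeros_like(flat)
recon[indices] = values.astype(flat.dtype)
print(indices.size, np.allclose(recon.reshape(grad.shape), grad, atol=1e-2))
```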
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in their depth, within time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming [97.40955121478716]
We propose a first-order dual SDP algorithm that requires memory only linear in the total number of network activations.
We significantly improve L-inf verified robust accuracy on two benchmark networks, from 1% to 88% and from 6% to 40%, respectively.
We also demonstrate tight verification of a quadratic stability specification for the decoder of a variational autoencoder.
arXiv Detail & Related papers (2020-10-22T12:32:29Z)
- Adaptive Learning of Tensor Network Structures [6.407946291544721]
We leverage the TN formalism to develop a generic and efficient adaptive algorithm to learn the structure and the parameters of a TN from data.
Our algorithm can adaptively identify TN structures with a small number of parameters that effectively optimize any differentiable objective function.
arXiv Detail & Related papers (2020-08-12T16:41:56Z)
- Distance-Based Regularisation of Deep Networks for Fine-Tuning [116.71288796019809]
We develop an algorithm that constrains a hypothesis class to a small sphere centred on the initial pre-trained weights.
Empirical evaluation shows that our algorithm works well, corroborating our theoretical results.
arXiv Detail & Related papers (2020-02-19T16:00:47Z)
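A rough sketch of the constraint described in the entry above, implemented as a projection onto an L2 ball around the pre-trained weights after each update; the radius and the project-every-step scheme are illustrative assumptions, not the paper's exact algorithm:

```python
# Keep fine-tuned weights within a small sphere centred on the pre-trained weights
# by projecting back onto the ball after every (dummy) update step.
import numpy as np

def project_to_ball(w, w0, radius):
    delta = w - w0
    dist = np.linalg.norm(delta)
    return w if dist <= radius else w0 + delta * (radius / dist)

rng = np.random.default_rng(5)
w0 = rng.normal(size=100)                  # pre-trained weights
w = w0.copy()
for _ in range(10):                        # stand-in for fine-tuning iterations
    w -= 0.1 * rng.normal(size=100)        # stand-in for a gradient step
    w = project_to_ball(w, w0, radius=0.5)
print(np.linalg.norm(w - w0) <= 0.5 + 1e-12)
```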