A Practical Guide to the Numerical Implementation of Tensor Networks I:
Contractions, Decompositions and Gauge Freedom
- URL: http://arxiv.org/abs/2202.02138v1
- Date: Fri, 4 Feb 2022 14:10:09 GMT
- Title: A Practical Guide to the Numerical Implementation of Tensor Networks I:
Contractions, Decompositions and Gauge Freedom
- Authors: Glen Evenbly
- Abstract summary: We present an overview of the key ideas and skills necessary to begin implementing tensor network methods numerically.
The topics presented are of key importance to many common tensor network algorithms such as DMRG, TEBD, TRG, PEPS and MERA.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present an overview of the key ideas and skills necessary to begin
implementing tensor network methods numerically, which is intended to
facilitate the practical application of tensor network methods for researchers
who are already versed in their theoretical foundations. These skills
include an introduction to the contraction of tensor networks, to optimal
tensor decompositions, and to the manipulation of gauge degrees of freedom in
tensor networks. The topics presented are of key importance to many common
tensor network algorithms such as DMRG, TEBD, TRG, PEPS and MERA.
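To make the three topics concrete, here is a minimal NumPy sketch written for this summary (not taken from the paper itself); all tensor names and dimensions are illustrative.

```python
import numpy as np

chi = 8                                    # bond dimension (illustrative)
A = np.random.rand(chi, chi, chi)          # two random three-index tensors
B = np.random.rand(chi, chi, chi)

# (1) Contraction: sum over the shared indices j, k of A[i,j,k] * B[j,k,l].
C = np.einsum('ijk,jkl->il', A, B)

# (2) Decomposition: split C with a truncated SVD, keeping the chi_keep
# largest singular values -- the optimal low-rank approximation in the
# Frobenius norm (Eckart-Young theorem).
chi_keep = 4
U, s, Vh = np.linalg.svd(C, full_matrices=False)
U, sV = U[:, :chi_keep], np.diag(s[:chi_keep]) @ Vh[:chi_keep, :]
print('truncation error:', np.linalg.norm(C - U @ sV))

# (3) Gauge freedom: inserting g @ inv(g) on the new internal bond changes
# the individual tensors but leaves the contracted network unchanged.
g = np.random.rand(chi_keep, chi_keep) + np.eye(chi_keep)
assert np.allclose(U @ sV, (U @ g) @ (np.linalg.inv(g) @ sV))
```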
Related papers
- Survey on Computational Applications of Tensor Network Simulations [0.0]
This review aims to clarify which classes of relevant applications have been proposed for which classes of tensor networks.
We intend it to be a high-level tour of tensor network applications that is easy for non-experts to read.
arXiv Detail & Related papers (2024-08-09T11:46:47Z) - Tensor Network Computations That Capture Strict Variationality, Volume Law Behavior, and the Efficient Representation of Neural Network States [0.6148049086034199]
- Tensor Network Computations That Capture Strict Variationality, Volume Law Behavior, and the Efficient Representation of Neural Network States [0.6148049086034199]
We introduce a change of perspective on tensor network states that is defined by the computational graph of the contraction of an amplitude.
The resulting class of states, which we refer to as tensor network functions, inherit the conceptual advantages of tensor network states while removing computational restrictions arising from the need to converge approximate contractions.
We use tensor network functions to compute strict variational estimates of the energy on loopy graphs, analyze their expressive power for ground-states, show that we can capture aspects of volume law time evolution, and provide a mapping of general feed-forward neural nets onto efficient tensor network functions.
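As a loose illustration of evaluating amplitudes through a contraction graph, here is a generic matrix product state example, not the construction from the paper:

```python
import numpy as np

n, chi = 6, 4                              # sites, bond dimension (toy sizes)
# one (chi x chi) matrix per site and per local basis state {0, 1}
tensors = [np.random.rand(2, chi, chi) for _ in range(n)]

def amplitude(bits):
    """Contract the chain left to right for one configuration `bits`."""
    v = np.ones(chi)
    for site, b in zip(tensors, bits):
        v = v @ site[b]                    # pick the matrix for bit b
    return v @ np.ones(chi)

# a single amplitude costs O(n * chi**2), with no global contraction needed
print(amplitude([0, 1, 1, 0, 1, 0]))
```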
arXiv Detail & Related papers (2024-05-06T19:04:13Z) - Simple initialization and parametrization of sinusoidal networks via
their kernel bandwidth [92.25666446274188]
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
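A hedged sketch of the general idea, following the common SIREN-style recipe rather than the paper's exact parametrization; the frequency scale omega0 plays the role of the adjustable bandwidth:

```python
import numpy as np

def init_layer(fan_in, fan_out, omega0, first=False):
    # wider uniform range on the first layer, 1/omega0-scaled afterwards
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / omega0
    W = np.random.uniform(-bound, bound, (fan_in, fan_out))
    return W, np.zeros(fan_out)

def forward(x, layers, omega0):
    h = x
    for W, b in layers[:-1]:
        h = np.sin(omega0 * (h @ W + b))   # sinusoidal activation
    W, b = layers[-1]
    return h @ W + b                       # linear read-out

omega0 = 30.0                              # larger omega0 -> wider bandwidth
layers = [init_layer(1, 64, omega0, first=True),
          init_layer(64, 64, omega0),
          init_layer(64, 1, omega0)]
x = np.linspace(-1, 1, 5).reshape(-1, 1)
print(forward(x, layers, omega0))
```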
arXiv Detail & Related papers (2022-11-26T07:41:48Z) - Robust Training and Verification of Implicit Neural Networks: A
Non-Euclidean Contractive Approach [64.23331120621118]
- Robust Training and Verification of Implicit Neural Networks: A Non-Euclidean Contractive Approach [64.23331120621118]
This paper proposes a theoretical and computational framework for training and robustness verification of implicit neural networks.
We introduce a related embedded network and show that the embedded network can be used to provide an $\ell_\infty$-norm box over-approximation of the reachable sets of the original network.
We apply our algorithms to train implicit neural networks on the MNIST dataset and compare the robustness of our models with the models trained via existing approaches in the literature.
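The embedded-network construction is specific to the paper; the generic interval-bound sketch below is a stand-in that shows what an $\ell_\infty$-norm box over-approximation of a layer's reachable set looks like:

```python
import numpy as np

def relu_layer_box(W, b, lo, hi):
    """Propagate the box [lo, hi] through x -> relu(W x + b)."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    out_lo = Wp @ lo + Wn @ hi + b         # smallest pre-activation
    out_hi = Wp @ hi + Wn @ lo + b         # largest pre-activation
    return np.maximum(out_lo, 0), np.maximum(out_hi, 0)

W = np.random.randn(3, 2)
b = np.zeros(3)
x = np.array([0.5, -0.2])
eps = 0.1                                  # l-infinity perturbation radius
lo, hi = relu_layer_box(W, b, x - eps, x + eps)
print(lo, hi)                              # box containing every reachable output
```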
arXiv Detail & Related papers (2022-08-08T03:13:24Z) - Improvements to Gradient Descent Methods for Quantum Tensor Network
Machine Learning [0.0]
- Improvements to Gradient Descent Methods for Quantum Tensor Network Machine Learning [0.0]
We introduce a 'copy node' method that successfully initializes arbitrary tensor networks.
We present numerical results showing that the combination of techniques presented here produces quantum-inspired tensor network models.
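A plausible reading of a 'copy node' is the generalized Kronecker delta tensor; the sketch below shows that standard object, not necessarily the paper's exact recipe:

```python
import numpy as np

def copy_node(dim, order, noise=0.0):
    """delta[i,i,...,i] = 1, optionally perturbed with small noise."""
    t = np.zeros((dim,) * order)
    for i in range(dim):
        t[(i,) * order] = 1.0
    return t + noise * np.random.randn(*t.shape)

delta = copy_node(dim=3, order=3)
v = np.random.rand(3)
# contracting one leg of the copy node duplicates the vector on a diagonal
out = np.einsum('ijk,i->jk', delta, v)
assert np.allclose(out, np.diag(v))
```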
arXiv Detail & Related papers (2022-03-03T19:00:40Z) - Loop-Free Tensor Networks for High-Energy Physics [0.0]
- Loop-Free Tensor Networks for High-Energy Physics [0.0]
This brief review introduces the reader to tensor network methods, a powerful theoretical and numerical paradigm spawning from condensed matter physics and quantum information science.
arXiv Detail & Related papers (2021-09-24T09:38:45Z) - Semi-supervised Network Embedding with Differentiable Deep Quantisation [81.49184987430333]
- Semi-supervised Network Embedding with Differentiable Deep Quantisation [81.49184987430333]
We develop d-SNEQ, a differentiable quantisation method for network embedding.
d-SNEQ incorporates a rank loss to equip the learned quantisation codes with rich high-order information.
It is able to substantially compress the size of trained embeddings, thus reducing storage footprint and accelerating retrieval speed.
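As a generic stand-in for differentiable quantisation (d-SNEQ's rank loss and exact scheme are not reproduced here), a straight-through-style sketch of compressing embeddings to a small codebook:

```python
import numpy as np

codebook = np.linspace(-1.0, 1.0, 4)       # 2-bit codes per dimension

def quantise(e):
    idx = np.abs(e[..., None] - codebook).argmin(-1)
    return codebook[idx], idx              # idx is what you would store

emb = np.random.randn(5, 8) * 0.5          # toy node embeddings
q, codes = quantise(emb)
# forward uses q; backward (in an autodiff framework) would reuse the
# gradient of emb for q -- the straight-through trick
print('32-bit floats -> 2-bit codes,',
      f'max reconstruction error {np.abs(emb - q).max():.3f}')
```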
arXiv Detail & Related papers (2021-08-20T11:53:05Z) - Tensor-Train Networks for Learning Predictive Modeling of
Multidimensional Data [0.0]
- Tensor-Train Networks for Learning Predictive Modeling of Multidimensional Data [0.0]
A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications.
We show that the weights of a multidimensional regression model can be learned by means of tensor networks, yielding a powerful compact representation.
An algorithm based on alternating least squares has been proposed for approximating the weights in TT-format at reduced computational cost.
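A minimal sketch of regression weights held in TT-format, with the ALS fitting step omitted; all sizes are illustrative:

```python
import numpy as np

d, n, r = 4, 3, 2                          # order, mode size, TT-rank
cores = [np.random.rand(1 if k == 0 else r, n,
                        1 if k == d - 1 else r) for k in range(d)]

def tt_predict(feats):
    """feats: list of d local feature vectors of length n."""
    v = np.ones((1,))
    for G, f in zip(cores, feats):
        v = np.einsum('a,abc,b->c', v, G, f)   # absorb one core at a time
    return v.item()

x = [np.random.rand(n) for _ in range(d)]
print(tt_predict(x))                       # scalar prediction
# storage: O(d * n * r**2) numbers instead of n**d for the dense weights
```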
arXiv Detail & Related papers (2021-01-22T16:14:38Z) - Finite Versus Infinite Neural Networks: an Empirical Study [69.07049353209463]
- Finite Versus Infinite Neural Networks: an Empirical Study [69.07049353209463]
Kernel methods outperform fully-connected finite-width networks.
Centered and ensembled finite networks have reduced posterior variance.
Weight decay and the use of a large learning rate break the correspondence between finite and infinite networks.
arXiv Detail & Related papers (2020-07-31T01:57:47Z) - T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
- T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of an arbitrary shape, which is often seen in neural networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
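A simplified sketch of the shared-basis idea from the abstract; the real T-Basis parametrization is more structured, while this toy version just stores coefficients over a common basis:

```python
import numpy as np

basis = np.random.randn(8, 4, 4)           # 8 shared basis tensors
coeffs = np.random.randn(20, 8)            # 20 tensors, 8 coefficients each

# reconstruct every tensor as a linear combination of the shared basis
tensors = np.einsum('tb,bij->tij', coeffs, basis)
stored = basis.size + coeffs.size
print(f'stored {stored} numbers instead of {tensors.size}')
```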
arXiv Detail & Related papers (2020-07-13T19:03:22Z) - Understanding Generalization in Deep Learning via Tensor Methods [53.808840694241]
- Understanding Generalization in Deep Learning via Tensor Methods [53.808840694241]
We advance the understanding of the relations between the network's architecture and its generalizability from the compression perspective.
We propose a series of intuitive, data-dependent and easily-measurable properties that tightly characterize the compressibility and generalizability of neural networks.
arXiv Detail & Related papers (2020-01-14T22:26:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.