A Max-Flow approach to Random Tensor Networks
- URL: http://arxiv.org/abs/2407.02559v1
- Date: Tue, 2 Jul 2024 18:00:01 GMT
- Title: A Max-Flow approach to Random Tensor Networks
- Authors: Khurshed Fitter, Faedi Loulidi, Ion Nechita
- Abstract summary: We study the entanglement entropy of a random tensor network (RTN) using tools from free probability theory.
One can think of random tensor networks as specific probabilistic models for tensors whose geometry is dictated by a graph (or network) structure.
- Score: 0.40964539027092906
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We study the entanglement entropy of a random tensor network (RTN) using tools from free probability theory. Random tensor networks are simple toy models that help in understanding the entanglement behavior of a boundary region in the AdS/CFT context. One can think of random tensor networks as specific probabilistic models for tensors having some particular geometry dictated by a graph (or network) structure. We first introduce our model of RTN, obtained by contracting maximally entangled states (corresponding to the edges of the graph) on the tensor product of Gaussian tensors (corresponding to the vertices of the graph). We study the entanglement spectrum of the resulting random state along a given bipartition of the local Hilbert spaces. We provide the limiting eigenvalue distribution of the reduced density operator of the RTN state in the limit of large local dimension. The limiting value is described via a maximum flow optimization problem in a new graph corresponding to the geometry of the RTN and the given bipartition. In the case of series-parallel graphs, we provide an explicit formula for the limiting eigenvalue distribution using classical and free multiplicative convolutions. We discuss the physical implications of our results, which allow us to go beyond the semiclassical regime without any cut assumption, specifically in terms of finite corrections to the average entanglement entropy of the RTN.
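The max-flow connection can be made concrete with a small sketch. The following is a minimal illustration, not the paper's exact construction (the paper optimizes a flow in a modified graph built from the RTN and the bipartition): in the standard RTN picture, the leading term of the entanglement entropy of a boundary region A is the capacity of a minimum cut separating A from its complement, times log of the bond dimension, which max-flow/min-cut duality lets us compute as a maximum flow. The graph encoding, the `mult` edge attribute, and the function name below are illustrative assumptions.

```python
# Minimal sketch of the leading-order min-cut formula S(A) ~ c(A) * log(D),
# where c(A) is the capacity of a minimum cut separating region A from its
# complement and D is the local bond dimension. Not the paper's exact
# construction; encoding and names are illustrative.
import math
import networkx as nx

def rtn_entropy_leading_term(graph, region_a, region_b, bond_dim):
    """graph: nx.Graph whose edges carry a 'mult' attribute counting the
    parallel maximally entangled pairs on that edge (hypothetical encoding);
    region_a / region_b: boundary vertices on each side of the bipartition."""
    g = nx.DiGraph()
    for u, v, data in graph.edges(data=True):
        cap = data.get("mult", 1)
        g.add_edge(u, v, capacity=cap)  # each undirected edge becomes two arcs
        g.add_edge(v, u, capacity=cap)
    # Merge the boundary regions into a single source and sink; edges without
    # a 'capacity' attribute are treated as infinite-capacity by networkx.
    for a in region_a:
        g.add_edge("source", a)
    for b in region_b:
        g.add_edge(b, "sink")
    cut_value, _ = nx.minimum_cut(g, "source", "sink")
    return cut_value * math.log(bond_dim)

# Toy example: a 3-vertex path; the minimum cut crosses a single edge.
g = nx.Graph()
g.add_edge(0, 1, mult=1)
g.add_edge(1, 2, mult=1)
print(rtn_entropy_leading_term(g, [0], [2], bond_dim=2))  # ~ log 2
```

Beyond this leading term, the paper's free probability analysis yields the full limiting eigenvalue distribution of the reduced density operator, and, for series-parallel graphs, closed formulas via classical and free multiplicative convolutions.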
Related papers
- Generalization of Geometric Graph Neural Networks [84.01980526069075]
We study the generalization capabilities of geometric graph neural networks (GNNs)
We prove a generalization gap between the optimal empirical risk and the optimal statistical risk of this GNN.
The most important observation is that the generalization capability can be realized with a single large graph, rather than being limited by the size of the graph as in previous results.
arXiv Detail & Related papers (2024-09-08T18:55:57Z)
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The $scale$ operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes (a toy sketch of such a rescaling follows this entry).
arXiv Detail & Related papers (2022-06-16T13:49:09Z)
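The entry states only that the scale step reshapes the node-wise standard deviation (NStd) distribution; the toy sketch below shows one way such a node-wise rescaling can look. The power parameter `p` and the function name are illustrative assumptions, not ResNorm's actual definition.

```python
# Toy sketch of a node-wise "scale" step (illustrative only): each node's
# feature vector is rescaled by a power of its own standard deviation,
# which pulls the node-wise std (NStd) values across the graph together.
import numpy as np

def scale_node_features(x, p=0.5, eps=1e-6):
    """x: (num_nodes, num_features) array; p in [0, 1] controls reshaping."""
    nstd = x.std(axis=1, keepdims=True)  # per-node standard deviation
    return x / (nstd + eps) ** p         # p=1 gives full standardization

x = np.random.randn(5, 8) * np.array([[0.1], [0.5], [1.0], [2.0], [10.0]])
print(np.round(scale_node_features(x).std(axis=1), 2))  # stds pulled together
```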
- Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility [18.531464406721412]
This article studies the infinite-width limit of deep feedforward neural networks whose weights are dependent.
Each hidden node of the network is assigned a nonnegative random variable that controls the variance of that node's outgoing weights (see the sketch after this entry).
arXiv Detail & Related papers (2022-05-17T09:14:32Z)
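A minimal sketch of the dependent-weights construction described above: every hidden node draws a nonnegative scale variable that multiplies the variance of all of its outgoing weights, making weights in the same row dependent. The half-Cauchy choice below is an assumption for illustration, not necessarily the distribution used in the paper.

```python
# Sketch of one layer with dependent weights: a per-node scale lambda_j
# controls the variance of ALL outgoing weights of hidden node j.
# (The half-Cauchy prior is illustrative, not the paper's choice.)
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 100, 50
lam = np.abs(rng.standard_cauchy(n_in))  # one nonnegative scale per node
w = rng.normal(0.0, 1.0, size=(n_in, n_out)) * np.sqrt(lam)[:, None]
# Row j of w has conditional variance lam[j]; a heavy-tailed lam yields
# heavy-tailed, compressible weight matrices in the wide limit.
print(w.shape, float(np.var(w[np.argmax(lam)])))
```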
- On some theoretical limitations of Generative Adversarial Networks [77.34726150561087]
It is a general assumption that GANs can generate any probability distribution.
We provide a new result, based on Extreme Value Theory, showing that GANs cannot generate heavy-tailed distributions.
arXiv Detail & Related papers (2021-10-21T06:10:38Z)
- Nonperturbative renormalization for the neural network-QFT correspondence [0.0]
We study the concepts of locality and power-counting in this context.
We provide an analysis in terms of the nonperturbative renormalization group using the Wetterich-Morris equation.
Our aim is to provide a useful formalism for investigating the behavior of neural networks beyond the large-width limit.
arXiv Detail & Related papers (2021-08-03T10:36:04Z)
- Infinitely Wide Tensor Networks as Gaussian Process [1.7894377200944511]
In this paper, we show the equivalence of infinitely wide tensor networks and Gaussian processes.
We implement the Gaussian process corresponding to the infinite-limit tensor networks and plot the sample paths of these models (a generic sketch follows this entry).
arXiv Detail & Related papers (2021-01-07T02:29:15Z)
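The specific kernel induced by an infinitely wide tensor network is derived in the paper; the sketch below only illustrates the generic step of drawing and plotting Gaussian process sample paths, with an RBF kernel standing in for the tensor-network kernel.

```python
# Generic sketch of GP sample paths (the RBF kernel is an illustrative
# stand-in for the kernel the paper derives from wide tensor networks).
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 200)
k = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)   # RBF kernel matrix
l = np.linalg.cholesky(k + 1e-6 * np.eye(len(x)))   # jitter for stability
paths = l @ np.random.default_rng(1).normal(size=(len(x), 5))
plt.plot(x, paths)
plt.title("Sample paths of a GP (illustrative RBF kernel)")
plt.show()
```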
- Random Geometric Graphs on Euclidean Balls [2.28438857884398]
We consider a latent space model for random graphs where a node $i$ is associated to a random latent point $X_i$ on the Euclidean unit ball.
For certain link functions, the model generates graphs whose degree distributions have power-law-type tails (see the sketch after this entry).
arXiv Detail & Related papers (2020-10-26T17:21:57Z)
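A minimal sketch of the latent space model above: latent points drawn uniformly in the unit ball, edges drawn via a link function of pairwise distance. The exponential link used here is an illustrative assumption; the paper characterizes which link functions give power-law-type degree tails.

```python
# Sketch of the latent space model: node i gets a latent point X_i in the
# Euclidean unit ball; edges are drawn by a link function of the latent
# positions (the exponential link below is an illustrative choice).
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 3
# Uniform points in the unit ball: normalize Gaussians, then rescale radii.
u = rng.normal(size=(n, d))
pts = u / np.linalg.norm(u, axis=1, keepdims=True)
pts *= rng.uniform(size=(n, 1)) ** (1.0 / d)
# Connect i~j with probability given by a link function of the distance.
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
prob = np.exp(-5.0 * dist)                  # illustrative link function
adj = rng.uniform(size=(n, n)) < prob
adj = np.triu(adj, 1)
adj = adj | adj.T                           # symmetrize, no self-loops
print("mean degree:", adj.sum(axis=1).mean())
```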
- Solving frustrated Ising models using tensor networks [0.0]
We develop a framework to study frustrated Ising models in terms of infinite tensor networks.
We show that optimizing the choice of clusters, including the weight on shared bonds, is crucial for the contractibility of the tensor networks.
We illustrate the power of the method by computing the residual entropy of a frustrated Ising spin system on the kagome lattice with next-next-nearest neighbour interactions.
arXiv Detail & Related papers (2020-06-25T12:39:42Z)
- Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximate framework for fitting such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)