Canonical partial ordering from min-cuts and quantum entanglement in random tensor networks
- URL: http://arxiv.org/abs/2506.23894v1
- Date: Mon, 30 Jun 2025 14:26:40 GMT
- Title: Canonical partial ordering from min-cuts and quantum entanglement in random tensor networks
- Authors: Miao Hu, Ion Nechita
- Abstract summary: We show that the \emph{finite correction} to the entanglement R\'enyi entropy arising from the degeneracy of the min-cuts is given by the number of \emph{order morphisms}. We also show that the number of order morphisms corresponds to moments of a graph-dependent measure which generalizes the free Bessel law in some special cases in free probability theory.
- Score: 5.882601249202242
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The \emph{max-flow min-cut theorem} has been recently used in the theory of random tensor networks in quantum information theory, where it is helpful for computing the behavior of important physical quantities, such as the entanglement entropy. In this paper, we extend the max-flow min-cut theorem to a relation among different \emph{partial orders} on the set of vertices of a network and introduce a new partial order for the vertices based on the \emph{min-cut structure} of the network. We apply the extended max-flow min-cut theorem to random tensor networks and find that the \emph{finite correction} to the entanglement R\'enyi entropy arising from the degeneracy of the min-cuts is given by the number of \emph{order morphisms} from the min-cut partial order to the partial order induced by non-crossing partitions on the symmetric group. Moreover, we show that the number of order morphisms corresponds to moments of a graph-dependent measure which generalizes the free Bessel law in some special cases in free probability theory.
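As a rough illustration of the mechanism described in the abstract, the sketch below computes the min-cut value between two boundary vertices of a small network and counts how many cuts attain it; this degeneracy count is what the finite correction (via order morphisms) refines. The graph, its capacities, and the use of `networkx` are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (toy data, not from the paper): the min-cut value between two
# boundary vertices sets the leading order of the Renyi entropy in a random
# tensor network, and the number of degenerate min-cuts is what the finite
# correction refines.
from itertools import combinations
import networkx as nx

# Toy network: vertices are tensors, edge capacities play the role of
# log bond dimensions.
G = nx.Graph()
G.add_weighted_edges_from(
    [("A", "u", 1.0), ("u", "v", 1.0), ("v", "B", 1.0),
     ("A", "v", 1.0), ("u", "B", 1.0)],
    weight="capacity",
)

# Max-flow min-cut: minimal total capacity separating boundary regions A and B.
cut_value, _ = nx.minimum_cut(G, "A", "B", capacity="capacity")
print("min-cut value:", cut_value)

# Brute-force enumeration of all A/B vertex bipartitions to count how many
# cuts attain the minimal value (the degeneracy).
inner = [n for n in G if n not in ("A", "B")]
best, count = float("inf"), 0
for r in range(len(inner) + 1):
    for side in combinations(inner, r):
        S = {"A", *side}
        w = sum(d["capacity"] for x, y, d in G.edges(data=True) if (x in S) != (y in S))
        if w < best - 1e-12:
            best, count = w, 1
        elif abs(w - best) < 1e-12:
            count += 1
print("degenerate min-cuts:", count)
```

In this toy example two distinct cuts attain the minimal value 2, so the leading-order coefficient is degenerate; this is exactly the situation in which the finite correction discussed in the abstract becomes relevant.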
Related papers
- On the Minimax Regret of Sequential Probability Assignment via Square-Root Entropy [70.10668953625247]
We show that the minimax regret for the case of no side information can be upper bounded in terms of sequential square-root entropy. For the problem of sequential probability assignment with side information, we develop both upper and lower bounds based on the aforementioned entropy.
arXiv Detail & Related papers (2025-03-22T17:26:34Z) - Genus expansion for non-linear random matrix ensembles with applications to neural networks [3.801509221714223]
We present a unified approach to studying certain non-linear random matrix ensembles and associated random neural networks. We use a novel series expansion for neural networks which generalizes Faà di Bruno's formula to an arbitrary number of compositions. As an application, we prove several results about neural networks with random weights.
arXiv Detail & Related papers (2024-07-11T12:58:07Z) - A Max-Flow approach to Random Tensor Networks [0.40964539027092906]
We study the entanglement entropy of a random tensor network (RTN) using tools from free probability theory.
One can think of random tensor networks as specific probabilistic models for tensors having a particular geometry dictated by a graph (or network) structure.
arXiv Detail & Related papers (2024-07-02T18:00:01Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - One-step replica symmetry breaking in the language of tensor networks [0.913755431537592]
We develop an exact mapping between the one-step replica symmetry breaking cavity method and tensor networks.
The two schemes come with complementary mathematical and numerical toolboxes that could be leveraged to improve the respective states of the art.
arXiv Detail & Related papers (2023-06-26T18:42:51Z) - A singular Riemannian geometry approach to Deep Neural Networks I. Theoretical foundations [77.86290991564829]
Deep Neural Networks are widely used for solving complex problems in several scientific areas, such as speech recognition, machine translation, and image analysis.
We study a particular sequence of maps between manifolds, with the last manifold of the sequence equipped with a Riemannian metric.
We investigate the theoretical properties of the maps in such a sequence, focusing in particular on maps implementing neural networks of practical interest.
arXiv Detail & Related papers (2021-12-17T11:43:30Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Emergent Einstein Equation in p-adic CFT Tensor Networks [6.127256542161883]
We show that a deformed Bruhat-Tits tree satisfies an emergent graph Einstein equation in a unique way.
This could provide new insights into the understanding of gravitational dynamics potentially encoded in more general tensor networks.
arXiv Detail & Related papers (2021-02-24T02:03:38Z) - Infinitely Wide Tensor Networks as Gaussian Process [1.7894377200944511]
In this paper, we show the equivalence between infinitely wide tensor networks and Gaussian processes.
We implement the Gaussian Process corresponding to the infinite limit tensor networks and plot the sample paths of these models.
arXiv Detail & Related papers (2021-01-07T02:29:15Z) - T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of an arbitrary shape, which is often seen in Neural Networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z) - Directional convergence and alignment in deep learning [38.73942298289583]
We show that although the minimizers of cross-entropy and related classification losses are at infinity, network weights learned by gradient flow converge in direction.
This proof holds for deep homogeneous networks allowing for ReLU, max-pooling, linear, and convolutional layers.
arXiv Detail & Related papers (2020-06-11T17:50:11Z) - Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)
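As a loose companion to the last entry, the sketch below sets up a finite-dimensional convex surrogate of regularized two-layer ReLU training in the spirit of the Pilanci-Ergen convex reformulation that this line of work builds on. Activation patterns are subsampled rather than fully enumerated, and the data, the regularization weight `beta`, and the use of `cvxpy` are illustrative assumptions rather than the paper's exact program.

```python
# Hedged sketch (illustrative assumptions, not the paper's exact program):
# a convex surrogate for regularized two-layer ReLU training with subsampled
# activation patterns, in the spirit of the Pilanci-Ergen reformulation.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d, beta = 20, 3, 0.1
X = rng.standard_normal((n, d))            # toy inputs
y = np.sign(X @ rng.standard_normal(d))    # toy targets

# Sample ReLU activation patterns D_i = diag(1[X u >= 0]) from random directions u.
patterns = {tuple((X @ rng.standard_normal(d) >= 0).astype(int)) for _ in range(50)}
Ds = [np.diag(p) for p in patterns]

# One pair of weight vectors per sampled pattern.
V = [cp.Variable(d) for _ in Ds]
W = [cp.Variable(d) for _ in Ds]

residual = sum(D @ X @ (v - w) for D, v, w in zip(Ds, V, W)) - y
objective = 0.5 * cp.sum_squares(residual) + beta * sum(
    cp.norm(v, 2) + cp.norm(w, 2) for v, w in zip(V, W)
)

# Each weight vector must be consistent with the activation pattern it encodes.
constraints = []
for D, v, w in zip(Ds, V, W):
    A = (2 * D - np.eye(n)) @ X
    constraints += [A @ v >= 0, A @ w >= 0]

prob = cp.Problem(cp.Minimize(objective), constraints)
prob.solve()
print("optimal value of the convex surrogate:", prob.value)
```

The group-norm penalty over the per-pattern variables makes this a second-order cone program; enumerating all activation patterns instead of sampling them yields a convex problem whose size grows with the number of hyperplane arrangements of the data.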