Hyper-Invariant MERA: Approximate Holographic Error Correction Codes with Power-Law Correlations
- URL: http://arxiv.org/abs/2103.08631v1
- Date: Mon, 15 Mar 2021 18:12:54 GMT
- Title: Hyper-Invariant MERA: Approximate Holographic Error Correction Codes with Power-Law Correlations
- Authors: ChunJun Cao, Jason Pollack, Yixu Wang
- Abstract summary: We consider a class of holographic tensor networks that are efficiently contractible variational ansätze.
When the network consists of a single type of tensor that also acts as an erasure correction code, we show that it cannot simultaneously be locally contractible and sustain power-law correlation functions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider a class of holographic tensor networks that are efficiently
contractible variational ansätze, manifestly (approximate) quantum error
correction codes, and can support power-law correlation functions. In the case
where the network consists of a single type of tensor that also acts as an
erasure correction code, we show that it cannot simultaneously be locally
contractible and sustain power-law correlation functions. Motivated by this
no-go theorem, and by the desirability of local contractibility for an efficient
variational ansatz, we provide guidelines for constructing networks consisting
of multiple types of tensors that can support power-law correlations. We also
provide an explicit construction of one such network, which approximates the
holographic HaPPY pentagon code in the limit where the variational parameters
are taken to be small.
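For context on the HaPPY pentagon code referenced above, the following is a minimal numpy sketch (our illustration, not code from the paper; the helper names and verification strategy are assumptions) that builds the encoding isometry of the [[5,1,3]] code and numerically checks the perfect-tensor property of the six-leg tensor from which the exact pentagon code is assembled:

```python
import numpy as np
from functools import reduce
from itertools import combinations

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def kron_all(ops):
    return reduce(np.kron, ops)

# Stabilizer generators of the [[5,1,3]] code: XZZXI and its cyclic shifts.
base = [X, Z, Z, X, I2]
gens = [kron_all([base[(i - s) % 5] for i in range(5)]) for s in range(4)]

# Projector onto the code space and the two logical codewords.
P = reduce(np.matmul, [(np.eye(32) + g) / 2 for g in gens])
zero = np.zeros(32); zero[0] = 1.0
zero_L = P @ zero
zero_L /= np.linalg.norm(zero_L)
one_L = kron_all([X] * 5) @ zero_L      # logical X is X on all five qubits

# Encoding isometry: one logical qubit -> five physical qubits.
V = np.column_stack([zero_L, one_L])    # shape (32, 2)
assert np.allclose(V.conj().T @ V, np.eye(2))

# View V as a 6-leg tensor (5 physical legs + 1 logical leg) and check the
# perfect-tensor property: every 3-vs-3 bipartition of the legs gives a map
# with a flat singular spectrum, i.e. proportional to a unitary.
T = V.reshape((2,) * 6)
for legs_in in combinations(range(6), 3):
    legs_out = [a for a in range(6) if a not in legs_in]
    M = np.transpose(T, list(legs_in) + legs_out).reshape(8, 8)
    s = np.linalg.svd(M, compute_uv=False)
    assert np.allclose(s, s[0])
print("six-leg pentagon tensor is perfect")
```

The exact code tiles this tensor over a hyperbolic tessellation; the multi-tensor construction described in the abstract deforms it by small variational parameters precisely so that power-law correlations can be sustained.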
Related papers
- Bulk-boundary correspondence from hyper-invariant tensor networks [0.0]
We introduce a tensor network designed to faithfully simulate the AdS/CFT correspondence, akin to the multi-scale entanglement renormalization ansatz (MERA).
This framework accurately reproduces the boundary conformal field theory (CFT) two- and three-point correlation functions, while considering the image of any bulk operator.
arXiv Detail & Related papers (2024-09-03T16:24:18Z)
- Tensor Network Computations That Capture Strict Variationality, Volume Law Behavior, and the Efficient Representation of Neural Network States [0.6148049086034199]
We introduce a change of perspective on tensor network states that is defined by the computational graph of the contraction of an amplitude.
The resulting class of states, which we refer to as tensor network functions, inherits the conceptual advantages of tensor network states while removing computational restrictions arising from the need to converge approximate contractions.
We use tensor network functions to compute strict variational estimates of the energy on loopy graphs, analyze their expressive power for ground-states, show that we can capture aspects of volume law time evolution, and provide a mapping of general feed-forward neural nets onto efficient tensor network functions.
arXiv Detail & Related papers (2024-05-06T19:04:13Z)
- Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
arXiv Detail & Related papers (2023-04-07T13:41:08Z)
- Holographic Codes from Hyperinvariant Tensor Networks [70.31754291849292]
We show that a new class of exact holographic codes, extending the previously proposed hyperinvariant tensor networks into quantum codes, produces the correct boundary correlation functions.
This approach yields a dictionary between logical states in the bulk and the critical renormalization group flow of boundary states.
arXiv Detail & Related papers (2023-04-05T20:28:04Z)
- Hyper-optimized approximate contraction of tensor networks with arbitrary geometry [0.0]
We describe how to approximate tensor network contraction through bond compression on arbitrary graphs.
In particular, we introduce a hyper-optimization over the compression and contraction strategy itself to minimize error and cost.
arXiv Detail & Related papers (2022-06-14T17:59:16Z)
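A minimal numpy sketch of the bond-compression primitive underlying the entry above (the compress_bond helper and the plain SVD truncation are illustrative assumptions; the paper hyper-optimizes over compression and contraction strategies rather than using this bare version):

```python
import numpy as np

def compress_bond(A, B, chi):
    """Truncate the bond shared by A (last axis) and B (first axis) to dimension chi.

    This is a plain SVD truncation of the pairwise contraction, shown only to
    illustrate what "bond compression" means for approximate contraction.
    """
    left_shape, right_shape = A.shape[:-1], B.shape[1:]
    M = A.reshape(-1, A.shape[-1]) @ B.reshape(B.shape[0], -1)  # contract the shared bond
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    k = min(chi, len(s))
    sqrt_s = np.sqrt(s[:k])
    A_new = (U[:, :k] * sqrt_s).reshape(*left_shape, k)
    B_new = (sqrt_s[:, None] * Vh[:k, :]).reshape(k, *right_shape)
    return A_new, B_new

# Example: compress an 8-dimensional shared bond down to 4.
A = np.random.rand(3, 5, 8)
B = np.random.rand(8, 4, 2)
A2, B2 = compress_bond(A, B, chi=4)
print(A2.shape, B2.shape)  # (3, 5, 4) (4, 4, 2)
```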
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- A Convergence Theory Towards Practical Over-parameterized Deep Neural Networks [56.084798078072396]
We take a step towards closing the gap between theory and practice by significantly improving the known theoretical bounds on both the network width and the convergence time.
We show that convergence to a global minimum is guaranteed for networks whose width is quadratic in the sample size and linear in the depth, in time logarithmic in both.
Our analysis and convergence bounds are derived via the construction of a surrogate network with fixed activation patterns that can be transformed at any time to an equivalent ReLU network of a reasonable size.
arXiv Detail & Related papers (2021-01-12T00:40:45Z)
- Improve Generalization and Robustness of Neural Networks via Weight Scale Shifting Invariant Regularizations [52.493315075385325]
We show that a family of regularizers, including weight decay, is ineffective at penalizing the intrinsic norms of weights for networks with homogeneous activation functions.
We propose an improved regularizer that is invariant to weight scale shifting and thus effectively constrains the intrinsic norm of a neural network.
arXiv Detail & Related papers (2020-08-07T02:55:28Z)
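A quick numerical illustration of the scale-shifting degeneracy described in the entry above (a toy sketch with made-up shapes, not that paper's regularizer): for a network with a homogeneous activation such as ReLU, rescaling one layer by c and the next by 1/c leaves the realized function unchanged while changing the weight-decay penalty arbitrarily, which is why weight decay alone cannot pin down the intrinsic norm.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8))   # first-layer weights (hypothetical shapes)
W2 = rng.standard_normal((1, 16))   # second-layer weights
x = rng.standard_normal((8, 32))    # a batch of inputs

def f(W1, W2, x):
    # Two-layer network with a homogeneous (ReLU) activation.
    return W2 @ np.maximum(W1 @ x, 0.0)

c = 10.0
W1s, W2s = c * W1, W2 / c           # scale-shifted weights

# The realized function is unchanged ...
assert np.allclose(f(W1, W2, x), f(W1s, W2s, x))

# ... but the standard weight-decay penalty is not; a scale-shift-invariant
# regularizer would give the same value for both parameterizations.
penalty = lambda A, B: (A ** 2).sum() + (B ** 2).sum()
print(penalty(W1, W2), penalty(W1s, W2s))
```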
- Molecule Property Prediction and Classification with Graph Hypernetworks [113.38181979662288]
We show that the replacement of the underlying networks with hypernetworks leads to a boost in performance.
A major difficulty in the application of hypernetworks is their lack of stability.
A recent work has tackled the training instability of hypernetworks in the context of error correcting codes.
arXiv Detail & Related papers (2020-02-01T16:44:34Z)