Optimization at the boundary of the tensor network variety
- URL: http://arxiv.org/abs/2006.16963v2
- Date: Tue, 25 May 2021 09:29:43 GMT
- Title: Optimization at the boundary of the tensor network variety
- Authors: Matthias Christandl, Fulvio Gesmundo, Daniel Stilck Franca, Albert H. Werner
- Abstract summary: Tensor network states form a variational ansatz class widely used in the study of quantum many-body systems.
Recent work has shown that states on the boundary of this variety can yield more efficient representations for states of physical interest.
We show how to optimize over this class in order to find ground states of local Hamiltonians.
- Score: 2.1839191255085995
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor network states form a variational ansatz class widely used, both
analytically and numerically, in the study of quantum many-body systems. It is
known that if the underlying graph contains a cycle, e.g. as in projected
entangled pair states (PEPS), then the set of tensor network states of given
bond dimension is not closed. Its closure is the tensor network variety. Recent
work has shown that states on the boundary of this variety can yield more
efficient representations for states of physical interest, but it remained
unclear how to systematically find and optimize over such representations. We
address this issue by defining a new ansatz class of states that includes
states at the boundary of the tensor network variety of given bond dimension.
We show how to optimize over this class in order to find ground states of local
Hamiltonians by only slightly modifying standard algorithms and code for tensor
networks. We apply this new method to a variety of models and observe
favorable energies and runtimes when compared with standard tensor network
methods.
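For intuition on why these sets fail to be closed, a standard illustration from the tensor literature (not specific to this paper) is the three-qubit W state: every tensor inside the limit below has rank at most 2, yet the limit itself has tensor rank 3, so it lies on the boundary of the rank-2 set without belonging to it.

$$|W\rangle \;=\; |001\rangle + |010\rangle + |100\rangle \;=\; \lim_{\epsilon\to 0}\, \frac{1}{\epsilon}\left[\big(|0\rangle+\epsilon|1\rangle\big)^{\otimes 3} - |0\rangle^{\otimes 3}\right].$$

Degenerations of exactly this kind, applied to the tensors of a network whose graph contains a cycle, produce states on the boundary of the tensor network variety.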
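The claim that only slight modifications of standard tensor network code are needed can be pictured against the generic baseline such algorithms share: treat the network's tensors as variational parameters and minimize the Rayleigh quotient of the Hamiltonian. Below is a minimal, self-contained sketch of that baseline (plain gradient descent on a small open-boundary MPS for a transverse-field Ising chain, contracted exactly). It is illustrative only, does not implement the paper's boundary ansatz, and all names and parameter values are our own assumptions.

```python
# Minimal sketch of generic variational ground-state search over an MPS.
# Illustrative baseline only -- NOT the paper's boundary ansatz; names and
# parameter values below are our own assumptions.
import numpy as np

N, D, d = 6, 2, 2  # chain length, bond dimension, physical dimension

def mps_to_vector(tensors):
    """Contract an open-boundary MPS into the full d**N state vector (small N only)."""
    v = tensors[0]                                      # left tensor, shape (d, D)
    for A in tensors[1:-1]:                             # bulk tensors, shape (D, d, D)
        v = np.tensordot(v, A, axes=([-1], [0]))
    v = np.tensordot(v, tensors[-1], axes=([-1], [0]))  # right tensor, shape (D, d)
    return v.reshape(-1)

def energy(tensors, H):
    """Rayleigh quotient <psi|H|psi> / <psi|psi> of the MPS state."""
    psi = mps_to_vector(tensors)
    psi = psi / np.linalg.norm(psi)
    return float(psi @ H @ psi)

def build_tfim(n, g=1.0):
    """Dense transverse-field Ising Hamiltonian: H = -sum ZZ - g * sum X."""
    I = np.eye(2)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.diag([1., -1.])
    def term(ops):  # Kronecker product over the chain, identity on unlisted sites
        M = np.array([[1.0]])
        for k in range(n):
            M = np.kron(M, ops.get(k, I))
        return M
    H = -sum(term({i: Z, i + 1: Z}) for i in range(n - 1))
    H = H - g * sum(term({i: X}) for i in range(n))
    return H

shapes = [(d, D)] + [(D, d, D)] * (N - 2) + [(D, d)]

def unpack(flat):
    """Reshape a flat parameter vector back into the list of MPS tensors."""
    tensors, i = [], 0
    for s in shapes:
        size = int(np.prod(s))
        tensors.append(flat[i:i + size].reshape(s))
        i += size
    return tensors

rng = np.random.default_rng(0)
x = np.concatenate([rng.standard_normal(s).ravel() for s in shapes])
H = build_tfim(N)
eps, lr = 1e-5, 0.05
for step in range(400):  # forward-difference gradient descent on the energy
    e0 = energy(unpack(x), H)
    grad = np.zeros_like(x)
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        grad[j] = (energy(unpack(xp), H) - e0) / eps
    x -= lr * grad
print("variational energy:", energy(unpack(x), H))
print("exact ground energy:", np.linalg.eigvalsh(H)[0])
```

For realistic bond dimensions one would replace the exact contraction and finite-difference gradient with environment-based contractions and local (DMRG-style) updates; the point here is only the shape of the optimization loop that the paper proposes to modify.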
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Tensor Network Representation and Entanglement Spreading in Many-Body
Localized Systems: A Novel Approach [0.0]
A novel method has been devised to compute the Local Integrals of Motion for a one-dimensional many-body localized system.
A class of optimal unitary transformations is deduced in a tensor-network formalism to diagonalize the Hamiltonian of the specified system.
The method was assessed and found to be both fast and nearly exact.
arXiv Detail & Related papers (2023-12-13T14:28:45Z) - One-step replica symmetry breaking in the language of tensor networks [0.913755431537592]
We develop an exact mapping between the one-step replica symmetry breaking cavity method and tensor networks.
The two schemes come with complementary mathematical and numerical toolboxes that could be leveraged to improve the respective states of the art.
arXiv Detail & Related papers (2023-06-26T18:42:51Z) - Holographic Codes from Hyperinvariant Tensor Networks [70.31754291849292]
We show that a new class of exact holographic codes, extending the previously proposed hyperinvariant tensor networks into quantum codes, produce the correct boundary correlation functions.
This approach yields a dictionary between logical states in the bulk and the critical renormalization group flow of boundary states.
arXiv Detail & Related papers (2023-04-05T20:28:04Z) - On the closedness and geometry of tensor network state sets [5.989041429080286]
Tensor network states (TNS) are a powerful approach for the study of strongly correlated quantum matter.
In practical algorithms, functionals like energy expectation values or overlaps are optimized over certain sets of TNS.
We show that sets of matrix product states (MPS) with open boundary conditions, tree tensor network states (TTNS), and the multiscale entanglement renormalization ansatz (MERA) are always closed.
arXiv Detail & Related papers (2021-07-30T18:09:28Z) - Dimension of Tensor Network varieties [68.8204255655161]
We determine an upper bound on the dimension of the tensor network variety.
A refined upper bound is given in cases relevant for applications, such as varieties of matrix product states and projected entangled pair states.
arXiv Detail & Related papers (2021-01-08T18:24:50Z) - Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks [78.65792427542672]
Dynamic Graph Network (DG-Net) is a complete directed acyclic graph, where the nodes represent convolutional blocks and the edges represent connection paths.
Instead of routing every input through the same fixed path, DG-Net aggregates features dynamically at each node, which gives the network more representational ability.
arXiv Detail & Related papers (2020-10-02T16:50:26Z) - T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors of arbitrary shapes, such as those that commonly arise in neural networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z) - Entanglement and Tensor Networks for Supervised Image Classification [0.0]
We revisit the use of tensor networks for supervised image classification using the MNIST data set of handwritten digits.
We propose a plausible candidate state $|\Sigma_\ell\rangle$ and investigate its entanglement properties.
We conclude that $|\Sigma_\ell\rangle$ is so robustly entangled that it cannot be approximated by the tensor network used in that work.
arXiv Detail & Related papers (2020-07-12T20:09:26Z) - Approximation with Tensor Networks. Part I: Approximation Spaces [0.0]
We study the approximation of functions by tensor networks (TNs).
We show that Lebesgue $L^p$-spaces in one dimension can be identified with tensor product spaces of arbitrary order through tensorization.
We show that functions in these approximation classes do not possess any Besov smoothness.
arXiv Detail & Related papers (2020-06-30T21:32:59Z) - Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.