Single-layer tensor network approach for three-dimensional quantum systems
- URL: http://arxiv.org/abs/2405.01489v2
- Date: Tue, 20 Aug 2024 14:42:39 GMT
- Title: Single-layer tensor network approach for three-dimensional quantum systems
- Authors: Illia Lukin, Andrii Sotnikov
- Abstract summary: We utilize the multi-layer structure of these tensor networks to simplify the contraction.
We benchmark our results on the cubic lattice Heisenberg model, reaching the bond dimension D = 7.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Calculation of observables with three-dimensional projected entangled pair states is generally hard, as it requires a contraction of complex multi-layer tensor networks. We utilize the multi-layer structure of these tensor networks to largely simplify the contraction. The proposed approach involves the usage of the layer structure both to simplify the search for the boundary projected entangled pair states and the single-layer mapping of the final corner transfer matrix renormalization group contraction. We benchmark our results on the cubic lattice Heisenberg model, reaching the bond dimension D = 7, and find a good agreement with the previous results.
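The corner transfer matrix renormalization group (CTMRG) contraction mentioned in the abstract can be pictured in isolation. Below is a minimal sketch of one corner-enlargement and truncation step for a generic single-layer 2D tensor network; it only illustrates the CTMRG idea, not the authors' single-layer 3D algorithm, and all shapes and index conventions are assumptions made for the example.

```python
import numpy as np

# Minimal CTMRG-style step for a single-layer 2D tensor network
# (illustrative only; not the paper's 3D single-layer algorithm).
chi, D = 8, 3                              # environment and bond dimensions (assumed)
rng = np.random.default_rng(0)

C = rng.normal(size=(chi, chi))            # corner tensor
T = rng.normal(size=(chi, D, chi))         # edge tensor: (chi, bond, chi)
a = rng.normal(size=(D, D, D, D))          # site tensor: (up, left, down, right)

# Enlarged corner: corner + top edge + left edge + site tensor.
# C[i, j], T_top[j, u, k], T_left[i, l, m], a[u, l, d, r]  ->  M[k, r, m, d]
M = np.einsum('ij,juk,ilm,uldr->krmd', C, T, T, a)

# Renormalize: reshape to a matrix and truncate back to chi with an SVD isometry.
Mmat = M.reshape(chi * D, chi * D)
U, _, _ = np.linalg.svd(Mmat, full_matrices=False)
P = U[:, :chi]                             # isometric projector
C_new = P.T @ Mmat @ P                     # renormalized (chi, chi) corner
C_new /= np.linalg.norm(C_new)             # keep numbers finite over iterations
print(C_new.shape)                         # (8, 8)
```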
Related papers
- Data Topology-Dependent Upper Bounds of Neural Network Widths [52.58441144171022]
We first show that a three-layer neural network can be designed to approximate an indicator function over a compact set.
This is then extended to a simplicial complex, deriving width upper bounds based on its topological structure.
We prove the universal approximation property of three-layer ReLU networks using our topological approach.
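As a hedged one-dimensional illustration of the indicator-approximation idea (a hypothetical toy, not the paper's construction, which uses three layers to handle general compact sets in higher dimensions), a handful of ReLU units already reproduce the indicator of an interval up to a small transition width eps:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def approx_indicator(x, a=0.2, b=0.7, eps=1e-2):
    # One hidden ReLU layer approximating 1_[a, b](x) up to transition width eps.
    return (relu(x - a + eps) - relu(x - a) - relu(x - b) + relu(x - b - eps)) / eps

x = np.linspace(0.0, 1.0, 11)
print(np.round(approx_indicator(x), 3))   # ~1 inside [0.2, 0.7], ~0 outside
```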
arXiv Detail & Related papers (2023-05-25T14:17:15Z)
- Efficient calculation of three-dimensional tensor networks [5.652290685410878]
We propose an efficient algorithm to calculate physical quantities in translationally invariant three-dimensional tensor networks.
For the three-dimensional Ising model, the calculated internal energy and spontaneous magnetization agree with the published results in the literature.
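For orientation, the kind of network such algorithms contract can be written down explicitly; the sketch below builds the standard rank-6 site tensor of the cubic-lattice Ising partition function (a textbook construction with assumed parameters, not the paper's contraction algorithm):

```python
import numpy as np

beta = 0.2                                   # inverse temperature (illustrative)
# Boltzmann weight on a bond and its symmetric square root.
W = np.exp(beta * np.array([[1.0, -1.0], [-1.0, 1.0]]))
w, V = np.linalg.eigh(W)
M = V @ np.diag(np.sqrt(w)) @ V.T            # W = M @ M.T (W is positive definite)

# Rank-6 site tensor of the 3D Ising partition function: one leg per lattice
# direction, with the spin at the site summed out.
A = np.einsum('sa,sb,sc,sd,se,sf->abcdef', M, M, M, M, M, M)
print(A.shape)                               # (2, 2, 2, 2, 2, 2)
```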
arXiv Detail & Related papers (2022-10-18T14:40:09Z)
- The Sample Complexity of One-Hidden-Layer Neural Networks [57.6421258363243]
We study a class of scalar-valued one-hidden-layer networks with inputs bounded in Euclidean norm.
We prove that controlling the spectral norm of the hidden layer weight matrix is insufficient to get uniform convergence guarantees.
We analyze two important settings where a mere spectral norm control turns out to be sufficient.
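For context, the spectral norm in question is simply the largest singular value of the hidden-layer weight matrix. A minimal sketch with assumed shapes of a scalar-valued one-hidden-layer ReLU network and the two norms usually compared:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 50                               # input and hidden widths (assumed)
W = rng.normal(size=(m, d)) / np.sqrt(d)    # hidden-layer weight matrix
v = rng.normal(size=m) / np.sqrt(m)         # output weights

def f(x):
    # Scalar-valued one-hidden-layer ReLU network.
    return v @ np.maximum(W @ x, 0.0)

spectral = np.linalg.norm(W, 2)             # largest singular value of W
frobenius = np.linalg.norm(W, 'fro')        # always >= the spectral norm
print(f(rng.normal(size=d)), spectral, frobenius)
```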
arXiv Detail & Related papers (2022-02-13T07:12:02Z)
- On the Power of Gradual Network Alignment Using Dual-Perception Similarities [14.779474659172923]
Network alignment (NA) is the task of finding the correspondence of nodes between two networks based on the network structure and node attributes.
Our study is motivated by the fact that most existing NA methods attempt to discover all node pairs at once and therefore do not harness the information enriched through interim discovery of node correspondences.
We propose Grad-Align, a new NA method that gradually discovers node pairs by making full use of node pairs exhibiting strong consistency.
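A loose, hypothetical sketch of the gradual-discovery idea follows: node pairs are matched a few at a time, always taking the currently most confident entries of a similarity matrix. This is not the Grad-Align algorithm or its dual-perception similarity, only the general flavour of discovering correspondences incrementally.

```python
import numpy as np

def gradual_match(S, pairs_per_step=2, steps=3):
    """Greedily pick the most confident node pairs from similarity matrix S,
    a few per step, excluding already-matched rows/columns (illustrative only)."""
    S = S.copy()
    matched = []
    for _ in range(steps):
        for _ in range(pairs_per_step):
            i, j = np.unravel_index(np.argmax(S), S.shape)
            if S[i, j] == -np.inf:          # nothing left to match
                return matched
            matched.append((i, j))
            S[i, :] = -np.inf               # node i is taken
            S[:, j] = -np.inf               # node j is taken
        # A real method would recompute similarities here, using the pairs
        # found so far as anchors; we keep S fixed for simplicity.
    return matched

rng = np.random.default_rng(0)
print(gradual_match(rng.random(size=(6, 6))))
```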
arXiv Detail & Related papers (2022-01-26T14:01:32Z)
- Simulation of three-dimensional quantum systems with projected entangled-pair states [0.0]
We develop and benchmark two contraction approaches for infinite projected entangled-pair states (iPEPS) in 3D.
The first approach is based on a contraction of a finite cluster of tensors including an effective environment to approximate the full 3D network.
The second approach performs a full contraction of the network by first iteratively contracting layers of the network with a boundary iPEPS, followed by a contraction of the resulting quasi-2D network.
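The layer-by-layer contraction can be pictured through its 2D analogue: one layer of the network acts like an MPO on a boundary state, inflating its bond dimension, which is then truncated. A minimal sketch of that single step follows, with assumed shapes rather than the paper's actual 3D boundary-iPEPS scheme.

```python
import numpy as np

d, Dmps, Dmpo, chi = 2, 4, 3, 6              # dimensions assumed for illustration
rng = np.random.default_rng(0)

A2 = rng.normal(size=(Dmps, d, Dmps))        # two neighbouring boundary-MPS tensors
A3 = rng.normal(size=(Dmps, d, Dmps))        # (left bond, physical, right bond)
W2 = rng.normal(size=(Dmpo, d, d, Dmpo))     # MPO tensors of one network layer
W3 = rng.normal(size=(Dmpo, d, d, Dmpo))     # (left, in, out, right)

def absorb(A, W):
    # Apply the MPO tensor to the MPS tensor; bonds grow to Dmps * Dmpo.
    B = np.einsum('ldr,wdex->lwerx', A, W)
    return B.reshape(Dmps * Dmpo, d, Dmps * Dmpo)

B2, B3 = absorb(A2, W2), absorb(A3, W3)

# Truncate the enlarged bond between the two sites back to chi with an SVD.
theta = np.einsum('ldm,mer->lder', B2, B3).reshape(Dmps * Dmpo * d, d * Dmps * Dmpo)
U, S, Vh = np.linalg.svd(theta, full_matrices=False)
B2_new = U[:, :chi].reshape(Dmps * Dmpo, d, chi)
B3_new = (np.diag(S[:chi]) @ Vh[:chi]).reshape(chi, d, Dmps * Dmpo)
print(B2_new.shape, B3_new.shape)            # bond dimension 12 -> 6
# A complete algorithm would first bring the boundary state to canonical form,
# so that this local SVD truncation is globally optimal.
```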
arXiv Detail & Related papers (2021-02-12T19:00:03Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We adapt a primal-dual framework from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights of our approach using tools from the mesh-simplification literature.
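As a rough, hypothetical illustration of passing features between edges and faces of a triangle mesh (plain mean pooling, not the paper's learned primal-dual convolution):

```python
import numpy as np

def face_edges(f):
    # The three (sorted) edges of a triangular face.
    return [tuple(sorted((f[i], f[(i + 1) % 3]))) for i in range(3)]

# Tiny triangle mesh: two triangles sharing the edge (1, 2).
faces = [(0, 1, 2), (1, 3, 2)]
edges = sorted({e for f in faces for e in face_edges(f)})
edge_index = {e: k for k, e in enumerate(edges)}

rng = np.random.default_rng(0)
edge_feat = rng.normal(size=(len(edges), 4))      # one feature vector per edge

# Face features: mean of the three incident edge features.
face_feat = np.stack([
    edge_feat[[edge_index[e] for e in face_edges(f)]].mean(axis=0)
    for f in faces
])
print(face_feat.shape)                            # (2, 4)
```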
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Numerical continuum tensor networks in two dimensions [0.0]
We numerically determine wave functions of interacting two-dimensional fermionic models in the continuum limit.
We use two different tensor network states; the first is based on the numerical continuum limit of fermionic projected entangled pair states obtained via a tensor network formulation of multi-grid.
We first benchmark our approach on the two-dimensional free Fermi gas and then proceed to study the two-dimensional interacting Fermi gas with an attractive interaction in the unitary limit.
arXiv Detail & Related papers (2020-08-24T17:08:39Z)
- T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of arbitrary shape, a setting often encountered in neural networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
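A toy analogue of the shared-basis idea, not the actual T-Basis construction: several weight matrices are expressed in one common orthonormal basis obtained from all of them, so only the basis plus small per-tensor coefficients need to be stored. All shapes and ranks below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
shared = rng.normal(size=(64, 8))                 # ground-truth shared subspace
tensors = [shared @ rng.normal(size=(8, n)) for n in (32, 48, 16)]   # assumed shapes

# Shared rank-8 basis from the SVD of the concatenated matrices.
stacked = np.concatenate(tensors, axis=1)         # (64, 96)
U, S, _ = np.linalg.svd(stacked, full_matrices=False)
basis = U[:, :8]

coeffs = [basis.T @ t for t in tensors]           # small per-tensor coefficients
reconstructed = [basis @ c for c in coeffs]
err = [np.linalg.norm(t - r) / np.linalg.norm(t) for t, r in zip(tensors, reconstructed)]
print([c.shape for c in coeffs], np.round(err, 3))   # near-zero error here
```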
arXiv Detail & Related papers (2020-07-13T19:03:22Z)
- Joint Multi-Dimension Pruning via Numerical Gradient Update [120.59697866489668]
We present joint multi-dimension pruning (abbreviated as JointPruning), an effective method of pruning a network on three crucial aspects: spatial, depth and channel simultaneously.
We show that our method is optimized collaboratively across the three dimensions in a single end-to-end training and it is more efficient than the previous exhaustive methods.
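A very loose sketch of the numerical-gradient flavour described above: a vector of per-dimension keep ratios is updated with a finite-difference gradient of a placeholder validation loss. The loss function, dimensions, and update rule here are hypothetical stand-ins, not the JointPruning procedure.

```python
import numpy as np

def val_loss(ratios):
    # Placeholder for "validation loss after pruning with these per-dimension
    # keep ratios (spatial, depth, channel)"; a real system would prune and
    # evaluate the model here.
    target = np.array([0.8, 0.6, 0.5])
    return float(np.sum((ratios - target) ** 2))

ratios = np.array([1.0, 1.0, 1.0])         # start from the unpruned network
eps, lr = 1e-3, 0.3
for step in range(50):
    grad = np.zeros_like(ratios)
    for i in range(3):                      # numerical gradient, one dimension at a time
        e = np.zeros(3); e[i] = eps
        grad[i] = (val_loss(ratios + e) - val_loss(ratios - e)) / (2 * eps)
    ratios = np.clip(ratios - lr * grad, 0.1, 1.0)
print(np.round(ratios, 3))                  # converges near [0.8, 0.6, 0.5]
```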
arXiv Detail & Related papers (2020-05-18T17:57:09Z)
- Revealing the Structure of Deep Neural Networks via Convex Duality [70.15611146583068]
We study regularized deep neural networks (DNNs) and introduce a convex analytic framework to characterize the structure of hidden layers.
We show that a set of optimal hidden layer weights for a norm regularized training problem can be explicitly found as the extreme points of a convex set.
We apply the same characterization to deep ReLU networks with whitened data and prove the same weight alignment holds.
arXiv Detail & Related papers (2020-02-22T21:13:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.