Isometric tensor network optimization for extensive Hamiltonians is free of barren plateaus
- URL: http://arxiv.org/abs/2304.14320v2
- Date: Mon, 11 Mar 2024 22:24:32 GMT
- Title: Isometric tensor network optimization for extensive Hamiltonians is free of barren plateaus
- Authors: Qiang Miao, Thomas Barthel
- Abstract summary: We show that there are no barren plateaus in the energy optimization of isometric tensor network states (TNS).
Isometric TNS are a promising route for an efficient quantum-computation-based investigation of strongly-correlated quantum matter.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explain why and numerically confirm that there are no barren plateaus in
the energy optimization of isometric tensor network states (TNS) for extensive
Hamiltonians with finite-range interactions which are, for example, typical in
condensed matter physics. Specifically, we consider matrix product states (MPS)
with open boundary conditions, tree tensor network states (TTNS), and the
multiscale entanglement renormalization ansatz (MERA). MERA are isometric by
construction and, for the MPS and TTNS, the tensor network gauge freedom allows
us to choose all tensors as partial isometries. The variance of the energy
gradient, evaluated by taking the Haar average over the TNS tensors, has a
leading system-size independent term and decreases according to a power law in
the bond dimension. For a hierarchical TNS (TTNS and MERA) with branching ratio
$b$, the variance of the gradient with respect to a tensor in layer $\tau$
scales as $(b\eta)^\tau$, where $\eta$ is the second largest eigenvalue of a
Haar-average doubled layer-transition channel and decreases algebraically with
increasing bond dimension. The absence of barren plateaus substantiates that
isometric TNS are a promising route for an efficient quantum-computation-based
investigation of strongly-correlated quantum matter. The observed scaling
properties of the gradient amplitudes bear implications for efficient TNS
initialization procedures.
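To make the gauge statement concrete, here is a minimal NumPy sketch (ours, not code from the paper): it left-orthogonalizes a random MPS tensor via a QR decomposition so that the tensor becomes a partial isometry, and then evaluates the quoted $(b\eta)^\tau$ layer scaling for illustrative values of $b$ and $\eta$ (the numbers are made up, not taken from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Random MPS tensor A[s, l, r]: physical dimension d, bond dimensions Dl, Dr.
d, Dl, Dr = 2, 8, 8
A = rng.normal(size=(d, Dl, Dr)) + 1j * rng.normal(size=(d, Dl, Dr))

# Gauge fixing: fuse (s, l) and keep the Q factor of a QR decomposition.
# The reshaped Q is a partial isometry: sum_s A[s]^dagger A[s] = identity.
Q, _ = np.linalg.qr(A.reshape(d * Dl, Dr))
A_iso = Q.reshape(d, Dl, Dr)
check = sum(A_iso[s].conj().T @ A_iso[s] for s in range(d))
print(np.allclose(check, np.eye(Dr)))   # True: isometry condition holds

# Layer scaling of the gradient variance for a hierarchical TNS:
# Var ~ (b * eta)^tau. Here b = 2 (binary tree) and eta = 0.3 are chosen
# for illustration only; the paper obtains eta as the second largest
# eigenvalue of the Haar-averaged doubled layer-transition channel.
b, eta = 2, 0.3
for tau in range(1, 5):
    print(tau, (b * eta) ** tau)
```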
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Computational complexity of isometric tensor network states [0.0]
We map 2D isoTNS to 1+1D unitary quantum circuits.
We find an efficient classical algorithm to compute local expectation values in strongly injective isoTNS.
Our results can be used to design provable algorithms to contract isoTNS.
arXiv Detail & Related papers (2024-02-12T19:00:00Z)
- Absence of barren plateaus and scaling of gradients in the energy optimization of isometric tensor network states [0.0]
We consider energy problems for quantum many-body systems with extensive Hamiltonians and finite-range interactions.
We prove that variational optimization problems for matrix product states, tree tensor networks, and the multiscale entanglement renormalization ansatz are free of barren plateaus.
arXiv Detail & Related papers (2023-03-31T22:49:49Z)
- Barren plateaus in quantum tensor network optimization [0.0]
We analyze the variational optimization of quantum circuits inspired by matrix product states (qMPS), tree tensor networks (qTTN), and the multiscale entanglement renormalization ansatz (qMERA).
We show that the variance of the cost function gradient decreases exponentially with the distance of a Hamiltonian term from the canonical centre in the quantum tensor network.
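Restated schematically (the symbols $q$ and $\Delta$ are ours, not the paper's notation): $\mathrm{Var}[\partial_\theta C] \propto q^{\Delta}$ with $0 < q < 1$, where $\Delta$ is the distance between the Hamiltonian term and the canonical centre, so gradient contributions from distant terms are exponentially suppressed.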
arXiv Detail & Related papers (2022-09-01T08:42:35Z)
- BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation [116.26521375592759]
Quantization aims to transform high-precision weights and activations of a given neural network into low-precision weights/activations for reduced memory usage and computation.
Extreme quantization (1-bit weight/1-bit activations) of compactly-designed backbone architectures results in severe performance degeneration.
This paper proposes a novel Quantization-Aware Training (QAT) method that can effectively alleviate performance degeneration.
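For orientation, here is a generic 1-bit weight binarization step in NumPy; this is the textbook sign-plus-scale scheme that "extreme quantization" refers to, not the BiTAT transformation itself.

```python
import numpy as np

def binarize(w):
    """Generic 1-bit weight quantization: keep only the sign of each
    weight plus one per-tensor scale alpha = mean(|w|). QAT methods
    train through this step, typically via a straight-through
    estimator for the non-differentiable sign."""
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

w = np.random.default_rng(1).normal(size=(4, 4))
print(binarize(w))   # entries are +/- alpha only
```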
arXiv Detail & Related papers (2022-07-04T13:25:49Z)
- Tensor Network States with Low-Rank Tensors [6.385624548310884]
We introduce the idea of imposing low-rank constraints on the tensors that compose the tensor network.
With this modification, the time and memory complexities of the network optimization can be substantially reduced.
We find that choosing the tensor rank $r$ to be on the order of the bond dimension $m$ is sufficient to obtain high-accuracy ground-state approximations, as sketched below.
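A sketch of the low-rank idea (a generic truncated SVD, not the paper's exact parametrization; the shapes and the choice $r = m/2$ are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, r = 2, 16, 8        # bond dimension m; tensor rank r on the order of m

# View an MPS tensor A[s, l, r] as a (d*m) x m matrix and replace it by
# its best rank-r approximation via a truncated SVD; storing the factors
# instead of the full tensor is what reduces time and memory costs.
A = rng.normal(size=(d * m, m))
U, S, Vh = np.linalg.svd(A, full_matrices=False)
A_lr = (U[:, :r] * S[:r]) @ Vh[:r]
print(np.linalg.norm(A - A_lr) / np.linalg.norm(A))  # relative truncation error
```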
arXiv Detail & Related papers (2022-05-30T17:58:16Z)
- Boundary theories of critical matchgate tensor networks [59.433172590351234]
Key aspects of the AdS/CFT correspondence can be captured in terms of tensor network models on hyperbolic lattices.
For tensors fulfilling the matchgate constraint, these have previously been shown to produce disordered boundary states.
We show that the resulting boundary Hamiltonians exhibit multi-scale quasiperiodic symmetries captured by an analytical toy model.
arXiv Detail & Related papers (2021-10-06T18:00:03Z)
- On the closedness and geometry of tensor network state sets [5.989041429080286]
Tensor network states (TNS) are a powerful approach for the study of strongly correlated quantum matter.
In practical algorithms, functionals like energy expectation values or overlaps are optimized over certain sets of TNS.
We show that sets of matrix product states (MPS) with open boundary conditions, tree tensor network states (TTNS), and the multiscale entanglement renormalization ansatz (MERA) are always closed.
arXiv Detail & Related papers (2021-07-30T18:09:28Z)
- Dimension of Tensor Network varieties [68.8204255655161]
We determine an upper bound on the dimension of the tensor network variety.
A refined upper bound is given in cases relevant for applications, such as varieties of matrix product states and projected entangled pair states.
arXiv Detail & Related papers (2021-01-08T18:24:50Z)
- T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for compactly representing a set of tensors of arbitrary shapes, as commonly encountered in neural networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z)
- Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
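A minimal sketch of Hessian-norm tracking (a toy quadratic of ours, not the paper's estimator): power iteration needs only Hessian-vector products, which is what makes the diagnostic cheap for real networks.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
B = rng.normal(size=(n, n))
H = B @ B.T / n                      # Hessian of f(w) = 0.5 * w^T H w

def hvp(v):
    # Hessian-vector product; exact for the toy quadratic. In a real
    # network this would be a double-backward pass, never forming H.
    return H @ v

# Power iteration: estimate the spectral norm ||H||_2 from HVPs alone.
v = rng.normal(size=n)
for _ in range(100):
    v = hvp(v)
    v /= np.linalg.norm(v)
print(v @ hvp(v), np.linalg.eigvalsh(H).max())  # estimate vs. exact norm
```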
arXiv Detail & Related papers (2020-04-20T18:12:56Z)