Tree tensor network states represent low-energy states faithfully
- URL: http://arxiv.org/abs/2512.20215v1
- Date: Tue, 23 Dec 2025 10:15:45 GMT
- Title: Tree tensor network states represent low-energy states faithfully
- Authors: Thomas Barthel
- Abstract summary: It is shown how the approximation error of tree tensor network states (TTNS) can be bounded using Schmidt spectra or Rényi entanglement entropies of the target quantum state. For tree lattices, the result implies that efficient TTNS approximations exist if $α<1$ Rényi entanglement entropies for single-branch cuts obey an area law.
- Score: 2.538209532048867
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extending corresponding results for matrix product states [Verstraete and Cirac, PRB 73, 094423 (2006); Schuch et al., PRL 100, 030504 (2008)], it is shown how the approximation error of tree tensor network states (TTNS) can be bounded using Schmidt spectra or Rényi entanglement entropies of the target quantum state. Conversely, one obtains bounds on TTNS bond dimensions needed to achieve a specific approximation accuracy. For tree lattices, the result implies that efficient TTNS approximations exist if $α<1$ Rényi entanglement entropies for single-branch cuts obey an area law, as in ground and low-energy states of certain gapped systems.
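The core bound can be illustrated numerically. Below is a minimal NumPy sketch (our own illustration, not code from the paper; function names are hypothetical) of the squared 2-norm truncation error incurred by keeping only the $D$ largest Schmidt coefficients of a cut, alongside the Rényi entanglement entropies that control it:

```python
import numpy as np

def truncation_error(lam, D):
    """Squared 2-norm error of the best rank-D truncation of a cut:
    sum of the discarded squared Schmidt coefficients."""
    lam = np.sort(np.abs(lam))[::-1]
    return float(np.sum(lam[D:] ** 2))

def renyi_entropy(lam, alpha):
    """Rényi-alpha entanglement entropy of the Schmidt spectrum lam."""
    p = np.abs(lam) ** 2
    p = p[p > 1e-15] / p.sum()
    if alpha == 1.0:
        return float(-np.sum(p * np.log(p)))  # von Neumann limit
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

# Schmidt spectrum of a random normalized bipartite state via SVD
psi = np.random.default_rng(0).normal(size=(8, 8))
psi /= np.linalg.norm(psi)
lam = np.linalg.svd(psi, compute_uv=False)

eps = truncation_error(lam, D=4)        # error of keeping bond dimension 4
S_half = renyi_entropy(lam, alpha=0.5)  # an alpha < 1 Rényi entropy
```

Small $α<1$ Rényi entropies force the Schmidt spectrum to decay quickly, which is exactly why `truncation_error` can then be made small with modest `D`.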
Related papers
- Performance Guarantees for Quantum Neural Estimation of Entropies [31.955071410400947]
Quantum neural estimators (QNEs) combine classical neural networks with parametrized quantum circuits. We study formal guarantees for QNEs of measured relative entropies in the form of non-asymptotic error risk bounds. Our theory aims to facilitate principled implementation of QNEs for measured relative entropies.
arXiv Detail & Related papers (2025-11-24T16:36:06Z) - Excited states from local effective Hamiltonians of matrix product states and their entanglement spectrum transition [18.458863288479844]
We provide a conformal field theory perspective that helps elucidate this connection. We predict an entanglement-spectrum transition of excited states as the ratio of the subsystem size to the total system size is varied. Our numerical results support this picture and demonstrate a reorganization of the entanglement spectrum into distinct conformal towers as this ratio changes.
arXiv Detail & Related papers (2025-11-20T19:00:38Z) - Representational power of selected neural network quantum states in second quantization [25.189464210147875]
We generalize the Boltzmann machine to a more general class of states for fermions, formed as products of neurons, hence referred to as neuron product states (NPS). NPS build correlations in a very different way compared with the closely related correlator product states (CPS). We prove that products of such simple nonlocal correlators can approximate any wavefunction arbitrarily well under certain mild conditions on the form of the activation functions.
arXiv Detail & Related papers (2025-11-07T02:24:24Z) - Emergent statistical mechanics in holographic random tensor networks [41.99844472131922]
We show that RTN states equilibrate at large bond dimension and also in the scaling limit for three classes of geometries. We reproduce a holographic degree-of-freedom counting for the effective dimension of each system. These results demonstrate that RTN techniques can probe aspects of late-time dynamics of quantum many-body phases.
arXiv Detail & Related papers (2025-08-22T17:49:49Z) - The Augmented Tree Tensor Network Cookbook [0.0]
An augmented tree tensor network (aTTN) is a tensor network ansatz constructed by applying a layer of unitary disentanglers to a tree tensor network. These lecture notes serve as a detailed guide for implementing the aTTN algorithms.
arXiv Detail & Related papers (2025-07-28T18:00:39Z) - Joint State-Channel Decoupling and One-Shot Quantum Coding Theorem [16.05946478325466]
We propose a joint state-channel decoupling approach to obtain a one-shot error exponent bound without smoothing.
We establish a one-shot error exponent bound for quantum channel coding given by a sandwiched Rényi coherent information.
arXiv Detail & Related papers (2024-09-23T15:59:16Z) - Linear Circuit Synthesis using Weighted Steiner Trees [45.11082946405984]
CNOT circuits are a common building block of general quantum circuits.
This article presents state-of-the-art algorithms for optimizing the number of CNOT gates.
A simulated evaluation shows that the suggested algorithm is almost always beneficial and reduces the number of CNOT gates by up to 10%.
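As background to what such synthesis algorithms optimize: a CNOT circuit acts on computational basis labels as an invertible linear map over GF(2), and plain Gaussian elimination (a textbook baseline, not the weighted-Steiner-tree method of the paper) already recovers one valid CNOT sequence for a given parity matrix:

```python
import numpy as np

def synthesize_cnots(M):
    """Return CNOT gates (control, target) implementing the invertible
    GF(2) matrix M, found by Gaussian elimination with row XORs."""
    A = np.array(M, dtype=np.uint8) % 2
    n = A.shape[0]
    ops = []
    for col in range(n):
        if A[col, col] == 0:  # find a pivot row and XOR it into row `col`
            for r in range(col + 1, n):
                if A[r, col]:
                    A[col] ^= A[r]
                    ops.append((r, col))
                    break
        for r in range(n):  # clear every other entry in this column
            if r != col and A[r, col]:
                A[r] ^= A[col]
                ops.append((col, r))
    # The recorded row operations reduce M to the identity; since each CNOT
    # is its own inverse over GF(2), the reversed sequence implements M.
    return ops[::-1]

def apply_cnots(gates, n):
    """Simulate a CNOT list on basis labels: target row ^= control row."""
    A = np.eye(n, dtype=np.uint8)
    for c, t in gates:
        A[t] ^= A[c]
    return A
```

The optimization problem the paper addresses is finding a *shorter* such sequence, possibly under connectivity constraints, which is where the weighted Steiner trees come in.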
arXiv Detail & Related papers (2024-08-07T19:51:22Z) - Conditional Independence of 1D Gibbs States with Applications to Efficient Learning [2.9360071145551068]
We show that spin chains in thermal equilibrium have a correlation structure in which individual regions are strongly correlated at most with their near vicinity. We prove that these measures decay superexponentially at every positive temperature.
arXiv Detail & Related papers (2024-02-28T17:28:01Z) - Computational complexity of isometric tensor network states [0.0]
We map 2D isoTNS to 1+1D unitary quantum circuits. We find an efficient classical algorithm to compute local expectation values in strongly injective isoTNS. Our results can be used to design provable algorithms to contract isoTNS.
arXiv Detail & Related papers (2024-02-12T19:00:00Z) - Quantitative CLTs in Deep Neural Networks [12.845031126178593]
We study the distribution of a fully connected neural network with random Gaussian weights and biases.
We obtain quantitative bounds on normal approximations valid at large but finite $n$ and any fixed network depth.
Our bounds are strictly stronger in terms of their dependence on network width than any previously available in the literature.
arXiv Detail & Related papers (2023-07-12T11:35:37Z) - On the Neural Tangent Kernel Analysis of Randomly Pruned Neural Networks [91.3755431537592]
We study how random pruning of the weights affects a neural network's neural tangent kernel (NTK).
In particular, this work establishes an equivalence of the NTKs between a fully-connected neural network and its randomly pruned version.
arXiv Detail & Related papers (2022-03-27T15:22:19Z) - Building separable approximations for quantum states via neural networks [0.0]
We parametrize separable states with a neural network and train it to minimize the distance to a given target state.
By examining the output of the algorithm, we can deduce whether the target state is entangled or not, and construct an approximation for its closest separable state.
We show our method to be efficient in the multipartite case, considering different notions of separability.
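For the special case of a pure bipartite state, the closest product state is known in closed form from the leading Schmidt pair, which gives a cheap entanglement check in the same spirit (a textbook baseline, not the paper's neural-network method for mixed or multipartite states):

```python
import numpy as np

def closest_product_state(psi, dA, dB):
    """For a pure bipartite state psi (dimension dA*dB), return the closest
    product state and its overlap with psi. The overlap equals the largest
    Schmidt coefficient; the state is entangled iff the overlap is < 1."""
    M = np.asarray(psi).reshape(dA, dB)
    U, S, Vh = np.linalg.svd(M)
    a, b = U[:, 0], Vh[0]       # leading left/right singular vectors
    return np.kron(a, b), float(S[0])

# Example: the Bell state has maximal overlap 1/sqrt(2) with any product state.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
prod, ov = closest_product_state(bell, 2, 2)
```

For genuinely multipartite separability questions no such closed form exists, which is what motivates the variational neural-network approach of the paper.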
arXiv Detail & Related papers (2021-12-15T11:50:25Z) - On the closedness and geometry of tensor network state sets [5.989041429080286]
Tensor network states (TNS) are a powerful approach for the study of strongly correlated quantum matter.
In practical algorithms, functionals like energy expectation values or overlaps are optimized over certain sets of TNS.
We show that sets of matrix product states (MPS) with open boundary conditions, tree tensor network states (TTNS), and the multiscale entanglement renormalization ansatz (MERA) are always closed.
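The MPS sets discussed here can be constructed explicitly. A minimal sketch (a generic construction, not tied to the paper's closedness proofs) that decomposes an arbitrary state vector into an open-boundary MPS by successive SVDs and verifies exact reconstruction:

```python
import numpy as np

def to_mps(psi, n, d=2):
    """Exact (untruncated) open-boundary MPS of an n-site, d-level state."""
    tensors = []
    rest = np.asarray(psi).reshape(1, -1)
    chi = 1
    for _ in range(n - 1):
        rest = rest.reshape(chi * d, -1)
        U, S, Vh = np.linalg.svd(rest, full_matrices=False)
        tensors.append(U.reshape(chi, d, -1))  # (left bond, physical, right bond)
        chi = U.shape[1]
        rest = S[:, None] * Vh                 # carry the remainder rightward
    tensors.append(rest.reshape(chi, d, 1))
    return tensors

def contract_mps(tensors):
    """Contract the bond indices back into a full state vector."""
    out = tensors[0]
    for T in tensors[1:]:
        out = np.tensordot(out, T, axes=([-1], [0]))
    return out.reshape(-1)

rng = np.random.default_rng(1)
n = 5
psi = rng.normal(size=2 ** n)
psi /= np.linalg.norm(psi)
mps = to_mps(psi, n)
```

Restricting the bond dimensions `chi` to a fixed maximum carves out exactly the kind of MPS set whose closedness the paper establishes.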
arXiv Detail & Related papers (2021-07-30T18:09:28Z) - Neural tensor contractions and the expressive power of deep neural quantum states [17.181118551107453]
We establish a direct connection between general tensor networks and deep feed-forward artificial neural networks.
We show that neural-network states have strictly the same or higher expressive power than practically usable variational tensor networks.
arXiv Detail & Related papers (2021-03-18T14:47:38Z) - Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias [65.13042449121411]
In practice, training a network with the gradient estimates provided by EP does not scale to visual tasks harder than MNIST.
We show that a bias in the gradient estimate of EP, inherent in the use of finite nudging, is responsible for this phenomenon.
We apply these techniques to train an architecture with asymmetric forward and backward connections, yielding a 13.2% test error.
arXiv Detail & Related papers (2020-06-06T09:36:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.