Positive bias makes tensor-network contraction tractable
- URL: http://arxiv.org/abs/2410.05414v1
- Date: Mon, 7 Oct 2024 18:27:23 GMT
- Title: Positive bias makes tensor-network contraction tractable
- Authors: Jiaqing Jiang, Jielun Chen, Norbert Schuch, Dominik Hangleiter
- Abstract summary: Tensor network contraction is a powerful computational tool in quantum many-body physics, quantum information, and quantum chemistry.
Here, we study how the complexity of tensor-network contraction depends on a different notion of quantumness, namely, the sign structure of its entries.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor network contraction is a powerful computational tool in quantum many-body physics, quantum information, and quantum chemistry. The complexity of contracting a tensor network is thought to depend mainly on its entanglement properties, as reflected by the Schmidt rank across bipartite cuts. Here, we study how the complexity of tensor-network contraction depends on a different notion of quantumness, namely, the sign structure of its entries. We tackle this question rigorously by investigating the complexity of contracting tensor networks whose entries have a positive bias. We show that for intermediate bond dimension d ≳ n, a small positive mean value ≳ 1/d of the tensor entries already dramatically decreases the computational complexity of approximately contracting random tensor networks, enabling a quasi-polynomial-time algorithm for arbitrary 1/poly(n) multiplicative approximation. At the same time, exactly contracting such tensor networks remains #P-hard, as in the zero-mean case [HHEG20]. The mean value 1/d matches the phase-transition point observed in [CJHS24]. Our proof makes use of Barvinok's method for approximate counting and the technique of mapping random instances to statistical-mechanical models. We further consider the worst-case complexity of approximate contraction of positive tensor networks, where all entries are non-negative. We first give a simple proof showing that multiplicative approximation with error exponentially close to one is at least StoqMA-hard. We then show that when considering additive error in the matrix 1-norm, the contraction of positive tensor networks is BPP-complete. This compares to Arad and Landau's result [AL10], which shows that for general tensor networks, approximate contraction up to matrix 2-norm additive error is BQP-complete.
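As a concrete illustration of the object being contracted, the sketch below builds a toy one-dimensional chain of random tensors with a tunable positive mean and contracts it exactly. This is only a minimal illustrative example, not the paper's method: the paper concerns random tensor networks on 2D lattices and a quasi-polynomial approximation algorithm, neither of which is implemented here; the chain geometry and all function names are assumptions chosen for illustration.

```python
import numpy as np

def random_tensor(shape, mean, rng):
    # i.i.d. Gaussian entries shifted by a positive mean value
    return rng.standard_normal(shape) + mean

def contract_chain(n, d, mean, seed=0):
    """Exactly contract a 1D chain of n random tensors with bond dimension d.

    Interior tensors are d x d matrices; the chain is capped with vectors
    at both ends, so the contraction yields a single scalar.
    """
    rng = np.random.default_rng(seed)
    v = random_tensor(d, mean, rng)                    # left boundary vector
    for _ in range(n - 2):
        v = v @ random_tensor((d, d), mean, rng)       # absorb one site at a time
    return float(v @ random_tensor(d, mean, rng))      # right boundary vector

# A positive mean ~1/d biases every entry upward; the paper studies how such a
# bias changes the complexity of approximating this kind of contraction value.
value = contract_chain(n=8, d=4, mean=1 / 4)
print(value)
```

For a chain, exact contraction is cheap (O(n d^2) here); the hardness phenomena discussed in the abstract arise for networks on 2D lattices, where the intermediate tensors grow exponentially under naive contraction.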
Related papers
- Sign problem in tensor network contraction
We study how the difficulty of contracting tensor networks depends on the sign structure of the tensor entries.
We find that the transition from hard to easy occurs when the entries become predominantly positive.
We also investigate the computational difficulty of computing expectation values of tensor network wavefunctions.
arXiv Detail & Related papers (2024-04-29T18:06:38Z)
- Tensor cumulants for statistical inference on invariant distributions
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- One-step replica symmetry breaking in the language of tensor networks
We develop an exact mapping between the one-step replica symmetry breaking cavity method and tensor networks.
The two schemes come with complementary mathematical and numerical toolboxes that could be leveraged to improve the respective states of the art.
arXiv Detail & Related papers (2023-06-26T18:42:51Z)
- Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1+\varepsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
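The tensor-train format this entry refers to can be illustrated with the classical TT-SVD of Oseledets (sequential truncated SVDs), sketched below. This is a textbook baseline, not the cited paper's near-linear-time bicriteria algorithm; the names `tt_svd`, `tt_reconstruct`, and `max_rank` are chosen here for illustration.

```python
import numpy as np

def tt_svd(A, max_rank):
    """Decompose a dense tensor into tensor-train (TT) cores via sequential SVDs.

    Each core has shape (r_prev, dim_k, r_next); max_rank caps every TT rank.
    Textbook TT-SVD, not the randomized bicriteria algorithm of the paper.
    """
    dims = A.shape
    cores, r = [], 1
    M = A.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r_new = min(max_rank, len(s))
        cores.append(U[:, :r_new].reshape(r, dims[k], r_new))
        # Push the singular values into the remainder and fold in the next mode.
        M = (s[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[k + 1], -1)
        r = r_new
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    # Contract the TT cores back into a dense tensor (for checking accuracy).
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out.squeeze(axis=(0, -1))
```

With `max_rank` set to the full rank, the reconstruction is exact; truncating it trades accuracy for the compressed TT representation.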
arXiv Detail & Related papers (2022-07-15T11:55:09Z)
- Error Analysis of Tensor-Train Cross Approximation
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
- Stack operation of tensor networks
We propose a mathematically rigorous definition for the tensor network stack approach.
We illustrate the main ideas with the matrix product states based machine learning as an example.
arXiv Detail & Related papers (2022-03-28T12:45:13Z)
- When Random Tensors meet Random Matrices
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z)
- Spectral Complexity-scaled Generalization Bound of Complex-valued Neural Networks
This paper is the first to prove a generalization bound for complex-valued neural networks.
We conduct experiments by training complex-valued convolutional neural networks on different datasets.
arXiv Detail & Related papers (2021-12-07T03:25:25Z)
- Quantum Annealing Algorithms for Boolean Tensor Networks
We introduce and analyze three general algorithms for Boolean tensor networks.
We show these can be expressed as quadratic unconstrained binary optimization problems suitable for solving on a quantum annealer.
We demonstrate that tensors with up to millions of elements can be decomposed efficiently using a D-Wave 2000Q quantum annealer.
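To make the QUBO formulation concrete, the sketch below exhaustively minimizes a toy QUBO objective x^T Q x over binary vectors, with brute force standing in for the quantum annealer. The matrix `Q` and the function name are illustrative assumptions; this is not the cited paper's actual encoding of Boolean tensor networks.

```python
import itertools
import numpy as np

def solve_qubo_bruteforce(Q):
    """Exhaustively minimize x^T Q x over binary vectors x.

    Toy stand-in for a quantum annealer; only feasible for tiny problems,
    since the search space grows as 2^n.
    """
    n = Q.shape[0]
    best_x, best_e = None, np.inf
    for bits in itertools.product((0, 1), repeat=n):
        x = np.array(bits)
        e = x @ Q @ x
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Example: Q[0,1] penalizes switching x0 and x1 on together,
# Q[2,2] rewards switching x2 on.
Q = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0]])
x, e = solve_qubo_bruteforce(Q)
print(x, e)  # minimum energy -1.0, achieved with x2 = 1 and x0*x1 = 0
```

An annealer replaces the exhaustive loop with physical sampling of low-energy states of the same objective, which is what makes the QUBO encoding useful at scale.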
arXiv Detail & Related papers (2021-07-28T22:38:18Z)
- MTC: Multiresolution Tensor Completion from Partial and Coarse Observations
Existing completion formulations mostly rely on partial observations from a single tensor.
We propose an efficient Multiresolution Tensor Completion (MTC) model to solve the problem.
arXiv Detail & Related papers (2021-06-14T02:20:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.