Fast Tensor Disentangling Algorithm
- URL: http://arxiv.org/abs/2104.08283v4
- Date: Sat, 22 Jan 2022 05:59:12 GMT
- Title: Fast Tensor Disentangling Algorithm
- Authors: Kevin Slagle
- Abstract summary: We introduce an approximate, fast, and simple algorithm to optimize disentangling unitary tensors.
Our algorithm is asymptotically faster than previous iterative algorithms and often results in a residual entanglement entropy that is within 10 to 40% of the minimum.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many recent tensor network algorithms apply unitary operators to parts of a
tensor network in order to reduce entanglement. However, many of the previously
used iterative algorithms to minimize entanglement can be slow. We introduce an
approximate, fast, and simple algorithm to optimize disentangling unitary
tensors. Our algorithm is asymptotically faster than previous iterative
algorithms and often results in a residual entanglement entropy that is within
10 to 40% of the minimum. For certain input tensors, our algorithm returns an
optimal solution. When disentangling order-4 tensors with equal bond
dimensions, our algorithm achieves an entanglement spectrum where nearly half
of the singular values are zero. We further validate our algorithm by showing
that it can efficiently disentangle random 1D states of qubits.
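To make the objective concrete, the following NumPy sketch (not the paper's algorithm) computes the entanglement entropy of an order-4 tensor across a bipartition of its legs; this is the quantity a disentangling unitary is optimized to reduce. The tensor shape and the position of the cut are illustrative choices.

```python
import numpy as np

def entanglement_entropy(T, cut=2):
    """Von Neumann entropy across a bipartition of T's legs.

    The first `cut` legs form one side of the cut; the rest form
    the other. Computed from the singular values of the matricized
    tensor (the Schmidt spectrum).
    """
    shape = T.shape
    left = int(np.prod(shape[:cut]))
    right = int(np.prod(shape[cut:]))
    M = T.reshape(left, right)
    s = np.linalg.svd(M, compute_uv=False)
    p = s**2 / np.sum(s**2)      # Schmidt probabilities
    p = p[p > 1e-12]             # drop numerical zeros
    return -np.sum(p * np.log(p))

# A random order-4 tensor with equal bond dimensions
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4, 4, 4))
print(entanglement_entropy(T))
```

A disentangling algorithm applies a unitary to two of the four legs and seeks to minimize this entropy; the abstract's "nearly half of the singular values are zero" refers to the Schmidt spectrum computed this way.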
Related papers
- Accelerating Sinkhorn Algorithm with Sparse Newton Iterations [14.094908995798757]
We present Sinkhorn-Newton-Sparse (SNS), an extension to the Sinkhorn algorithm.
SNS converges orders of magnitude faster across a wide range of practical cases.
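For context, the baseline Sinkhorn iteration that SNS accelerates is itself only a few lines of alternating row/column scalings; a minimal NumPy sketch (illustrative only — the sparse-Newton acceleration is not shown, and the sizes and regularization strength are arbitrary):

```python
import numpy as np

def sinkhorn(C, r, c, reg=0.1, n_iter=500):
    """Entropy-regularized optimal transport via Sinkhorn scaling.

    C: cost matrix; r, c: row/column marginal distributions;
    reg: entropy regularization weight.
    """
    K = np.exp(-C / reg)               # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)              # rescale to match column marginals
        u = r / (K @ v)                # rescale to match row marginals
    return u[:, None] * K * v[None, :] # transport plan diag(u) K diag(v)

rng = np.random.default_rng(1)
C = rng.random((5, 5))
r = np.full(5, 0.2)
c = np.full(5, 0.2)
P = sinkhorn(C, r, c)
```

Each iteration costs a pair of matrix-vector products; SNS replaces the slow tail of this fixed-point iteration with sparsified Newton steps.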
arXiv Detail & Related papers (2024-01-20T21:23:09Z) - Learning the Positions in CountSketch [49.57951567374372]
We consider sketching algorithms which first compress data by multiplication with a random sketch matrix, and then apply the sketch to quickly solve an optimization problem.
In this work, we propose the first learning-based algorithms that also optimize the locations of the non-zero entries.
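The classical (unlearned) CountSketch these methods build on places a single random ±1 entry in each column of the sketch matrix; the cited work learns those positions instead of drawing them at random. A minimal NumPy sketch of the classical construction, with sizes chosen purely for illustration:

```python
import numpy as np

def countsketch_matrix(m, n, rng):
    """Random CountSketch: each column has exactly one nonzero entry,
    a random sign at a random (hashed) row."""
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)          # hashed row per column
    signs = rng.choice([-1.0, 1.0], size=n)    # random sign per column
    S[rows, np.arange(n)] = signs
    return S

rng = np.random.default_rng(2)
A = rng.standard_normal((1000, 20))
S = countsketch_matrix(100, 1000, rng)
SA = S @ A   # compressed data: 100 rows instead of 1000
```

Because S has one nonzero per column, S @ A can be applied in time proportional to the number of nonzeros of A, which is what makes sketch-and-solve pipelines fast.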
arXiv Detail & Related papers (2023-06-11T07:28:35Z) - Deterministic Nonsmooth Nonconvex Optimization [94.01526844386977]
We show that randomization is necessary to obtain a dimension-free algorithm.
Our algorithm yields the first deterministic dimension-free algorithm for optimizing ReLU networks.
arXiv Detail & Related papers (2023-02-16T13:57:19Z) - Improved Rate of First Order Algorithms for Entropic Optimal Transport [2.1485350418225244]
This paper improves the state-of-the-art rate of a first-order algorithm for solving entropy regularized optimal transport.
We propose an accelerated primal-dual mirror descent algorithm with variance reduction.
Our algorithm may inspire more research to develop accelerated primal-dual algorithms that have rate $\widetilde{O}(n^2/\epsilon)$ for solving OT.
arXiv Detail & Related papers (2023-01-23T19:13:25Z) - Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor
Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1+\epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
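For reference, the basic (non-bicriteria) tensor-train decomposition via sequential truncated SVDs, commonly called TT-SVD, can be sketched as follows; this is not the paper's accelerated algorithm, and the tensor shape and rank cap are illustrative:

```python
import numpy as np

def tt_svd(T, rank):
    """Tensor-train decomposition by sequential truncated SVDs."""
    dims = T.shape
    cores = []
    r_prev = 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(rank, len(s))                       # truncate to rank cap
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))    # final core
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a dense tensor."""
    out = cores[0]
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([-1], [0]))
    return out[0, ..., 0]   # drop the dummy boundary bonds

rng = np.random.default_rng(4)
T = rng.standard_normal((2, 3, 2, 3))
cores = tt_svd(T, rank=6)
T_rec = tt_reconstruct(cores)
```

Each step matricizes the remainder, splits off one leg by SVD, and carries the rest forward; the cited work replaces the exact SVDs with sketched ones to reach near-linear time.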
arXiv Detail & Related papers (2022-07-15T11:55:09Z) - A Robust Spectral Algorithm for Overcomplete Tensor Decomposition [10.742675209112623]
We give a spectral algorithm for decomposing overcomplete order-4 tensors.
Our algorithm is robust to adversarial perturbations of bounded spectral norm.
Our algorithm avoids semidefinite programming and may be implemented as a series of basic linear-algebraic operations.
arXiv Detail & Related papers (2022-03-05T17:25:37Z) - Fast algorithm for overcomplete order-3 tensor decomposition [7.638467133689933]
We develop the first fast spectral algorithm to decompose a random third-order tensor over $\mathbb{R}^d$ of rank up to $O(d^{3/2}/\mathrm{polylog}(d))$.
Our algorithm can recover all components in time $O(d^{6.05})$ under the current matrix multiplication time.
arXiv Detail & Related papers (2022-02-14T00:31:34Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth
Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Streaming Coresets for Symmetric Tensor Factorization [9.181791777532608]
We show how to factorize tensors efficiently in the streaming setting.
We introduce two novel algorithmic techniques: online filtering and kernelization.
We show applications of our algorithms in learning single topic modeling.
arXiv Detail & Related papers (2020-06-01T19:55:34Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
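The basic primitive behind private convex optimization of this kind is gradient perturbation; a toy NumPy sketch of noisy gradient descent on a least-squares loss (illustrative only: it omits the gradient clipping and privacy accounting that real differentially private algorithms require, and it is not the construction of this paper):

```python
import numpy as np

def noisy_gd(X, y, lr=0.1, steps=200, noise=0.01, rng=None):
    """Gradient descent on the least-squares loss with Gaussian noise
    added to each gradient step. Real DP-SGD also clips per-example
    gradients and calibrates `noise` to a privacy budget."""
    rng = rng or np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n          # population-loss gradient
        w -= lr * (grad + noise * rng.standard_normal(w.shape))
    return w

rng = np.random.default_rng(5)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_hat = noisy_gd(X, y, rng=rng)
```

The excess-loss bounds in this line of work quantify how much accuracy such noise injection costs as a function of the sample size $n$ and dimension $d$.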
arXiv Detail & Related papers (2020-05-10T19:52:03Z) - Lagrangian Decomposition for Neural Network Verification [148.0448557991349]
A fundamental component of neural network verification is the computation of bounds on the values their outputs can take.
We propose a novel approach based on Lagrangian Decomposition.
We show that we obtain bounds comparable with off-the-shelf solvers in a fraction of their running time.
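A much simpler baseline for the bound computation described here is interval bound propagation, sketched below in NumPy; this is not the paper's Lagrangian decomposition, which produces tighter bounds, and the layer shapes are illustrative:

```python
import numpy as np

def interval_bounds(lo, hi, layers):
    """Propagate elementwise input bounds [lo, hi] through a stack of
    affine + ReLU layers using interval arithmetic (a simple, sound
    but loose baseline for neural network verification)."""
    for W, b in layers:
        mid = (lo + hi) / 2
        rad = (hi - lo) / 2
        mid = W @ mid + b            # affine map of the interval center
        rad = np.abs(W) @ rad        # worst-case growth of the radius
        lo = np.maximum(mid - rad, 0)  # ReLU is monotone, so it maps
        hi = np.maximum(mid + rad, 0)  # bounds to bounds elementwise
    return lo, hi

rng = np.random.default_rng(3)
layers = [(rng.standard_normal((4, 3)), rng.standard_normal(4)),
          (rng.standard_normal((2, 4)), rng.standard_normal(2))]
lo, hi = interval_bounds(-np.ones(3), np.ones(3), layers)
```

Verification methods such as the one above in the listing trade tightness for speed against such baselines: the interval bounds are cheap but loose, off-the-shelf solvers are tight but slow.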
arXiv Detail & Related papers (2020-02-24T17:55:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.