Quantum Annealing Algorithms for Boolean Tensor Networks
- URL: http://arxiv.org/abs/2107.13659v4
- Date: Mon, 28 Mar 2022 02:47:30 GMT
- Title: Quantum Annealing Algorithms for Boolean Tensor Networks
- Authors: Elijah Pelofske, Georg Hahn, Daniel O'Malley, Hristo N. Djidjev, Boian
S. Alexandrov
- Abstract summary: We introduce and analyze three general algorithms for Boolean tensor networks.
We show that the underlying Boolean matrix factorizations can be expressed as a quadratic unconstrained binary optimization problem suitable for solving on a quantum annealer.
We demonstrate that tensors with up to millions of elements can be decomposed efficiently using a D-Wave 2000Q quantum annealer.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum annealers manufactured by D-Wave Systems, Inc., are computational
devices capable of finding high-quality solutions of NP-hard problems. In this
contribution, we explore the potential and effectiveness of such quantum
annealers for computing Boolean tensor networks. Tensors offer a natural way to
model high-dimensional data commonplace in many scientific fields, and
representing a binary tensor as a Boolean tensor network is the task of
expressing a tensor containing categorical (i.e., {0, 1}) values as a product
of low dimensional binary tensors. A Boolean tensor network is computed by
Boolean tensor decomposition, and it is usually not exact. The aim of such
decomposition is to minimize the given distance measure between the
high-dimensional input tensor and the product of lower-dimensional (usually
three-dimensional) tensors and matrices representing the tensor network. In
this paper, we introduce and analyze three general algorithms for Boolean
tensor networks: Tucker, Tensor Train, and Hierarchical Tucker networks. The
computation of a Boolean tensor network is reduced to a sequence of Boolean
matrix factorizations, which we show can be expressed as a quadratic
unconstrained binary optimization problem suitable for solving on a quantum
annealer. By using a novel method we introduce called \textit{parallel quantum
annealing}, we demonstrate that tensors with up to millions of elements can be
decomposed efficiently using a D-Wave 2000Q quantum annealer.
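As a rough illustration of this reduction (a minimal sketch under simplifying assumptions, not the paper's exact Boolean formulation): once one factor B of a binary matrix factorization is fixed, each row a of the other factor can be solved for independently by minimizing the least-squares surrogate ||x - B^T a||^2 over binary a, which expands directly into a QUBO matrix. The helper names below are illustrative, and a brute-force search stands in for the quantum annealer.

```python
# Minimal sketch: cast the per-row subproblem of a binary matrix factorization
# as a QUBO.  This uses a least-squares surrogate, not the paper's exact
# Boolean (AND/OR) objective; names and the toy instance are illustrative.
import itertools
import numpy as np

def row_factor_qubo(x, B):
    """Build Q (upper-triangular) so that a^T Q a = ||x - B^T a||^2 - const
    for binary a.  x: (m,) target row, B: (r, m) fixed binary factor."""
    r = B.shape[0]
    Q = np.zeros((r, r))
    for k in range(r):
        # diagonal terms use a_k^2 = a_k for binary variables
        Q[k, k] = np.sum(B[k] * (1.0 - 2.0 * x))
        for l in range(k + 1, r):
            Q[k, l] = 2.0 * np.sum(B[k] * B[l])
    return Q

def brute_force_qubo(Q):
    """Exhaustively minimise a^T Q a over binary vectors (stand-in for an annealer)."""
    r = Q.shape[0]
    best_a, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=r):
        a = np.array(bits, dtype=float)
        e = a @ Q @ a
        if e < best_e:
            best_a, best_e = np.array(bits), e
    return best_a

# Toy instance: recover one row of the left factor, given B and the target row x.
B = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])
a_true = np.array([1, 0, 1])
x = np.clip(a_true @ B, 0, 1)          # Boolean product of the true row with B
a_hat = brute_force_qubo(row_factor_qubo(x, B))
print("recovered row:", a_hat)         # expected: [1 0 1]
```

On actual hardware, the same Q matrix would be handed to a D-Wave sampler (e.g., through the Ocean SDK) instead of being enumerated, and the paper's Boolean OR/AND semantics require additional penalty terms not shown in this sketch.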
Related papers
- Positive bias makes tensor-network contraction tractable [0.5437298646956507]
Tensor network contraction is a powerful computational tool in quantum many-body physics, quantum information and quantum chemistry.
Here, we study how the complexity of tensor-network contraction depends on a different notion of quantumness, namely, the sign structure of its entries.
arXiv Detail & Related papers (2024-10-07T18:27:23Z) - Compressing multivariate functions with tree tensor networks [0.0]
One-dimensional tensor networks are increasingly being used as a numerical ansatz for continuum functions.
We show how more structured tree tensor networks offer a significantly more efficient ansatz than the commonly used tensor train.
arXiv Detail & Related papers (2024-10-04T16:20:52Z) - Quick design of feasible tensor networks for constrained combinatorial optimization [1.8775413720750924]
In recent years, tensor networks have been applied to constrained optimization problems for practical applications.
One approach is to construct tensor networks with nilpotent-matrix manipulation.
The proposed method is expected to facilitate the discovery of feasible tensor networks for constrained optimization problems.
arXiv Detail & Related papers (2024-09-03T08:36:23Z) - Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, tensor cumulants, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z) - Low-Rank Tensor Function Representation for Multi-Dimensional Data
Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method as compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-12-01T04:00:38Z) - Latent Matrices for Tensor Network Decomposition and Tensor
Completion [8.301418317685906]
We propose a novel higher-order tensor decomposition model that decomposes the tensor into smaller ones and speeds up the computation of the algorithm.
Three optimization algorithms, LMTN-PAM, LMTN-SVD and LMTN-AR, have been developed and applied to the tensor-completion task.
Experimental results show that our LMTN-SVD algorithm is 3-6 times faster than the FCTN-PAM algorithm, with only a 1.8-point accuracy drop.
arXiv Detail & Related papers (2022-10-07T08:19:50Z) - Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z) - Boolean Hierarchical Tucker Networks on Quantum Annealers [0.0]
We investigate the capabilities of the quantum annealers of D-Wave Systems, Inc.
We show that BHTN can be calculated as a sequence of optimization problems suitable for the D-Wave 2000Q quantum annealer.
Although current technology is still fairly restricted in the problems it can address, we show that a complex problem such as BHTN can be solved efficiently and accurately.
arXiv Detail & Related papers (2021-03-12T16:45:58Z) - SiMaN: Sign-to-Magnitude Network Binarization [165.5630656849309]
We show that our weight binarization provides an analytical solution by encoding high-magnitude weights as +1s and the remaining weights as 0s.
We prove that the learned weights of binarized networks roughly follow a Laplacian distribution that does not allow entropy maximization.
Our method, dubbed sign-to-magnitude network binarization (SiMaN), is evaluated on CIFAR-10 and ImageNet.
arXiv Detail & Related papers (2021-02-16T07:03:51Z) - Connecting Weighted Automata, Tensor Networks and Recurrent Neural
Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z) - T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of an arbitrary shape, which is often seen in Neural Networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z)