Quantum Annealing Algorithms for Boolean Tensor Networks
- URL: http://arxiv.org/abs/2107.13659v4
- Date: Mon, 28 Mar 2022 02:47:30 GMT
- Title: Quantum Annealing Algorithms for Boolean Tensor Networks
- Authors: Elijah Pelofske, Georg Hahn, Daniel O'Malley, Hristo N. Djidjev, Boian S. Alexandrov
- Abstract summary: We introduce and analyze three general algorithms for Boolean tensor networks.
The computation reduces to a sequence of Boolean matrix factorizations, which we show can be expressed as quadratic unconstrained binary optimization (QUBO) problems suitable for solving on a quantum annealer.
We demonstrate that tensors with up to millions of elements can be decomposed efficiently using a D-Wave 2000Q quantum annealer.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum annealers manufactured by D-Wave Systems, Inc., are computational
devices capable of finding high-quality solutions of NP-hard problems. In this
contribution, we explore the potential and effectiveness of such quantum
annealers for computing Boolean tensor networks. Tensors offer a natural way to
model high-dimensional data commonplace in many scientific fields, and
representing a binary tensor as a Boolean tensor network is the task of
expressing a tensor containing categorical (i.e., {0, 1}) values as a product
of low dimensional binary tensors. A Boolean tensor network is computed by
Boolean tensor decomposition, and it is usually not exact. The aim of such
decomposition is to minimize the given distance measure between the
high-dimensional input tensor and the product of lower-dimensional (usually
three-dimensional) tensors and matrices representing the tensor network. In
this paper, we introduce and analyze three general algorithms for Boolean
tensor networks: Tucker, Tensor Train, and Hierarchical Tucker networks. The
computation of a Boolean tensor network is reduced to a sequence of Boolean
matrix factorizations, which we show can be expressed as a quadratic
unconstrained binary optimization problem suitable for solving on a quantum
annealer. By using a novel method we introduce called "parallel quantum
annealing", we demonstrate that tensors with up to millions of elements can be
decomposed efficiently using a D-Wave 2000Q quantum annealer.
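To make the objective concrete, here is a minimal sketch of rank-r Boolean matrix factorization in Python. It minimizes the Hamming distance between a binary matrix X and the Boolean product A ∘ B, the same cost a QUBO encodes; the exhaustive search below merely stands in for the annealer and is only feasible for toy sizes (function names are illustrative, not from the paper).

```python
import itertools

import numpy as np

def boolean_matmul(a, b):
    """Boolean matrix product: (A o B)_ij = OR_k (A_ik AND B_kj)."""
    return (a @ b > 0).astype(int)

def hamming(x, y):
    """Number of entries where two binary matrices disagree."""
    return int(np.sum(x != y))

def brute_force_bmf(x, rank):
    """Exhaustive search over binary factors A (n x r) and B (r x m)
    minimizing the Hamming distance to X -- the objective a QUBO
    encodes for the annealer."""
    n, m = x.shape
    best, best_cost = None, x.size + 1
    for a_bits in itertools.product((0, 1), repeat=n * rank):
        a = np.array(a_bits).reshape(n, rank)
        for b_bits in itertools.product((0, 1), repeat=rank * m):
            b = np.array(b_bits).reshape(rank, m)
            cost = hamming(boolean_matmul(a, b), x)
            if cost < best_cost:
                best, best_cost = (a, b), cost
    return best, best_cost

x = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]])
(a, b), cost = brute_force_bmf(x, rank=2)
```

This particular X admits an exact rank-2 Boolean factorization, so the search finds cost 0; in general the decomposition is approximate and the minimum distance is positive.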
Related papers
- Tensor cumulants for statistical inference on invariant distributions [49.80012009682584]
We show that PCA becomes computationally hard at a critical value of the signal's magnitude.
We define a new set of objects, which provide an explicit, near-orthogonal basis for invariants of a given degree.
It also lets us analyze a new problem of distinguishing between different ensembles.
arXiv Detail & Related papers (2024-04-29T14:33:24Z)
- Low-Rank Tensor Function Representation for Multi-Dimensional Data Recovery [52.21846313876592]
Low-rank tensor function representation (LRTFR) can continuously represent data beyond meshgrid with infinite resolution.
We develop two fundamental concepts for tensor functions, i.e., the tensor function rank and low-rank tensor function factorization.
Experiments substantiate the superiority and versatility of our method as compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-12-01T04:00:38Z)
- Latent Matrices for Tensor Network Decomposition and to Tensor Completion [8.301418317685906]
We propose a novel higher-order tensor decomposition model that decomposes the tensor into smaller ones and speeds up the computation of the algorithm.
Three optimization algorithms, LMTN-PAM, LMTN-SVD and LMTN-AR, have been developed and applied to the tensor-completion task.
Experimental results show that our LMTN-SVD algorithm is 3-6 times faster than the FCTN-PAM algorithm, with only a 1.8-point accuracy drop.
arXiv Detail & Related papers (2022-10-07T08:19:50Z)
- Error Analysis of Tensor-Train Cross Approximation [88.83467216606778]
We provide accuracy guarantees in terms of the entire tensor for both exact and noisy measurements.
Results are verified by numerical experiments, and may have important implications for the usefulness of cross approximations for high-order tensors.
arXiv Detail & Related papers (2022-07-09T19:33:59Z)
- Tensor networks in machine learning [0.0]
A tensor network is a decomposition used to express and approximate large arrays of data.
A merger of tensor networks with machine learning is natural.
Herein the network parameters are adjusted to learn or classify a data-set.
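As a concrete instance of such a decomposition, here is a minimal sequential-SVD sketch of the Tensor-Train format in Python. This is a standard textbook construction, not code from the cited paper; `tt_decompose` and `max_rank` are illustrative names.

```python
import numpy as np

def tt_decompose(tensor, max_rank):
    """Sequential-SVD sketch of a Tensor-Train decomposition: each step
    reshapes the remainder into a matrix, takes a truncated SVD, and
    peels off one three-dimensional TT core."""
    dims = tensor.shape
    cores, rank = [], 1
    rest = np.asarray(tensor, dtype=float)
    for d in dims[:-1]:
        mat = rest.reshape(rank * d, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(rank, d, r))
        rest = np.diag(s[:r]) @ vt[:r]
        rank = r
    cores.append(rest.reshape(rank, dims[-1], 1))
    return cores

def tt_contract(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=1)  # join neighboring rank legs
    return out.reshape([c.shape[1] for c in cores])

t = np.arange(120, dtype=float).reshape(4, 5, 6)
cores = tt_decompose(t, max_rank=30)  # rank bound large enough for exactness
```

With no rank truncation the reconstruction is exact up to floating-point error; lowering `max_rank` trades accuracy for a smaller representation, which is the point of the format.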
arXiv Detail & Related papers (2022-07-06T18:00:00Z)
- Tensor Networks for Simulating Quantum Circuits on FPGAs [0.0]
Most research in quantum computing today is performed against simulations of quantum computers rather than true quantum computers.
One way to accelerate such a simulation is to use field programmable gate array (FPGA) hardware to efficiently compute the matrix multiplications.
One way to potentially reduce the memory footprint of a quantum computing system is to represent it as a tensor network.
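The matrix multiplications in question are easy to see in a state-vector simulation: each gate is a unitary applied to the state, and these products are exactly what an FPGA would offload. A minimal NumPy sketch (not from the cited paper) preparing a Bell state:

```python
import numpy as np

# Each gate is a unitary matrix; simulating the circuit is a chain of
# matrix-vector products on the state vector.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

state = np.array([1.0, 0.0, 0.0, 0.0])          # |00>
state = np.kron(H, np.eye(2)) @ state           # H on the first qubit
state = CNOT @ state                            # entangle -> Bell state
probs = np.abs(state) ** 2                      # measurement probabilities
```

The state vector doubles in size with each qubit, which is why tensor-network representations that avoid storing it explicitly can reduce the memory footprint.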
arXiv Detail & Related papers (2021-08-15T22:43:38Z)
- Boolean Hierarchical Tucker Networks on Quantum Annealers [0.0]
We investigate the capabilities of the quantum annealers of D-Wave Systems, Inc.
We show that BHTN can be calculated as a sequence of optimization problems suitable for the D-Wave 2000Q quantum annealer.
Although current technology is still fairly restricted in the problems it can address, we show that a complex problem such as BHTN can be solved efficiently and accurately.
arXiv Detail & Related papers (2021-03-12T16:45:58Z)
- SiMaN: Sign-to-Magnitude Network Binarization [165.5630656849309]
We show that our weight binarization provides an analytical solution by encoding high-magnitude weights into +1s, and 0s otherwise.
We prove that the learned weights of binarized networks roughly follow a Laplacian distribution that does not allow entropy maximization.
Our method, dubbed sign-to-magnitude network binarization (SiMaN), is evaluated on CIFAR-10 and ImageNet.
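The {0, +1} encoding described above can be sketched in a few lines of Python. This is a hypothetical illustration of magnitude-based binarization, not the paper's method: SiMaN derives an analytical solution, whereas the fixed quantile threshold and the name `magnitude_binarize` here are assumptions for the example.

```python
import numpy as np

def magnitude_binarize(w, keep_ratio=0.5):
    """Illustrative sketch: weights whose magnitude falls in the top
    keep_ratio fraction become +1, the rest become 0. (The paper's
    analytical solution replaces this fixed-quantile heuristic.)"""
    thresh = np.quantile(np.abs(w), 1.0 - keep_ratio)
    return (np.abs(w) >= thresh).astype(int)

w = np.array([0.9, -0.8, 0.1, -0.05])
b = magnitude_binarize(w)  # high-magnitude weights -> +1, others -> 0
```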
arXiv Detail & Related papers (2021-02-16T07:03:51Z)
- Tensor-Train Networks for Learning Predictive Modeling of Multidimensional Data [0.0]
A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications.
We show that the weights of a multidimensional regression model can be learned by means of tensor networks with the aim of performing a powerful compact representation.
An algorithm based on alternating least squares has been proposed for approximating the weights in TT-format with a reduction of computational power.
arXiv Detail & Related papers (2021-01-22T16:14:38Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata(WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- T-Basis: a Compact Representation for Neural Networks [89.86997385827055]
We introduce T-Basis, a concept for a compact representation of a set of tensors, each of an arbitrary shape, which is often seen in Neural Networks.
We evaluate the proposed approach on the task of neural network compression and demonstrate that it reaches high compression rates at acceptable performance drops.
arXiv Detail & Related papers (2020-07-13T19:03:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.