Kernel Matrix Completion for Offline Quantum-Enhanced Machine Learning
- URL: http://arxiv.org/abs/2112.08449v1
- Date: Wed, 15 Dec 2021 19:44:39 GMT
- Title: Kernel Matrix Completion for Offline Quantum-Enhanced Machine Learning
- Authors: Annie Naveh, Imogen Fitzgerald, Anna Phan, Andrew Lockwood, and Travis L. Scholten
- Abstract summary: We show quantum kernel matrices can be extended to incorporate new data using a classical (chordal-graph-based) matrix completion algorithm.
The minimal sample complexity needed for perfect completion is dependent on matrix rank.
On a real-world, industrially-relevant data set, the completion error behaves gracefully even when the minimal sample complexity is not reached.
- Score: 0.09786690381850353
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Enhancing classical machine learning (ML) algorithms through quantum kernels
is a rapidly growing research topic in quantum machine learning (QML). A key
challenge in using kernels -- both classical and quantum -- is that ML
workflows involve acquiring new observations, for which new kernel values need
to be calculated. Transferring data back-and-forth between where the new
observations are generated and a quantum computer incurs a time delay; this delay
may exceed the timescales relevant for using the QML algorithm in the first
place. In this work, we show quantum kernel matrices can be extended to
incorporate new data using a classical (chordal-graph-based) matrix completion
algorithm. The minimal sample complexity needed for perfect completion is
dependent on matrix rank. We empirically show that (a) quantum kernel matrices
can be completed using this algorithm when the minimal sample complexity is
met, (b) the error of the completion degrades gracefully in the presence of
finite-sampling noise, and (c) the rank of quantum kernel matrices depends
weakly on the expressibility of the quantum feature map generating the kernel.
Further, on a real-world, industrially-relevant data set, the completion error
behaves gracefully even when the minimal sample complexity is not reached.
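The core idea — that a rank-$r$ kernel matrix can be extended to new data points from only $r$ measured kernel values per new point — can be illustrated with a simplified low-rank factorization sketch. This is not the paper's chordal-graph-based algorithm; it is a minimal NumPy illustration under the assumption of an exactly rank-$r$, noise-free kernel matrix, with the "anchor" old points chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a synthetic rank-r "kernel" matrix K = G G^T over old + new points.
n_old, n_new, r = 8, 3, 2
G = rng.normal(size=(n_old + n_new, r))
K = G @ G.T  # full (n_old + n_new) x (n_old + n_new) Gram matrix, rank r

# Assume we hold the full old-vs-old kernel block, and for each new point
# only its kernel values against r "anchor" old points -- the minimal
# number of samples for a rank-r matrix.
anchors = list(range(r))
K_old = K[:n_old, :n_old]
B_anchor = K[anchors, n_old:]  # r x n_new measured entries

# Recover a rank-r factor of the old block: K_old = F F^T.
w, V = np.linalg.eigh(K_old)
idx = np.argsort(w)[::-1][:r]
F = V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))  # n_old x r

# Solve for the new points' factors from the measured anchor rows:
# K[anchors, new] = F[anchors] @ F_new^T.
F_new = np.linalg.solve(F[anchors], B_anchor).T  # n_new x r

# Complete the missing old-vs-new block and check against ground truth.
B_completed = F @ F_new.T
err = np.linalg.norm(B_completed - K[:n_old, n_old:])
print(f"completion error: {err:.2e}")
```

With exact rank-$r$ data the completed block matches the ground truth to machine precision; under the finite-sampling noise studied in the paper, the analogous completion error degrades gracefully rather than exactly vanishing.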
Related papers
- QUACK: Quantum Aligned Centroid Kernel [0.0]
We introduce QUACK, a quantum kernel algorithm whose time complexity scales linearly with the number of samples during training.
Our algorithm is able to handle high-dimensional datasets such as MNIST with 784 features without any dimensionality reduction.
arXiv Detail & Related papers (2024-05-01T04:00:09Z)
- Randomized semi-quantum matrix processing [0.0]
We present a hybrid quantum-classical framework for Monte-Carlo simulation of generic matrix functions.
Our framework provides a pathway towards early fault-tolerant quantum linear algebra applications.
arXiv Detail & Related papers (2023-07-21T18:00:28Z)
- Robust Dequantization of the Quantum Singular Value Transformation and Quantum Machine Learning Algorithms [0.0]
We show how many techniques from randomized linear algebra can be adapted to work under this weaker assumption.
We also apply these results to obtain a robust dequantization of many quantum machine learning algorithms.
arXiv Detail & Related papers (2023-04-11T02:09:13Z)
- Quantum Clustering with k-Means: a Hybrid Approach [117.4705494502186]
We design, implement, and evaluate three hybrid quantum k-Means algorithms.
We exploit quantum phenomena to speed up the computation of distances.
We show that our hybrid quantum k-Means algorithms can be more efficient than the classical version.
arXiv Detail & Related papers (2022-12-13T16:04:16Z)
- Quantum Worst-Case to Average-Case Reductions for All Linear Problems [66.65497337069792]
We study the problem of designing worst-case to average-case reductions for quantum algorithms.
We provide an explicit and efficient transformation of quantum algorithms that are only correct on a small fraction of their inputs into ones that are correct on all inputs.
arXiv Detail & Related papers (2022-12-06T22:01:49Z)
- Noisy Quantum Kernel Machines [58.09028887465797]
An emerging class of quantum learning machines is that based on the paradigm of quantum kernels.
We study how dissipation and decoherence affect their performance.
We show that decoherence and dissipation can be seen as an implicit regularization for the quantum kernel machines.
arXiv Detail & Related papers (2022-04-26T09:52:02Z)
- Quantum Semi-Supervised Kernel Learning [4.726777092009554]
We present a quantum machine learning algorithm for training Semi-Supervised Kernel Support Vector Machines.
We show that it maintains the same speedup as the fully-supervised Quantum LS-SVM.
arXiv Detail & Related papers (2022-04-22T13:39:55Z)
- An Algebraic Quantum Circuit Compression Algorithm for Hamiltonian Simulation [55.41644538483948]
Current generation noisy intermediate-scale quantum (NISQ) computers are severely limited in chip size and error rates.
We derive localized circuit transformations to efficiently compress quantum circuits for simulation of certain spin Hamiltonians known as free fermions.
The proposed numerical circuit compression algorithm is backward stable and scales cubically in the number of spins, enabling circuit synthesis beyond $\mathcal{O}(10^3)$ spins.
arXiv Detail & Related papers (2021-08-06T19:38:03Z)
- Quantum Machine Learning For Classical Data [0.0]
We study the intersection of quantum computing and supervised machine learning algorithms.
In particular, we investigate to what extent quantum computers can be used to accelerate supervised machine learning algorithms.
arXiv Detail & Related papers (2021-05-08T12:11:44Z)
- Efficient Algorithms for Causal Order Discovery in Quantum Networks [44.356294905844834]
Given black-box access to the input and output systems, we develop the first efficient quantum causal order discovery algorithm.
We model the causal order with quantum combs, and our algorithms output the order of inputs and outputs that the given process is compatible with.
Our algorithms will provide efficient ways to detect and optimize available transmission paths in quantum communication networks.
arXiv Detail & Related papers (2020-12-03T07:12:08Z)
- Statistical Limits of Supervised Quantum Learning [90.0289160657379]
We show that if the bound on the accuracy is taken into account, quantum machine learning algorithms for supervised learning cannot achieve polylogarithmic runtimes in the input dimension.
We conclude that, when no further assumptions on the problem are made, quantum machine learning algorithms for supervised learning can have at most polynomial speedups over efficient classical algorithms.
arXiv Detail & Related papers (2020-01-28T17:35:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.