Bandwidth Enables Generalization in Quantum Kernel Models
- URL: http://arxiv.org/abs/2206.06686v3
- Date: Sun, 18 Jun 2023 17:21:58 GMT
- Title: Bandwidth Enables Generalization in Quantum Kernel Models
- Authors: Abdulkadir Canatar, Evan Peters, Cengiz Pehlevan, Stefan M. Wild,
Ruslan Shaydulin
- Abstract summary: Recent results show that generalization of quantum models is hindered by the exponential size of the quantum feature space.
We show that changing the value of the bandwidth can take a model from provably not being able to generalize to any target function to good generalization for well-aligned targets.
- Score: 16.940180366663903
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum computers are known to provide speedups over classical
state-of-the-art machine learning methods in some specialized settings. For
example, quantum kernel methods have been shown to provide an exponential
speedup on a learning version of the discrete logarithm problem. Understanding
the generalization of quantum models is essential to realizing similar speedups
on problems of practical interest. Recent results demonstrate that
generalization is hindered by the exponential size of the quantum feature
space. Although these results suggest that quantum models cannot generalize
when the number of qubits is large, in this paper we show that these results
rely on overly restrictive assumptions. We consider a wider class of models by
varying a hyperparameter that we call quantum kernel bandwidth. We analyze the
large-qubit limit and provide explicit formulas for the generalization of a
quantum model that can be solved in closed form. Specifically, we show that
changing the value of the bandwidth can take a model from provably not being
able to generalize to any target function to good generalization for
well-aligned targets. Our analysis shows how the bandwidth controls the
spectrum of the kernel integral operator and thereby the inductive bias of the
model. We demonstrate empirically that our theory correctly predicts how
varying the bandwidth affects generalization of quantum models on challenging
datasets, including those far outside our theoretical assumptions. We discuss
the implications of our results for quantum advantage in machine learning.
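To make the bandwidth mechanism concrete, here is a minimal numerical sketch. It assumes an illustrative product feature map $|\phi(x)\rangle = \bigotimes_i R_X(c\,x_i)|0\rangle$, whose fidelity kernel has the closed form $k(x, x') = \prod_i \cos^2(c\,(x_i - x'_i)/2)$; this map, the function name `fidelity_kernel`, and the parameter choices are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def fidelity_kernel(X1, X2, c):
    """Fidelity kernel of an illustrative product feature map
    |phi(x)> = prod_i RX(c * x_i)|0>, which evaluates in closed form to
    k(x, x') = prod_i cos^2(c * (x_i - x'_i) / 2).
    c is the bandwidth: small c makes the kernel vary slowly."""
    diff = X1[:, None, :] - X2[None, :, :]            # shape (n1, n2, d)
    return np.prod(np.cos(c * diff / 2.0) ** 2, axis=-1)

rng = np.random.default_rng(0)
for d in (2, 8, 32):                                  # number of qubits
    X = rng.uniform(-np.pi, np.pi, size=(200, d))
    for c in (1.0, 0.1):
        K = fidelity_kernel(X, X, c)
        off = K[~np.eye(len(K), dtype=bool)]          # off-diagonal entries
        print(f"qubits={d:2d}  bandwidth={c:4.2f}  "
              f"mean k={off.mean():.3f}  std={off.std():.3f}")
```

At bandwidth $c = 1$ the off-diagonal kernel values collapse toward zero as the qubit count grows, the exponential-feature-space obstruction described above; shrinking the bandwidth keeps them of order one and preserves a usable inductive bias.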
Related papers
- Generalized geometric speed limits for quantum observables [1.451121761055173]
We derive generalized quantum speed limits on the rate of change of the expectation values of observables.
These bounds subsume and, for Hilbert space dimension $\geq 3$, tighten existing bounds.
The generalized bounds can be used to design "fast" Hamiltonians that enable the rapid driving of the expectation values of observables.
arXiv Detail & Related papers (2024-09-06T18:15:58Z)
- The curse of random quantum data [62.24825255497622]
We quantify the performance of quantum machine learning across the landscape of quantum data.
We find that training efficiency and generalization capability in quantum machine learning are exponentially suppressed as the number of qubits increases.
Our findings apply to both the quantum kernel method and the large-width limit of quantum neural networks.
arXiv Detail & Related papers (2024-08-19T12:18:07Z)
- Power Characterization of Noisy Quantum Kernels [52.47151453259434]
We show that noise may leave quantum kernel methods with only poor prediction capability, even when the generalization error is small.
We provide a crucial warning about employing noisy quantum kernel methods for quantum computation.
arXiv Detail & Related papers (2024-01-31T01:02:16Z)
- Quantum Kernel Machine Learning With Continuous Variables [0.0]
Recent work on quantum kernel machine learning has been dominated by the popular qubit framework.
No comparable framework exists for understanding these concepts on continuous-variable (CV) quantum computing platforms.
arXiv Detail & Related papers (2024-01-11T03:49:40Z)
- Expressibility-induced Concentration of Quantum Neural Tangent Kernels [4.561685127984694]
We study the connections between the trainability and expressibility of quantum tangent kernel models.
For global loss functions, we rigorously prove that high expressibility of both the global and local quantum encodings can lead to exponential concentration of quantum tangent kernel values to zero.
Our discoveries unveil a pivotal characteristic of quantum neural tangent kernels, offering valuable insights for the design of wide quantum variational circuit models.
arXiv Detail & Related papers (2023-11-08T19:00:01Z)
- Numerical evidence against advantage with quantum fidelity kernels on classical data [12.621805903645711]
We show that quantum kernels suffer from exponential "flattening" of the spectrum as the number of qubits grows.
We provide extensive numerical evidence for this phenomenon utilizing multiple previously studied quantum feature maps and both synthetic and real data.
Our results show that unless novel techniques are developed to control the inductive bias of quantum kernels, they are unlikely to provide a quantum advantage on classical data.
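A minimal sketch of this flattening effect, under the same kind of illustrative product feature map assumed earlier (not necessarily the feature maps studied in the paper): the eigenvalues of the Gram matrix divided by $n$ estimate the spectrum of the kernel integral operator, and at fixed bandwidth they spread out as qubits are added.

```python
import numpy as np

def fidelity_kernel(X, c=1.0):
    """Illustrative product-map fidelity kernel (an assumption of this sketch):
    k(x, x') = prod_i cos^2(c * (x_i - x'_i) / 2)."""
    diff = X[:, None, :] - X[None, :, :]
    return np.prod(np.cos(c * diff / 2.0) ** 2, axis=-1)

rng = np.random.default_rng(1)
n = 400
for d in (2, 6, 12, 24):                     # number of qubits
    X = rng.uniform(-np.pi, np.pi, size=(n, d))
    # Eigenvalues of K / n estimate the kernel integral operator's spectrum.
    eig = np.sort(np.linalg.eigvalsh(fidelity_kernel(X)))[::-1] / n
    print(f"qubits={d:2d}  top eigenvalue={eig[0]:.3f}  "
          f"top-10 spectral mass={eig[:10].sum():.3f}")
```

As $d$ grows at fixed bandwidth, the leading eigenvalue decays and the spectral mass spreads over many modes; this flat spectrum is what blocks generalization unless the inductive bias is controlled, for example via bandwidth.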
arXiv Detail & Related papers (2022-11-29T19:23:11Z)
- Theory of Quantum Generative Learning Models with Maximum Mean Discrepancy [67.02951777522547]
We study the learnability of quantum circuit Born machines (QCBMs) and quantum generative adversarial networks (QGANs).
We first analyze the generalization ability of QCBMs and identify their advantages when quantum devices can directly access the target distribution.
Next, we prove how the generalization error bound of QGANs depends on the employed Ansatz, the number of qudits, and input states.
arXiv Detail & Related papers (2022-05-10T08:05:59Z)
- Noisy Quantum Kernel Machines [58.09028887465797]
An emerging class of quantum learning machines is based on the paradigm of quantum kernels.
We study how dissipation and decoherence affect their performance.
We show that decoherence and dissipation can be seen as an implicit regularization for the quantum kernel machines.
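As a hedged illustration of that regularization view, consider global depolarizing noise $\rho \mapsto (1-p)\rho + p\,I/2^d$ applied to both states entering a fidelity kernel; the Gram matrix then shrinks toward a constant, qualitatively similar to adding explicit regularization. The noise model and the helper `noisy_gram` below are assumptions for this sketch, not the paper's exact setting.

```python
import numpy as np

def fidelity_kernel(X, c=1.0):
    # Illustrative product-map fidelity kernel (an assumption of this sketch).
    diff = X[:, None, :] - X[None, :, :]
    return np.prod(np.cos(c * diff / 2.0) ** 2, axis=-1)

def noisy_gram(K, p, d):
    """Fidelity Gram matrix after global depolarizing noise of strength p
    on d-qubit states, rho -> (1 - p) * rho + p * I / 2**d:
    Tr[rho'(x) rho'(x')] = (1-p)**2 * k(x, x') + (2*p*(1-p) + p**2) / 2**d.
    The kernel is shrunk toward a constant, acting as an implicit regularizer."""
    return (1 - p) ** 2 * K + (2 * p * (1 - p) + p ** 2) / 2 ** d

rng = np.random.default_rng(2)
X = rng.uniform(-np.pi, np.pi, size=(100, 8))
K = fidelity_kernel(X)
for p in (0.0, 0.1, 0.3):
    # The spread of kernel values shrinks by (1 - p)**2: the kernel flattens.
    print(f"p={p:.1f}  kernel value spread (std): {noisy_gram(K, p, d=8).std():.4f}")
```

The spread of kernel values contracts by $(1-p)^2$, so fitting with the noisy kernel behaves like fitting a smoothed, more strongly regularized model.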
arXiv Detail & Related papers (2022-04-26T09:52:02Z)
- Generalization Metrics for Practical Quantum Advantage in Generative Models [68.8204255655161]
Generative modeling is a widely accepted natural use case for quantum computers.
We construct a simple and unambiguous approach to probe practical quantum advantage for generative modeling by measuring the algorithm's generalization performance.
Our simulation results show that our quantum-inspired models have up to a $68\times$ enhancement in generating unseen unique and valid samples.
arXiv Detail & Related papers (2022-01-21T16:35:35Z)
- The Hintons in your Neural Network: a Quantum Field Theory View of Deep Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z)