Quantum Kerr Learning
- URL: http://arxiv.org/abs/2205.12004v1
- Date: Fri, 20 May 2022 21:45:29 GMT
- Title: Quantum Kerr Learning
- Authors: Junyu Liu, Changchun Zhong, Matthew Otten, Cristian L. Cortes,
Chaoyang Ti, Stephen K Gray, Xu Han
- Abstract summary: We argue that a single Kerr mode might provide some extra quantum enhancements when using quantum kernel methods.
A detailed study using the kernel method, neural tangent kernel theory, first-order perturbation theory of the Kerr non-linearity, and non-perturbative numerical simulations, shows quantum enhancements could happen.
- Score: 10.109956955906718
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum machine learning is a rapidly evolving area that could facilitate
important applications for quantum computing and significantly impact data
science. In our work, we argue that a single Kerr mode might provide some extra
quantum enhancements when using quantum kernel methods based on various reasons
from complexity theory and physics. Furthermore, we establish an experimental
protocol, which we call \emph{quantum Kerr learning} based on circuit QED. A
detailed study using the kernel method, neural tangent kernel theory,
first-order perturbation theory of the Kerr non-linearity, and non-perturbative
numerical simulations, shows quantum enhancements could happen in terms of the
convergence time and the generalization error, while explicit protocols are
also constructed for higher-dimensional input data.
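The kernel construction described in the abstract can be caricatured classically: encode a scalar input into a coherent state and a data-dependent Kerr phase, then take the fidelity between feature states as the kernel. This is a minimal sketch under an illustrative encoding (data entering both the displacement amplitude and the Kerr phase); it is not the paper's exact circuit-QED protocol, and `chi` and the Fock cutoff `n_max` are arbitrary choices.

```python
import math, cmath

def kerr_feature_state(x, chi=0.3, n_max=30):
    """Fock-basis amplitudes of |psi(x)> = exp(-i*chi*x*(a^dag a)^2) D(x)|0>:
    a coherent state of amplitude x followed by a data-dependent Kerr phase.
    (Illustrative encoding, not the paper's exact protocol.)"""
    norm = math.exp(-x * x / 2)
    return [norm * x**n / math.sqrt(math.factorial(n))
            * cmath.exp(-1j * chi * x * n * n)
            for n in range(n_max)]

def kerr_kernel(x, y, chi=0.3, n_max=30):
    """Fidelity kernel K(x, y) = |<psi(y)|psi(x)>|^2. Positive semidefinite
    by construction, since K = Tr[rho(x) rho(y)] for pure states."""
    px = kerr_feature_state(x, chi, n_max)
    py = kerr_feature_state(y, chi, n_max)
    overlap = sum(b.conjugate() * a for a, b in zip(px, py))
    return abs(overlap) ** 2

print(kerr_kernel(1.0, 1.0))   # = 1 up to Fock truncation
print(kerr_kernel(0.5, 1.5))   # distinct inputs overlap only partially
```

Because the Kerr phase here depends on the data, it does not cancel in the overlap, so the kernel differs from a plain coherent-state (Gaussian) kernel.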
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error against computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Separable Power of Classical and Quantum Learning Protocols Through the Lens of No-Free-Lunch Theorem [70.42372213666553]
The No-Free-Lunch (NFL) theorem quantifies problem- and data-independent generalization errors regardless of the optimization process.
We categorize a diverse array of quantum learning algorithms into three learning protocols designed for learning quantum dynamics under a specified observable.
Our derived NFL theorems demonstrate quadratic reductions in sample complexity across CLC-LPs, ReQu-LPs, and Qu-LPs.
We attribute this performance discrepancy to the unique capacity of quantum-related learning protocols to indirectly utilize information concerning the global phases of non-orthogonal quantum states.
arXiv Detail & Related papers (2024-05-12T09:05:13Z) - A Kerr kernel quantum learning machine [0.0]
We propose a quantum hardware kernel implementation scheme based on superconducting quantum circuits.
The scheme does not use qubits or quantum circuits but rather exploits the analogue features of Kerr modes.
arXiv Detail & Related papers (2024-04-02T09:50:33Z) - Power Characterization of Noisy Quantum Kernels [52.47151453259434]
We show that noise may leave quantum kernel methods with only poor prediction capability, even when the generalization error is small.
We provide a crucial warning about employing noisy quantum kernel methods for quantum computation.
arXiv Detail & Related papers (2024-01-31T01:02:16Z) - Hamiltonian Encoding for Quantum Approximate Time Evolution of Kinetic
Energy Operator [2.184775414778289]
The time evolution operator plays a crucial role in the precise computation of chemical experiments on quantum computers.
We have proposed a new encoding method, namely quantum approximate time evolution (QATE) for the quantum implementation of the kinetic energy operator.
arXiv Detail & Related papers (2023-10-05T05:25:38Z) - The Quantum Path Kernel: a Generalized Quantum Neural Tangent Kernel for
Deep Quantum Machine Learning [52.77024349608834]
Building a quantum analog of classical deep neural networks represents a fundamental challenge in quantum computing.
A key issue is how to address the inherent non-linearity of classical deep learning.
We introduce the Quantum Path Kernel, a formulation of quantum machine learning capable of replicating those aspects of deep machine learning.
arXiv Detail & Related papers (2022-12-22T16:06:24Z) - Quantum tangent kernel [0.8921166277011345]
In this work, we explore a quantum machine learning model with a deep parameterized quantum circuit.
We find that the parameters of a deep enough quantum circuit do not move much from their initial values during training.
Such deep variational quantum machine learning can be described by another emergent kernel, the quantum tangent kernel.
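The lazy-training behaviour summarised above has a well-known classical analogue in neural tangent kernel theory: as a suitably scaled model gets wider, its parameters move less during gradient descent. A rough numerical sketch with a toy one-hidden-layer network (the widths, data, and learning rate are arbitrary illustrative choices, not taken from the paper):

```python
import math, random

def train_drift(width, steps=100, lr=0.2, seed=0):
    """Train f(x) = (1/sqrt(width)) * sum_j a_j * tanh(w_j * x) by gradient
    descent on a tiny regression task; return the mean relative movement of
    the hidden weights w away from their initial values."""
    rng = random.Random(seed)
    xs = [-1.0, -0.5, 0.5, 1.0]
    ys = [0.2, -0.4, 0.4, -0.2]          # arbitrary small regression targets
    w0 = [rng.gauss(0.0, 1.0) for _ in range(width)]
    a = [rng.choice((-1.0, 1.0)) for _ in range(width)]
    w = list(w0)
    s = 1.0 / math.sqrt(width)            # NTK-style output scaling
    for _ in range(steps):
        grad = [0.0] * width
        for x, y in zip(xs, ys):
            t = [math.tanh(w[j] * x) for j in range(width)]
            r = s * sum(a[j] * t[j] for j in range(width)) - y   # residual
            for j in range(width):
                # gradient of 0.5 * r^2 with respect to w_j
                grad[j] += r * s * a[j] * x * (1.0 - t[j] * t[j])
        w = [w[j] - lr * grad[j] for j in range(width)]
    return sum(abs(w[j] - w0[j]) for j in range(width)) / sum(abs(v) for v in w0)

narrow, wide = train_drift(50), train_drift(5000)
print(wide < narrow)   # wider model => lazier training: parameters barely move
```

Because each weight's gradient carries the 1/sqrt(width) output scale, per-parameter movement shrinks roughly like 1/sqrt(width) even though the function itself changes by an O(1) amount, which is the regime in which an emergent tangent kernel describes training.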
arXiv Detail & Related papers (2021-11-04T15:38:52Z) - Towards understanding the power of quantum kernels in the NISQ era [79.8341515283403]
We show that the advantage of quantum kernels vanishes for large datasets, a small number of measurements, and large system noise.
Our work provides theoretical guidance for exploring advanced quantum kernels to attain quantum advantages on NISQ devices.
arXiv Detail & Related papers (2021-03-31T02:41:36Z) - Quantum machine learning models are kernel methods [0.0]
This technical manuscript summarises, formalises, and extends the link between quantum models and kernel methods by systematically rephrasing quantum models as kernel methods.
It shows that most near-term and fault-tolerant quantum models can be replaced by a general support vector machine.
In particular, kernel-based training is guaranteed to find better or equally good quantum models than variational circuit training.
arXiv Detail & Related papers (2021-01-26T19:00:04Z) - Information Scrambling in Computationally Complex Quantum Circuits [56.22772134614514]
We experimentally investigate the dynamics of quantum scrambling on a 53-qubit quantum processor.
We show that while operator spreading is captured by an efficient classical model, operator entanglement requires exponentially scaled computational resources to simulate.
arXiv Detail & Related papers (2021-01-21T22:18:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.