Quantum Alphatron: quantum advantage for learning with kernels and noise
- URL: http://arxiv.org/abs/2108.11670v5
- Date: Wed, 4 Oct 2023 07:28:01 GMT
- Title: Quantum Alphatron: quantum advantage for learning with kernels and noise
- Authors: Siyi Yang, Naixu Guo, Miklos Santha, Patrick Rebentrost
- Abstract summary: We provide quantum versions of the Alphatron in the fault-tolerant setting.
We discuss the quantum advantage in the context of learning of two-layer neural networks.
- Score: 2.94944680995069
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: At the interface of machine learning and quantum computing, an important
question is what distributions can be learned provably with optimal sample
complexities and with quantum-accelerated time complexities. In the classical
case, Klivans and Goel discussed the Alphatron, an algorithm to learn
distributions related to kernelized regression, which they also applied to the
learning of two-layer neural networks. In this work, we provide quantum
versions of the Alphatron in the fault-tolerant setting. In a well-defined
learning model, this quantum algorithm is able to provide a polynomial speedup
for a large range of parameters of the underlying concept class. We discuss two
types of speedups, one for evaluating the kernel matrix and one for evaluating
the gradient in the stochastic gradient descent procedure. We also discuss the
quantum advantage in the context of learning of two-layer neural networks. Our
work contributes to the study of quantum learning with kernels and from
samples.
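For orientation, below is a minimal classical sketch of the Alphatron update that the quantum versions accelerate: the hypothesis has the form u(sum_j alpha_j K(x_j, x)) for a known monotone transfer function u, and each iteration updates alpha with the averaged residuals. The two classical bottlenecks marked in the comments (kernel-matrix evaluation and the gradient-style residual evaluation) correspond to the two speedups discussed in the abstract; the function names and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def alphatron(X, y, u, kernel, T=100, lam=0.1):
    """Minimal classical Alphatron sketch (after Goel and Klivans).

    Learns a hypothesis h(x) = u(sum_j alpha_j * K(x_j, x)) for a known
    monotone, Lipschitz transfer function u (e.g. a sigmoid).  The quantum
    variants discussed in the paper target the two bottlenecks below:
    building the kernel matrix and evaluating the residual-based update.
    """
    m = len(y)
    # Bottleneck 1: kernel-matrix evaluation, O(m^2) kernel calls classically.
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])

    alpha = np.zeros(m)
    for _ in range(T):
        # Current predictions on the training sample.
        preds = u(K @ alpha)
        # Bottleneck 2: gradient-style update from the averaged residuals.
        alpha += (lam / m) * (y - preds)
    return alpha

# Usage sketch on toy data (all names here are illustrative).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    w = rng.normal(size=3)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    y = sigmoid(X @ w) + 0.05 * rng.normal(size=50)
    rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))
    alpha = alphatron(X, y, sigmoid, rbf, T=200, lam=0.5)
```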
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that a sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
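To make the kind of purely classical inference concrete: a kernel-based model of this sort fits a regressor on (circuit parameters, measured property) pairs and then predicts properties of new circuits without running them. The sketch below uses generic kernel ridge regression with an RBF kernel over the RZ angles as a stand-in; the paper's actual kernel construction and its error/complexity trade-off are not reproduced here.

```python
import numpy as np

def kernel_ridge_predict(train_angles, train_labels, test_angles, gamma=1.0, lam=1e-3):
    """Generic kernel ridge regression sketch.

    Illustrates classical inference that maps the d tunable RZ angles of a
    circuit to an estimate of a linear property (e.g. an observable's
    expectation).  The RBF kernel is a stand-in, not the paper's kernel.
    """
    def rbf(A, B):
        d2 = (np.sum(A**2, axis=1)[:, None]
              + np.sum(B**2, axis=1)[None, :]
              - 2 * A @ B.T)
        return np.exp(-gamma * d2)

    K = rbf(train_angles, train_angles)
    coef = np.linalg.solve(K + lam * np.eye(len(train_labels)), train_labels)
    return rbf(test_angles, train_angles) @ coef
```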
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Quantum Machine Learning: Quantum Kernel Methods [0.0]
Kernel methods are a powerful and popular technique in classical Machine Learning.
The use of a quantum feature space that can only be calculated efficiently on a quantum computer potentially allows for deriving a quantum advantage.
A data-dependent projected quantum kernel was shown to provide a significant advantage over classical kernels.
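For reference, the projected quantum kernel referred to above (in the spirit of Huang et al.) compares reduced density matrices of the encoded feature states rather than full state overlaps. A minimal numerical sketch, assuming the single-qubit reduced density matrices are already available from simulation or measurement:

```python
import numpy as np

def projected_quantum_kernel(rdms_x, rdms_y, gamma=1.0):
    """Projected quantum kernel sketch:
    k(x, x') = exp(-gamma * sum_k || rho_k(x) - rho_k(x') ||_F^2),
    where rho_k is the single-qubit reduced density matrix of the encoded
    feature state on qubit k.  How the rdms are obtained (classical
    simulation or measurements on hardware) is left open here.
    """
    diff = sum(np.linalg.norm(rx - ry) ** 2 for rx, ry in zip(rdms_x, rdms_y))
    return float(np.exp(-gamma * diff))
```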
arXiv Detail & Related papers (2024-05-02T23:45:29Z) - Quadratic speed-ups in quantum kernelized binary classification [1.3812010983144802]
Several quantum machine learning algorithms that use quantum kernels as a measure of similarities between data have emerged to perform binary classification on datasets encoded as quantum states.
We propose new quantum circuits for the QKCs in which the number of qubits is reduced by one and the circuit depth is reduced linearly in the number of data samples.
We verify the quadratic speed-up over previous methods through numerical simulations on the Iris dataset.
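As background, quantum kernelized binary classifiers of this kind implement a decision rule based on weighted overlaps between the test state and the labeled training states. The snippet below simulates that rule classically, with unit-norm vectors standing in for the encoded quantum states; it mirrors only the decision rule, not the proposed circuits or their qubit and depth reductions.

```python
import numpy as np

def kernelized_binary_classify(train_states, train_labels, test_state):
    """Classical simulation of the decision rule behind swap-test style
    quantum kernel classifiers: predict sign( sum_i y_i |<phi_i|phi>|^2 ).

    train_states: (m, d) complex array of unit-norm feature states.
    train_labels: (m,) array with entries in {-1, +1}.
    """
    overlaps = np.abs(train_states.conj() @ test_state) ** 2
    score = float(np.dot(train_labels, overlaps))
    return 1 if score >= 0 else -1
```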
arXiv Detail & Related papers (2024-03-26T07:39:48Z) - Quantum Machine Learning: from physics to software engineering [58.720142291102135]
We show how classical machine learning approaches can help improve the capabilities of quantum computers.
We discuss how quantum algorithms and quantum computers may be useful for solving classical machine learning tasks.
arXiv Detail & Related papers (2023-01-04T23:37:45Z) - The Quantum Path Kernel: a Generalized Quantum Neural Tangent Kernel for
Deep Quantum Machine Learning [52.77024349608834]
Building a quantum analog of classical deep neural networks represents a fundamental challenge in quantum computing.
A key issue is how to address the inherent non-linearity of classical deep learning.
We introduce the Quantum Path Kernel, a formulation of quantum machine learning capable of replicating key aspects of deep machine learning.
arXiv Detail & Related papers (2022-12-22T16:06:24Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Quantum Annealing Formulation for Binary Neural Networks [40.99969857118534]
In this work, we explore binary neural networks, which are lightweight yet powerful models typically intended for resource-constrained devices.
We devise a quadratic unconstrained binary optimization formulation for the training problem.
While the problem is intractable in general, i.e., the cost of estimating the binary weights scales exponentially with network size, we show how it can be optimized directly on a quantum annealer.
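To illustrate the general shape of such a formulation: squared-loss training of a one-layer model with binary weights can be rewritten as a quadratic unconstrained binary optimization (QUBO), the input format a quantum annealer consumes. The sketch below builds a QUBO of this kind and solves it by brute force in place of an annealer; it is an illustrative formulation under simplifying assumptions, not the paper's exact one.

```python
import itertools
import numpy as np

def build_qubo(X, y):
    """QUBO sketch for fitting binary weights w in {0,1}^n to targets y
    under squared loss sum_s (y_s - x_s . w)^2.  Using w_i^2 = w_i, the
    loss expands to w^T Q w (plus a constant), the form an annealer accepts.
    """
    Q = (X.T @ X).astype(float)                    # quadratic couplings
    Q[np.diag_indices_from(Q)] -= 2 * (X.T @ y)    # fold linear terms onto the diagonal
    return Q

def brute_force_solve(Q):
    """Stand-in for the annealer: enumerate all binary assignments."""
    n = Q.shape[0]
    best_w, best_e = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        w = np.array(bits)
        e = w @ Q @ w
        if e < best_e:
            best_w, best_e = w, e
    return best_w
```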
arXiv Detail & Related papers (2021-07-05T03:20:54Z) - The Hintons in your Neural Network: a Quantum Field Theory View of Deep
Learning [84.33745072274942]
We show how to represent linear and non-linear layers as unitary quantum gates, and interpret the fundamental excitations of the quantum model as particles.
On top of opening a new perspective and techniques for studying neural networks, the quantum formulation is well suited for optical quantum computing.
arXiv Detail & Related papers (2021-03-08T17:24:29Z) - Variational learning for quantum artificial neural networks [0.0]
We first review a series of recent works describing the implementation of artificial neurons and feed-forward neural networks on quantum processors.
We then present an original realization of efficient individual quantum nodes based on variational unsampling protocols.
While keeping full compatibility with the overall memory-efficient feed-forward architecture, our constructions effectively reduce the quantum circuit depth required to determine the activation probability of single neurons.
arXiv Detail & Related papers (2021-03-03T16:10:15Z) - Quantum Machine Learning for Particle Physics using a Variational
Quantum Classifier [0.0]
We propose a novel hybrid variational quantum classifier that combines the quantum gradient descent method with steepest gradient descent to optimise the parameters of the network.
We find that this algorithm has a better learning outcome than a classical neural network or a quantum machine learning method trained with a non-quantum optimisation method.
arXiv Detail & Related papers (2020-10-14T18:05:49Z) - Experimental Quantum Generative Adversarial Networks for Image
Generation [93.06926114985761]
We experimentally achieve the learning and generation of real-world hand-written digit images on a superconducting quantum processor.
Our work provides guidance for developing advanced quantum generative models on near-term quantum devices.
arXiv Detail & Related papers (2020-10-13T06:57:17Z)