Quadratic Quantum Speedup for Perceptron Training
- URL: http://arxiv.org/abs/2109.04695v1
- Date: Fri, 10 Sep 2021 06:50:57 GMT
- Title: Quadratic Quantum Speedup for Perceptron Training
- Authors: Pengcheng Liao, Barry C. Sanders, Tim Byrnes
- Abstract summary: Perceptrons, which perform binary classification, are the fundamental building blocks of neural networks.
We show that the query complexity of constructing an oracle for the perceptron has a quadratic improvement over classical methods.
Our algorithm can be generalized to train more complex machine learning models such as neural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Perceptrons, which perform binary classification, are the fundamental
building blocks of neural networks. Given a data set of size~$N$ and
margin~$\gamma$ (how well the given data are separated), the query complexity
of the best-known quantum training algorithm scales as either
$(\nicefrac{\sqrt{N}}{\gamma^2})\log(\nicefrac{1}{\gamma^2})$ or
$\nicefrac{N}{\sqrt{\gamma}}$, which is achieved by a hybrid of classical and
quantum search. In this paper, we improve the version space quantum training
method for perceptrons such that the query complexity of our algorithm scales
as $\sqrt{\nicefrac{N}{\gamma}}$. This is achieved by constructing an oracle
for the perceptrons using quantum counting of the number of data elements that
are correctly classified. We show that the query complexity to construct such an
oracle has a quadratic improvement over classical methods. Once such an oracle
is constructed, bounded-error quantum search can be used to search over the
hyperplane instances. The optimality of our algorithm is proven by reducing the
evaluation of a two-level AND-OR tree (for which the query complexity lower
bound is known) to a multi-criterion search. Our quantum training algorithm can
be generalized to train more complex machine learning models such as neural
networks, which are built on a large number of perceptrons.
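The bounded-error quantum search mentioned above is Grover-style amplitude amplification, which locates a marked item among $N$ candidates in $O(\sqrt{N})$ oracle calls. As a minimal illustration (not the paper's algorithm; `grover_search` and its structure are assumptions for this sketch), the amplitude dynamics can be simulated classically:

```python
import math

def grover_search(n_items, marked):
    """Classically simulate Grover amplitudes over n_items with one marked
    item; returns (most-likely index, number of oracle iterations used)."""
    amp = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    iters = int(math.pi / 4 * math.sqrt(n_items))  # ~(pi/4)*sqrt(N) rounds
    for _ in range(iters):
        amp[marked] = -amp[marked]               # oracle: phase-flip marked item
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]        # diffusion: invert about the mean
    best = max(range(n_items), key=lambda i: amp[i] ** 2)
    return best, iters

idx, iters = grover_search(1024, marked=137)
# For N = 1024, about 25 iterations suffice, versus ~N/2 classical queries.
```

In the paper's setting, the search space is the set of hyperplane instances and the oracle is built via quantum counting of correctly classified data elements, which is where the $\sqrt{\nicefrac{N}{\gamma}}$ scaling comes from.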
Related papers
- Non-Linear Transformations of Quantum Amplitudes: Exponential
Improvement, Generalization, and Applications [0.0]
Quantum algorithms manipulate the amplitudes of quantum states to find solutions to computational problems.
We present a framework for applying a general class of non-linear functions to the amplitudes of quantum states.
Our work provides an important and efficient building block with potentially numerous applications in areas such as optimization, state preparation, quantum chemistry, and machine learning.
arXiv Detail & Related papers (2023-09-18T14:57:21Z) - Mind the $\tilde{\mathcal{O}}$: Asymptotically Better, but Still
Impractical, Quantum Distributed Algorithms [0.0]
We present two algorithms in the Quantum CONGEST-CLIQUE model of distributed computation that succeed with high probability.
The algorithms achieve a lower round and message complexity than any known algorithms in the classical CONGEST-CLIQUE model.
An existing framework that uses a distributed version of Grover's search algorithm to accelerate triangle finding lies at the core of the speedup.
arXiv Detail & Related papers (2023-04-06T02:18:52Z) - Qubit-Efficient Randomized Quantum Algorithms for Linear Algebra [3.4137115855910767]
We propose a class of randomized quantum algorithms for the task of sampling from matrix functions.
Our use of qubits is purely algorithmic, and no additional qubits are required for quantum data structures.
arXiv Detail & Related papers (2023-02-03T17:22:49Z) - Quantum Clustering with k-Means: a Hybrid Approach [117.4705494502186]
We design, implement, and evaluate three hybrid quantum k-Means algorithms.
We exploit quantum phenomena to speed up the computation of distances.
We show that our hybrid quantum k-Means algorithms can be more efficient than the classical version.
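In such hybrid schemes the classical k-means loop is kept and only the distance computation is delegated to a quantum subroutine. A minimal sketch of that split, with a pluggable `distance` callable standing in for the quantum distance estimator (function names and structure here are assumptions for illustration):

```python
import math
import random

def kmeans(points, k, distance, iters=20, seed=0):
    """Plain Lloyd's k-means with a pluggable distance function; a hybrid
    variant would replace `distance` with quantum distance estimation."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                          # assignment step
            j = min(range(k), key=lambda c: distance(p, centers[c]))
            clusters[j].append(p)
        centers = [                               # update step (keep empty clusters)
            tuple(sum(x) / len(cl) for x in zip(*cl)) if cl else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers

euclidean = lambda a, b: math.dist(a, b)
pts = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
centers = kmeans(pts, 2, euclidean)
```

Since distance evaluations dominate the cost of each assignment step, speeding up that single callable is what makes the hybrid approach pay off.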
arXiv Detail & Related papers (2022-12-13T16:04:16Z) - Quantum Worst-Case to Average-Case Reductions for All Linear Problems [66.65497337069792]
We study the problem of designing worst-case to average-case reductions for quantum algorithms.
We provide an explicit and efficient transformation of quantum algorithms that are only correct on a small fraction of their inputs into ones that are correct on all inputs.
arXiv Detail & Related papers (2022-12-06T22:01:49Z) - Quantum Depth in the Random Oracle Model [57.663890114335736]
We give a comprehensive characterization of the computational power of shallow quantum circuits combined with classical computation.
For some problems, the ability to perform adaptive measurements in a single shallow quantum circuit is more useful than the ability to perform many shallow quantum circuits without adaptive measurements.
arXiv Detail & Related papers (2022-10-12T17:54:02Z) - Quantum Sparse Coding [5.130440339897477]
We develop a quantum-inspired algorithm for sparse coding.
The emergence of quantum computers and Ising machines can potentially lead to more accurate estimations.
We conduct numerical experiments with simulated data on Lightr's quantum-inspired digital platform.
arXiv Detail & Related papers (2022-09-08T13:00:30Z) - A single $T$-gate makes distribution learning hard [56.045224655472865]
This work provides an extensive characterization of the learnability of the output distributions of local quantum circuits.
We show that for a wide variety of the most practically relevant learning algorithms -- including hybrid quantum-classical algorithms -- even the generative modelling problem associated with depth $d=\omega(\log(n))$ Clifford circuits is hard.
arXiv Detail & Related papers (2022-07-07T08:04:15Z) - Entanglement and coherence in Bernstein-Vazirani algorithm [58.720142291102135]
The Bernstein-Vazirani algorithm allows one to determine a bit string encoded into an oracle.
We analyze in detail the quantum resources in the Bernstein-Vazirani algorithm.
We show that in the absence of entanglement, the performance of the algorithm is directly related to the amount of quantum coherence in the initial state.
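The query gap the algorithm exploits is easy to state: classically, recovering an $n$-bit secret $s$ from the oracle $f(x) = s \cdot x \bmod 2$ requires $n$ queries (one per basis vector), while the Bernstein-Vazirani algorithm needs a single quantum query. A small sketch of the classical side (helper names here are illustrative, not from the paper):

```python
def make_oracle(secret):
    """Oracle f(x) = s.x mod 2 for a hidden bit string s."""
    return lambda x: sum(si * xi for si, xi in zip(secret, x)) % 2

def recover_classically(f, n):
    """Probe each basis vector e_i: n queries in total. The quantum
    algorithm recovers s with a single query to a phase oracle."""
    return [f([1 if j == i else 0 for j in range(n)]) for i in range(n)]

s = [1, 0, 1, 1, 0]
assert recover_classically(make_oracle(s), len(s)) == s  # n = 5 queries
```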
arXiv Detail & Related papers (2022-05-26T20:32:36Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Quantum algorithms for learning a hidden graph and beyond [0.05076419064097732]
We study the problem of learning an unknown graph provided via an oracle using a quantum algorithm.
We give quantum algorithms that achieve speedups over the best possible classical algorithms in the OR and parity query models.
We additionally give a time-efficient quantum algorithm for this problem, based on the algorithm of Ambainis et al. for a "gapped" version of the group testing problem.
arXiv Detail & Related papers (2020-11-17T13:12:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.