Learning To Optimize Quantum Neural Network Without Gradients
- URL: http://arxiv.org/abs/2304.07442v1
- Date: Sat, 15 Apr 2023 01:09:12 GMT
- Title: Learning To Optimize Quantum Neural Network Without Gradients
- Authors: Ankit Kulshrestha, Xiaoyuan Liu, Hayato Ushijima-Mwesigwa, Ilya Safro
- Abstract summary: We introduce a novel meta-optimization algorithm that trains a \emph{meta-optimizer} network to output parameters for the quantum circuit.
We show that it reaches better-quality minima in fewer circuit evaluations than existing gradient-based algorithms on different datasets.
- Score: 3.9848482919377006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantum Machine Learning is an emerging sub-field in machine learning where
one of the goals is to perform pattern recognition tasks by encoding data into
quantum states. This extension from classical to quantum domain has been made
possible due to the development of hybrid quantum-classical algorithms that
allow a parameterized quantum circuit to be optimized using gradient based
algorithms that run on a classical computer. The similarities in training of
these hybrid algorithms and classical neural networks have further led to the
development of Quantum Neural Networks (QNNs). However, in the current training
regime for QNNs, the gradients w.r.t. the objective function have to be computed
on the quantum device. This computation is highly non-scalable and is affected by
hardware and sampling noise present in the current generation of quantum
hardware. In this paper, we propose a training algorithm that does not rely on
gradient information. Specifically, we introduce a novel meta-optimization
algorithm that trains a \emph{meta-optimizer} network to output parameters for
the quantum circuit such that the objective function is minimized. We
empirically and theoretically show that we achieve better-quality minima in
fewer circuit evaluations than existing gradient-based algorithms on different
datasets.
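The abstract's core idea can be sketched as a toy, purely classical experiment (hypothetical; not the paper's implementation): a small "meta-optimizer" network maps input features to circuit parameters, and its weights are tuned with a gradient-free evolutionary strategy against a simulated circuit cost, so no gradients of the circuit itself are ever computed. The trigonometric cost landscape and the network shape below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def circuit_cost(theta):
    # Classical stand-in for a parameterized quantum circuit's measured
    # cost: a smooth trigonometric landscape with minimum 0 at theta = 0.
    return float(np.sum(1.0 - np.cos(theta)))

def meta_optimizer(weights, x, n_theta=3):
    # Tiny one-layer "meta-optimizer": input features x -> circuit parameters.
    W = weights.reshape(x.size, n_theta)
    return np.tanh(x @ W) * np.pi

def objective(weights, x):
    return circuit_cost(meta_optimizer(weights, x))

x = rng.normal(size=4)        # toy input features
w = rng.normal(size=4 * 3)    # meta-optimizer weights
initial = objective(w, x)

# (1+lambda) evolutionary strategy: mutate the weights, keep the best
# mutant, using only objective values -- no circuit gradients needed.
for _ in range(300):
    candidates = w + 0.1 * rng.normal(size=(16, w.size))
    scores = [objective(c, x) for c in candidates]
    i = int(np.argmin(scores))
    if scores[i] < objective(w, x):
        w = candidates[i]

final = objective(w, x)
assert final <= initial  # training only ever accepts improvements
```

The gradient-free update loop is the point of contrast with standard QNN training, where each gradient component requires additional circuit evaluations on quantum hardware; here every query is a single (simulated) cost evaluation.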
Related papers
- Efficient Learning for Linear Properties of Bounded-Gate Quantum Circuits [63.733312560668274]
Given a quantum circuit containing d tunable RZ gates and G-d Clifford gates, can a learner perform purely classical inference to efficiently predict its linear properties?
We prove that the sample complexity scaling linearly in d is necessary and sufficient to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d.
We devise a kernel-based learning model capable of trading off prediction error and computational complexity, transitioning from exponential to polynomial scaling in many practical settings.
arXiv Detail & Related papers (2024-08-22T08:21:28Z) - Quantum Subroutine for Variance Estimation: Algorithmic Design and Applications [80.04533958880862]
Quantum computing sets the foundation for new ways of designing algorithms.
New challenges arise concerning the fields in which quantum speedup can be achieved.
The search for quantum subroutines that are more efficient than their classical counterparts lays solid foundations for new, powerful quantum algorithms.
arXiv Detail & Related papers (2024-02-26T09:32:07Z) - A Quantum-Classical Collaborative Training Architecture Based on Quantum
State Fidelity [50.387179833629254]
We introduce a collaborative classical-quantum architecture called co-TenQu.
Co-TenQu enhances a classical deep neural network by up to 41.72% in a fair setting.
It outperforms other quantum-based methods by up to 1.9 times and achieves similar accuracy while utilizing 70.59% fewer qubits.
arXiv Detail & Related papers (2024-02-23T14:09:41Z) - Stabilization and Dissipative Information Transfer of a Superconducting
Kerr-Cat Qubit [0.0]
We study the dissipative information transfer to a qubit model called Cat-Qubit.
This model is especially important for the dissipative-based version of the binary quantum classification.
Cat-Qubit architecture has the potential to easily implement activation-like functions in artificial neural networks.
arXiv Detail & Related papers (2023-07-23T11:28:52Z) - Variational Quantum Neural Networks (VQNNS) in Image Classification [0.0]
This paper investigates how the training of quantum neural networks (QNNs) can be done using quantum optimization algorithms.
A QNN structure is built in which a variational parameterized circuit is incorporated as an input layer, named the Variational Quantum Neural Network (VQNN).
VQNNs are evaluated on MNIST digit recognition (less complex) and crack-image classification datasets, converging in less time than a plain QNN while retaining decent training accuracy.
arXiv Detail & Related papers (2023-03-10T11:24:32Z) - QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional
Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learn the local message passing among nodes with a sequence of crossing-gate quantum operations.
To mitigate the inherent noise of modern quantum devices, we apply a sparsity constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even better than, classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z) - Accelerating the training of single-layer binary neural networks using
the HHL quantum algorithm [58.720142291102135]
We show that useful information can be extracted from the quantum-mechanical implementation of the Harrow-Hassidim-Lloyd (HHL) algorithm and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z) - New quantum neural network designs [0.0]
We investigate the performance of new quantum neural network designs.
We develop a new technique, where we merge feature map and variational circuit into a single parameterized circuit.
We achieve lower loss, better accuracy, and faster convergence.
arXiv Detail & Related papers (2022-03-12T10:20:14Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Quantum Machine Learning for Particle Physics using a Variational
Quantum Classifier [0.0]
We propose a novel hybrid variational quantum classifier that combines the quantum gradient descent method with steepest gradient descent to optimise the parameters of the network.
We find that this algorithm has a better learning outcome than a classical neural network or a quantum machine learning method trained with a non-quantum optimisation method.
arXiv Detail & Related papers (2020-10-14T18:05:49Z) - QEML (Quantum Enhanced Machine Learning): Using Quantum Computing to
Enhance ML Classifiers and Feature Spaces [0.49841205356595936]
Machine learning and quantum computing are causing a paradigm shift in the performance and behavior of certain algorithms.
This paper first develops the mathematical intuition behind the implementation of quantum feature spaces.
We build a noisy variational quantum-circuit KNN that mimics the classification methods of a traditional KNN.
arXiv Detail & Related papers (2020-02-22T04:14:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.