Training Classical Neural Networks by Quantum Machine Learning
- URL: http://arxiv.org/abs/2402.16465v1
- Date: Mon, 26 Feb 2024 10:16:21 GMT
- Title: Training Classical Neural Networks by Quantum Machine Learning
- Authors: Chen-Yu Liu, En-Jui Kuo, Chu-Hsuan Abraham Lin, Sean Chen, Jason
Gemsun Young, Yeong-Jar Chang, Min-Hsiu Hsieh
- Abstract summary: This work proposes a training scheme for classical neural networks (NNs) that utilizes the exponentially large Hilbert space of a quantum system.
Unlike existing quantum machine learning (QML) methods, the results obtained from quantum computers using our approach can be directly used on classical computers.
- Score: 9.002305736350833
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, advanced deep neural networks have required a large number
of parameters for training. Therefore, finding a method to reduce the number of
parameters has become crucial for achieving efficient training. This work
proposes a training scheme for classical neural networks (NNs) that utilizes
the exponentially large Hilbert space of a quantum system. By mapping a
classical NN with $M$ parameters to a quantum neural network (QNN) with
$O(\text{polylog} (M))$ rotational gate angles, we can significantly reduce the
number of parameters. These gate angles can be updated to train the classical
NN. Unlike existing quantum machine learning (QML) methods, the results
obtained from quantum computers using our approach can be directly used on
classical computers. Numerical results on the MNIST and Iris datasets are
presented to demonstrate the effectiveness of our approach. Additionally, we
investigate the effects of deeper QNNs and the number of measurement shots for
the QNN, followed by the theoretical perspective of the proposed method. This
work opens a new branch of QML and offers a practical tool that can greatly
enhance the influence of QML, as the trained QML results can benefit classical
computing in our daily lives.
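To make the parameter-mapping idea concrete, here is a minimal, self-contained Python sketch: an $n$-qubit ansatz (one RY rotation per qubit followed by a CNOT chain) is simulated as a plain statevector, its first $M$ amplitudes are rescaled into the weights of a classical model, and only the $n = O(\log M)$ gate angles are trained. The ansatz, the affine amplitude-to-weight map, and the finite-difference update are illustrative assumptions, not the authors' construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit RY rotation."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def apply(gate, q, n, state):
    """Apply a gate acting on qubits q..q+k-1 of an n-qubit statevector."""
    k = gate.shape[0].bit_length() - 1
    full = np.kron(np.kron(np.eye(2 ** q), gate), np.eye(2 ** (n - q - k)))
    return full @ state

def qnn_amplitudes(angles, n):
    """Statevector of a hardware-efficient ansatz: RY layer + CNOT chain."""
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for q in range(n):
        state = apply(ry(angles[q]), q, n, state)
    for q in range(n - 1):
        state = apply(CNOT, q, n, state)
    return state

def nn_weights(angles, n, m, scale=4.0):
    """Assumed affine map from the first m amplitudes to classical weights."""
    return scale * qnn_amplitudes(angles, n)[:m]

# Toy regression task: m = 10 classical weights controlled by n = 4 angles.
n, m = 4, 10
X = rng.normal(size=(64, m))
y = np.tanh(X @ rng.normal(size=m))

def loss(angles):
    return np.mean((np.tanh(X @ nn_weights(angles, n, m)) - y) ** 2)

angles = rng.uniform(0.0, np.pi, size=n)
lr, eps = 0.5, 1e-4
for _ in range(200):  # finite-difference gradient descent on the angles
    grad = np.zeros(n)
    for i in range(n):
        d = np.zeros(n)
        d[i] = eps
        grad[i] = (loss(angles + d) - loss(angles - d)) / (2 * eps)
    angles -= lr * grad
print("final loss:", loss(angles))
```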
Related papers
- Bridging Classical and Quantum Machine Learning: Knowledge Transfer From
Classical to Quantum Neural Networks Using Knowledge Distillation [0.0]
This paper introduces a new method to transfer knowledge from classical to quantum neural networks using knowledge distillation.
We adapt classical convolutional neural network (CNN) architectures like LeNet and AlexNet to serve as teacher networks.
Quantum models achieve an average accuracy improvement of 0.80% on the MNIST dataset and 5.40% on the more complex Fashion MNIST dataset.
arXiv Detail & Related papers (2023-11-23T05:06:43Z)
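The distillation objective used in such classical-to-quantum transfer is standard; below is a minimal NumPy sketch, where the temperature T and mixing weight alpha are generic hyperparameters rather than values taken from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """alpha * hard-label cross-entropy + (1 - alpha) * T^2 * KL(teacher || student)."""
    p_hard = softmax(student_logits)
    ce = -np.mean(np.log(p_hard[np.arange(len(labels)), labels] + 1e-12))
    p_t = softmax(teacher_logits, T)    # softened teacher distribution
    p_s = softmax(student_logits, T)    # softened student distribution
    kl = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1))
    return alpha * ce + (1 - alpha) * (T ** 2) * kl

logits_t = np.array([[2.0, 0.5, -1.0]])  # teacher (e.g. LeNet) logits
logits_s = np.array([[1.0, 0.2, -0.5]])  # quantum student logits
print(distillation_loss(logits_s, logits_t, np.array([0])))
```
- ShadowNet for Data-Centric Quantum System Learning [188.683909185536]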
We propose a data-centric learning paradigm combining the strength of neural-network protocols and classical shadows.
Capitalizing on the generalization power of neural networks, this paradigm can be trained offline and excels at predicting previously unseen systems.
We present the instantiation of our paradigm in quantum state tomography and direct fidelity estimation tasks and conduct numerical analysis up to 60 qubits.
arXiv Detail & Related papers (2023-08-22T09:11:53Z)
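The classical-shadow ingredient of this paradigm can be illustrated on a single qubit: measure in a random Pauli basis, invert the measurement channel snapshot by snapshot, and average; a network in the ShadowNet spirit would be trained on such snapshots. The state and observable below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
I2 = np.eye(2)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
# Unitaries rotating X/Y/Z measurements into the computational basis.
BASIS = {"X": H, "Y": H @ S.conj().T, "Z": I2}

psi = np.array([np.cos(0.3), np.sin(0.3) * np.exp(0.5j)])  # example state
rho = np.outer(psi, psi.conj())

snapshots = []
for _ in range(5000):
    U = BASIS[rng.choice(list(BASIS))]          # random Pauli basis
    probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0, None)
    b = rng.choice(2, p=probs / probs.sum())    # simulated measurement
    ket = np.zeros(2)
    ket[b] = 1.0
    # Inverse of the measurement channel: 3 U^dag |b><b| U - I.
    snapshots.append(3 * U.conj().T @ np.outer(ket, ket) @ U - I2)

shadow = np.mean(snapshots, axis=0)
print("<Z> exact: ", np.real(np.trace(rho @ Z)))
print("<Z> shadow:", np.real(np.trace(shadow @ Z)))
```
- Variational Quantum Neural Networks (VQNNS) in Image Classification [0.0]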
This paper investigates how quantum neural networks (QNNs) can be trained using quantum optimization algorithms.
A QNN structure is constructed in which a variational parameterized circuit is incorporated as an input layer, termed a Variational Quantum Neural Network (VQNN).
VQNNs are evaluated on MNIST digit recognition (less complex) and crack image classification datasets, and converge in less time than a plain QNN while achieving decent training accuracy.
arXiv Detail & Related papers (2023-03-10T11:24:32Z)
- Quantum Ridgelet Transform: Winning Lottery Ticket of Neural Networks with Quantum Computation [8.947825738917869]
Ridgelet transform has been a fundamental mathematical tool in the theoretical studies of neural networks.
We develop a quantum ridgelet transform (QRT), which implements the ridgelet transform of a quantum state within a linear runtime $O(D)$ of quantum computation, in contrast to the $\exp(O(D))$ runtime of its classical counterpart.
As an application, we show that one can use QRT as a fundamental subroutine for QML to efficiently find a sparse trainable subnetwork of large shallow wide neural networks.
arXiv Detail & Related papers (2023-01-27T19:00:00Z)
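For orientation, in one common convention the ridgelet transform of a function $f:\mathbb{R}^D \to \mathbb{R}$ with respect to a ridgelet function $\psi$ is

$$ (\mathcal{R}f)(v,b) = \int_{\mathbb{R}^D} f(x)\,\overline{\psi(v \cdot x - b)}\,\mathrm{d}x, \qquad (v,b) \in \mathbb{R}^D \times \mathbb{R}, $$

and QRT implements the analogous transform of a quantum state; normalizations vary across papers, so this display is indicative rather than the paper's exact definition.

- Towards Neural Variational Monte Carlo That Scales Linearly with System Size [67.09349921751341]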
Quantum many-body problems are central to demystifying some exotic quantum phenomena, e.g., high-temperature superconductors.
The combination of neural networks (NNs) for representing quantum states and the Variational Monte Carlo (VMC) algorithm has been shown to be a promising method for solving such problems.
We propose a NN architecture called Vector-Quantized Neural Quantum States (VQ-NQS) that utilizes vector-quantization techniques to leverage redundancies in the local-energy calculations of the VMC algorithm.
arXiv Detail & Related papers (2022-12-21T19:00:04Z)
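The local-energy estimator at the heart of VMC, whose redundancy VQ-NQS exploits, fits in a few lines. In this sketch the "wavefunction" is a dense table over a 3-spin transverse-field Ising ring rather than a neural network, and exact sampling replaces Metropolis, purely to keep the demo self-contained.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n, h = 3, 1.0
states = list(itertools.product([1, -1], repeat=n))
# Toy (unnormalized) positive wavefunction; a NN would supply this in VMC.
amp = {s: np.exp(0.3 * sum(s[i] * s[(i + 1) % n] for i in range(n)))
       for s in states}

def local_energy(s):
    """E_loc(s) = sum_s' H_{s,s'} psi(s') / psi(s) for H = -ZZ - h X."""
    e = -sum(s[i] * s[(i + 1) % n] for i in range(n))   # diagonal ZZ part
    for i in range(n):                                  # off-diagonal flips
        s2 = list(s)
        s2[i] = -s2[i]
        e += -h * amp[tuple(s2)] / amp[s]
    return e

probs = np.array([amp[s] ** 2 for s in states])
probs /= probs.sum()
idx = rng.choice(len(states), size=20000, p=probs)      # sample |psi|^2
print("VMC energy estimate:", np.mean([local_energy(states[i]) for i in idx]))
```
- Quantum-inspired Complex Convolutional Neural Networks [17.65730040410185]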
We improve the quantum-inspired neurons by exploiting complex-valued weights, which have richer representational capacity and better non-linearity.
We design quantum-inspired convolutional neural networks (QICNNs) capable of processing high-dimensional data.
The classification accuracies of the five QICNNs are tested on the MNIST and CIFAR-10 datasets.
arXiv Detail & Related papers (2021-10-31T03:10:48Z)
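A single complex-valued convolution step of the kind such networks build on looks like this; the modulus nonlinearity is one common design choice, not necessarily the paper's exact one.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))  # complex feature map
w = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # complex kernel

out = np.zeros((6, 6), dtype=complex)
for i in range(6):
    for j in range(6):
        # Valid complex cross-correlation (convolution up to kernel flip).
        out[i, j] = np.sum(x[i:i + 3, j:j + 3] * w)
act = np.abs(out)  # modulus nonlinearity maps back to real features
print(act.shape)
```
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]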
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
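The classical side of that correspondence is easy to sketch: in the infinite-width limit, training the network by gradient descent is equivalent to kernel regression with the neural tangent kernel, i.e. to solving the linear system $K\alpha = y$, which is the part a quantum linear-system solver could accelerate when $K$ is well conditioned. Below is a classical sketch using the one-hidden-layer ReLU NTK (the standard arc-cosine-kernel formula, up to normalization conventions).

```python
import numpy as np

def ntk_relu(X):
    """NTK Gram matrix of a one-hidden-layer ReLU network (no bias)."""
    G = X @ X.T
    norms = np.sqrt(np.diag(G))
    cos = np.clip(G / np.outer(norms, norms), -1.0, 1.0)
    theta = np.arccos(cos)
    k1 = (np.sin(theta) + (np.pi - theta) * cos) / np.pi  # arc-cosine kernel
    k0 = (np.pi - theta) / np.pi                          # its derivative part
    return np.outer(norms, norms) * k1 + G * k0

rng = np.random.default_rng(4)
X = rng.normal(size=(40, 5))
y = np.sign(X[:, 0])
K = ntk_relu(X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)  # the linear system
print("train fit:", np.mean(np.sign(K @ alpha) == y))
```
- The dilemma of quantum neural networks [63.82713636522488]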
We show that quantum neural networks (QNNs) fail to provide any benefit over classical learning models.
QNNs suffer from severely limited effective model capacity, which incurs poor generalization on real-world datasets.
These results force us to rethink the role of current QNNs and to design novel protocols for solving real-world problems with quantum advantages.
arXiv Detail & Related papers (2021-06-09T10:41:47Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because the gradients vanish at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
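The exponential vanishing is easy to reproduce in a toy setting: for a product ansatz of one RY per qubit and the global cost $\langle Z^{\otimes n}\rangle = \prod_i \cos\theta_i$, the variance of the gradient over uniformly random angles is exactly $2^{-n}$. This toy ansatz is our own choice for illustration, not the paper's tree-tensor construction.

```python
import numpy as np

rng = np.random.default_rng(6)
for n in range(2, 11, 2):
    thetas = rng.uniform(0, 2 * np.pi, size=(20000, n))
    # d/d(theta_1) of prod_i cos(theta_i):
    grad = -np.sin(thetas[:, 0]) * np.prod(np.cos(thetas[:, 1:]), axis=1)
    print(n, "Var[grad] ~", grad.var(), " (2^-n =", 2.0 ** -n, ")")
```
- On the learnability of quantum neural networks [132.1981461292324]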
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even with gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
- Recurrent Quantum Neural Networks [7.6146285961466]
Recurrent neural networks are the foundation of many sequence-to-sequence models in machine learning.
We construct a quantum recurrent neural network (QRNN) with demonstrable performance on non-trivial tasks.
We evaluate the QRNN on MNIST classification, both by feeding the QRNN each image pixel by pixel and by utilising modern data augmentation as a preprocessing step.
arXiv Detail & Related papers (2020-06-25T17:59:44Z)
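The pixel-by-pixel evaluation protocol is simple to show; in this sketch a plain classical tanh RNN cell stands in for the quantum cell, since only the data flow (a 28x28 image unrolled into a length-784 scalar sequence) is the point.

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.random((28, 28))                   # stand-in for one MNIST digit
d = 16                                       # hidden-state size (arbitrary)
Wh = 0.1 * rng.normal(size=(d, d))
Wx = rng.normal(size=(d, 1))
b = np.zeros((d, 1))

h = np.zeros((d, 1))
for x in img.ravel():                        # one recurrent update per pixel
    h = np.tanh(Wh @ h + Wx * x + b)
print("final hidden state shape:", h.shape)  # would feed a classifier head
```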
This list is automatically generated from the titles and abstracts of the papers on this site.