On-chip QNN: Towards Efficient On-Chip Training of Quantum Neural
Networks
- URL: http://arxiv.org/abs/2202.13239v1
- Date: Sat, 26 Feb 2022 22:27:36 GMT
- Title: On-chip QNN: Towards Efficient On-Chip Training of Quantum Neural
Networks
- Authors: Hanrui Wang and Zirui Li and Jiaqi Gu and Yongshan Ding and David Z.
Pan and Song Han
- Abstract summary: We present On-chip QNN, the first experimental demonstration of practical on-chip QNN training with parameter shift.
We propose probabilistic gradient pruning to first identify gradients with potentially large errors and then remove them.
The results demonstrate that our on-chip training achieves over 90% and 60% accuracy for 2-class and 4-class image classification tasks, respectively.
- Score: 21.833693982056896
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum Neural Network (QNN) is drawing increasing research interest thanks
to its potential to achieve quantum advantage on near-term Noisy Intermediate
Scale Quantum (NISQ) hardware. In order to achieve scalable QNN learning, the
training process needs to be offloaded to real quantum machines instead of
using exponential-cost classical simulators. One common approach to obtain QNN
gradients is parameter shift whose cost scales linearly with the number of
qubits. We present On-chip QNN, the first experimental demonstration of
practical on-chip QNN training with parameter shift. Nevertheless, we find that
due to the significant quantum errors (noise) on real machines, gradients
obtained from naive parameter shift have low fidelity and thus degrade the
training accuracy. To this end, we further propose probabilistic gradient
pruning to first identify gradients with potentially large errors and then
remove them. Specifically, small gradients have larger relative errors than
large ones and thus have a higher probability of being pruned. We perform extensive
experiments on 5 classification tasks with 5 real quantum machines. The results
demonstrate that our on-chip training achieves over 90% and 60% accuracy for
2-class and 4-class image classification tasks, respectively. Probabilistic gradient
pruning brings up to a 7% accuracy improvement over no pruning. Overall, we
obtain on-chip training accuracy similar to that of noise-free
simulation, with much better training scalability. The code for parameter
shift on-chip training is available in the TorchQuantum library.
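
As an illustration of the parameter-shift gradient estimation described in the abstract, below is a minimal NumPy sketch. The `expectation_fn` callable is a hypothetical placeholder for circuit execution on hardware or a simulator; it is not part of the TorchQuantum API referenced above.

```python
import numpy as np

def parameter_shift_gradients(expectation_fn, params, shift=np.pi / 2):
    """Estimate the gradient of a circuit expectation value with respect to
    each trainable parameter using the parameter-shift rule.

    expectation_fn: hypothetical callable that runs the QNN circuit with the
    given parameter vector (on a real device or simulator) and returns the
    measured expectation value as a float.
    """
    params = np.asarray(params, dtype=float)
    grads = np.zeros_like(params)
    for i in range(params.size):
        shifted = params.copy()
        shifted[i] += shift
        e_plus = expectation_fn(shifted)    # circuit run with theta_i + pi/2
        shifted[i] -= 2.0 * shift
        e_minus = expectation_fn(shifted)   # circuit run with theta_i - pi/2
        # Exact gradient for gates generated by Pauli operators.
        grads[i] = 0.5 * (e_plus - e_minus)
    return grads
```

Each parameter costs two circuit executions, which is the source of the linear cost scaling noted in the abstract.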
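
The probabilistic gradient pruning idea, i.e. pruning small-magnitude gradients with higher probability because their relative measurement error is larger, could be sketched as follows. This is only an illustration of the core idea under assumed choices (the `keep_scale` knob and the magnitude-proportional keep probability), not the paper's exact procedure.

```python
import numpy as np

def probabilistic_gradient_prune(grads, keep_scale=1.0, rng=None):
    """Randomly zero out gradient entries, pruning small-magnitude entries
    with higher probability.

    keep_scale and the magnitude-proportional keep probability below are
    illustrative assumptions, not parameters taken from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    grads = np.asarray(grads, dtype=float)
    mags = np.abs(grads)
    if mags.sum() == 0.0:
        return grads
    # Keep probability grows with relative magnitude, capped at 1.
    keep_prob = np.minimum(1.0, keep_scale * mags / mags.mean())
    mask = rng.random(grads.shape) < keep_prob
    return grads * mask
```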
Related papers
- Trade-off between Gradient Measurement Efficiency and Expressivity in Deep Quantum Neural Networks [0.0]
Quantum neural networks (QNNs) require an efficient training algorithm to achieve practical quantum advantages.
General QNNs lack an efficient gradient measurement algorithm, which poses a fundamental and practical challenge to realizing scalable QNNs.
We propose a general QNN ansatz called the stabilizer-logical product ansatz (SLPA), which can reach the upper limit of the trade-off inequality.
arXiv Detail & Related papers (2024-06-26T12:59:37Z) - Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) with a hope to utilize quantum advantage to speed up IL.
We develop two QIL algorithms, quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL).
Experimental results demonstrate that both Q-BC and Q-GAIL achieve performance comparable to their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z) - Toward Trainability of Deep Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) with random structures have poor trainability due to the exponentially vanishing gradient as the circuit depth and the qubit number increase.
We provide the first viable solution to the vanishing gradient problem for deep QNNs with theoretical guarantees.
arXiv Detail & Related papers (2021-12-30T10:27:08Z) - Can Noise on Qubits Be Learned in Quantum Neural Network? A Case Study
on QuantumFlow [25.408147000243158]
This paper aims to tackle the noise issue from another angle.
Instead of creating perfect qubits for general quantum algorithms, we investigate the potential to mitigate the noise issue for dedicated algorithms.
This paper targets quantum neural networks (QNNs) and proposes to learn the errors during the training phase, so that the identified QNN model is resilient to noise.
arXiv Detail & Related papers (2021-09-08T04:43:12Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because the gradient vanishes at a rate exponential in the number of input qubits.
We study QNNs with tree tensor and step controlled structures for the application of binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z) - A Statistical Framework for Low-bitwidth Training of Deep Neural
Networks [70.77754244060384]
Fully quantized training (FQT) uses low-bitwidth hardware by quantizing the activations, weights, and gradients of a neural network model.
One major challenge with FQT is the lack of theoretical understanding, in particular of how gradient quantization impacts convergence properties.
arXiv Detail & Related papers (2020-10-27T13:57:33Z) - On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z) - Trainability of Dissipative Perceptron-Based Quantum Neural Networks [0.8258451067861933]
We analyze the gradient scaling (and hence the trainability) for a recently proposed architecture that we call dissipative QNNs (DQNNs).
We find that DQNNs can exhibit barren plateaus, i.e., gradients that vanish exponentially in the number of qubits.
arXiv Detail & Related papers (2020-05-26T00:59:09Z)