On-chip QNN: Towards Efficient On-Chip Training of Quantum Neural
Networks
- URL: http://arxiv.org/abs/2202.13239v1
- Date: Sat, 26 Feb 2022 22:27:36 GMT
- Title: On-chip QNN: Towards Efficient On-Chip Training of Quantum Neural
Networks
- Authors: Hanrui Wang and Zirui Li and Jiaqi Gu and Yongshan Ding and David Z.
Pan and Song Han
- Abstract summary: We present On-chip QNN, the first experimental demonstration of practical on-chip QNN training with parameter shift.
We propose probabilistic gradient pruning, which first identifies gradients with potentially large errors and then removes them.
The results demonstrate that our on-chip training achieves over 90% and 60% accuracy for 2-class and 4-class image classification tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantum Neural Network (QNN) is drawing increasing research interest thanks
to its potential to achieve quantum advantage on near-term Noisy Intermediate
Scale Quantum (NISQ) hardware. In order to achieve scalable QNN learning, the
training process needs to be offloaded to real quantum machines instead of
using exponential-cost classical simulators. One common approach to obtaining QNN
gradients is the parameter-shift rule, whose cost scales linearly with the number of
qubits. We present On-chip QNN, the first experimental demonstration of
practical on-chip QNN training with parameter shift. However, we find that
due to the significant quantum errors (noises) on real machines, gradients
obtained from naive parameter shift have low fidelity and thus degrade the
training accuracy. To this end, we further propose probabilistic gradient
pruning, which first identifies gradients with potentially large errors and then
removes them. Specifically, small gradients have larger relative errors than
large ones and are therefore pruned with higher probability. We perform extensive
experiments on 5 classification tasks with 5 real quantum machines. The results
demonstrate that our on-chip training achieves over 90% and 60% accuracy for
2-class and 4-class image classification tasks. The probabilistic gradient
pruning brings up to a 7% QNN accuracy improvement over no pruning. Overall, we
successfully obtain similar on-chip training accuracy compared with noise-free
simulation but have much better training scalability. The code for parameter
shift on-chip training is available in the TorchQuantum library.
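As a rough illustration of the two ideas in the abstract, the sketch below implements the parameter-shift rule and a simplified probabilistic gradient-pruning step in plain NumPy on a toy cosine-valued objective. The actual training runs on quantum hardware via TorchQuantum; the pruning criterion here is a hypothetical stand-in for the paper's exact scheme.

```python
import numpy as np

def expectation(thetas, rng=None, noise=0.0):
    # Toy "circuit": independent RY(theta_i) rotations on |0>, each measured
    # in Z, so <Z_i> = cos(theta_i); the loss is their mean. Optional Gaussian
    # noise mimics hardware shot/readout error.
    vals = np.cos(thetas)
    if noise > 0.0 and rng is not None:
        vals = vals + rng.normal(0.0, noise, size=vals.shape)
    return float(np.mean(vals))

def parameter_shift_grad(thetas, noise=0.0, rng=None):
    # Parameter-shift rule: df/dtheta_k = [f(theta_k + pi/2) - f(theta_k - pi/2)] / 2.
    # Two extra evaluations per parameter, so the cost scales linearly.
    grad = np.zeros_like(thetas)
    for k in range(len(thetas)):
        plus, minus = thetas.copy(), thetas.copy()
        plus[k] += np.pi / 2
        minus[k] -= np.pi / 2
        grad[k] = 0.5 * (expectation(plus, rng, noise) - expectation(minus, rng, noise))
    return grad

def probabilistic_gradient_prune(grad, rng, ratio=0.5):
    # Hypothetical pruning criterion: keep each component with probability
    # that grows with its relative magnitude, so small (error-dominated)
    # gradients are more likely to be zeroed out.
    mags = np.abs(grad)
    if mags.max() == 0.0:
        return grad
    keep_prob = np.minimum(1.0, mags / mags.max() + (1.0 - ratio))
    mask = rng.random(grad.shape) < keep_prob
    return grad * mask
```

Because each parameter needs only two shifted evaluations, the total gradient cost grows linearly with the parameter count, which is what makes offloading training to real quantum machines feasible.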
Related papers
- Extending Quantum Perceptrons: Rydberg Devices, Multi-Class Classification, and Error Tolerance [67.77677387243135]
Quantum Neuromorphic Computing (QNC) merges quantum computation with neural computation to create scalable, noise-resilient algorithms for quantum machine learning (QML)
At the core of QNC is the quantum perceptron (QP), which leverages the analog dynamics of interacting qubits to enable universal quantum computation.
arXiv Detail & Related papers (2024-11-13T23:56:20Z)
- Quantum Deep Equilibrium Models [1.5853439776721878]
We present Quantum Deep Equilibrium Models (QDEQ), a training paradigm that learns parameters of a quantum machine learning model.
We find that QDEQ is not only competitive with comparable existing baseline models, but also achieves higher performance than a network with 5 times more layers.
This demonstrates that the QDEQ paradigm can be used to develop significantly more shallow quantum circuits for a given task.
arXiv Detail & Related papers (2024-10-31T13:54:37Z)
- QuantumSEA: In-Time Sparse Exploration for Noise Adaptive Quantum Circuits [82.50620782471485]
QuantumSEA is an in-time sparse exploration for noise-adaptive quantum circuits.
It aims to achieve two key objectives: (1) implicit circuit capacity during training and (2) noise robustness.
Our method establishes state-of-the-art results with only half the number of quantum gates and a 2x saving in circuit-execution time.
arXiv Detail & Related papers (2024-01-10T22:33:00Z)
- Improving Parameter Training for VQEs by Sequential Hamiltonian Assembly [4.646930308096446]
A central challenge in quantum machine learning is the design and training of parameterized quantum circuits (PQCs)
We propose Sequential Hamiltonian Assembly, which iteratively approximates the loss function using local components.
Our approach outperforms conventional parameter training by 29.99% and the empirical state of the art, Layerwise Learning, by 5.12% in the mean accuracy.
arXiv Detail & Related papers (2023-12-09T11:47:32Z)
- Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) in the hope of exploiting quantum advantage to speed up imitation learning (IL).
We develop two QIL algorithms, quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL).
Experimental results demonstrate that both Q-BC and Q-GAIL achieve performance comparable to their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z)
- Improving Convergence for Quantum Variational Classifiers using Weight Re-Mapping [60.086820254217336]
In recent years, quantum machine learning has seen a substantial increase in the use of variational quantum circuits (VQCs).
We introduce weight re-mapping for VQCs to unambiguously map the weights to an interval of length $2\pi$.
We demonstrate that weight re-mapping increased test accuracy on the Wine dataset by $10\%$ over using unmodified weights.
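The re-mapping idea can be sketched as wrapping each weight into a fixed interval of length 2π; the mapping below is one plausible choice and may differ from the paper's exact function:

```python
import numpy as np

def remap_weights(w):
    # Wrap each weight into [-pi, pi), an interval of length 2*pi, so that
    # every rotation angle has a unique, bounded representative.
    return (np.asarray(w, dtype=float) + np.pi) % (2 * np.pi) - np.pi
```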
arXiv Detail & Related papers (2022-12-22T13:23:19Z)
- Mode connectivity in the loss landscape of parameterized quantum circuits [1.7546369508217283]
Variational training of parameterized quantum circuits (PQCs) underpins many algorithms employed on near-term noisy intermediate-scale quantum (NISQ) devices.
We adapt the qualitative loss-landscape characterization for neural networks introduced by Goodfellow et al. and Li et al. (2017), and the connectivity tests used by Draxler et al. (2018), to study loss-landscape features in PQC training.
arXiv Detail & Related papers (2021-11-09T18:28:46Z)
- QuantumNAS: Noise-Adaptive Search for Robust Quantum Circuits [26.130594925642143]
Quantum noise is the key challenge in Noisy Intermediate-Scale Quantum (NISQ) computers.
We propose and experimentally implement QuantumNAS, the first comprehensive framework for noise-adaptive co-search of variational circuit and qubit mapping.
For QML tasks, QuantumNAS is the first to demonstrate over 95% 2-class, 85% 4-class, and 32% 10-class classification accuracy on real quantum computers.
arXiv Detail & Related papers (2021-07-22T17:58:13Z)
- Optimal training of variational quantum algorithms without barren plateaus [0.0]
Variational quantum algorithms (VQAs) promise efficient use of near-term quantum computers.
We show how to optimally train a VQA for learning quantum states.
We propose the application of Gaussian kernels for quantum machine learning.
arXiv Detail & Related papers (2021-04-29T17:54:59Z)
- A Statistical Framework for Low-bitwidth Training of Deep Neural Networks [70.77754244060384]
Fully quantized training (FQT) uses low-bitwidth hardware by quantizing the activations, weights, and gradients of a neural network model.
One major challenge with FQT is the lack of theoretical understanding, in particular of how gradient quantization impacts convergence properties.
arXiv Detail & Related papers (2020-10-27T13:57:33Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.