Knowledge Distillation in Quantum Neural Network using Approximate
Synthesis
- URL: http://arxiv.org/abs/2207.01801v1
- Date: Tue, 5 Jul 2022 04:09:43 GMT
- Title: Knowledge Distillation in Quantum Neural Network using Approximate
Synthesis
- Authors: Mahabubul Alam, Satwik Kundu, Swaroop Ghosh
- Abstract summary: We introduce the concept of knowledge distillation in Quantum Neural Network (QNN) using approximate synthesis.
We demonstrate a ~71.4% reduction in circuit layers while still achieving ~16.2% better accuracy under noise.
- Score: 5.833272638548153
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent assertions of a potential advantage of Quantum Neural Network (QNN)
for specific Machine Learning (ML) tasks have sparked the curiosity of a
sizable number of application researchers. The parameterized quantum circuit
(PQC), a major building block of a QNN, consists of several layers of
single-qubit rotations and multi-qubit entanglement operations. The optimum
number of PQC layers for a particular ML task is generally unknown. A larger
network often provides better performance in noiseless simulations. However, it
may perform poorly on hardware compared to a shallower network. Because the
amount of noise varies amongst quantum devices, the optimal depth of a PQC can
vary significantly. Additionally, the gates chosen for the PQC may be suitable
for one type of hardware but not for another due to compilation overhead. This
makes it difficult to generalize a QNN design to a wide range of hardware and
noise levels. An alternative approach is to build and train multiple QNN models,
each targeted at a specific hardware platform, but this can be expensive. To circumvent these issues,
we introduce the concept of knowledge distillation in QNN using approximate
synthesis. The proposed approach will create a new QNN network with (i) a
reduced number of layers or (ii) a different gate set without having to train
it from scratch. Training the new network for a few epochs can compensate for
the loss caused by approximation error. Through empirical analysis, we
demonstrate a ~71.4% reduction in circuit layers while still achieving ~16.2% better
accuracy under noise.
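As a rough illustration of the idea described in the abstract, the sketch below (not the authors' code; it assumes Qiskit and SciPy, and uses illustrative qubit and layer counts) fits the parameters of a shallow "student" PQC so that its unitary approximates that of a deeper, already-trained "teacher" PQC, i.e., the approximate-synthesis step. The compressed circuit would then be fine-tuned on the task for a few epochs.

```python
# A minimal sketch, assuming Qiskit and SciPy; circuit shapes and layer counts
# are illustrative, not the authors' exact configuration.
import numpy as np
from scipy.optimize import minimize
from qiskit.circuit.library import TwoLocal
from qiskit.quantum_info import Operator, process_fidelity

n_qubits = 4

# "Teacher": a deep, already-trained PQC (random angles stand in for trained ones).
teacher = TwoLocal(n_qubits, "ry", "cx", entanglement="linear", reps=7)
teacher = teacher.assign_parameters(
    np.random.uniform(0, 2 * np.pi, teacher.num_parameters)
)
target = Operator(teacher)

# "Student": a much shallower PQC over the same (or a different) gate set.
student = TwoLocal(n_qubits, "ry", "cx", entanglement="linear", reps=2)

def infidelity(params):
    # Approximate-synthesis objective: match the student's unitary to the teacher's.
    return 1.0 - process_fidelity(Operator(student.assign_parameters(params)), target)

result = minimize(
    infidelity,
    np.random.uniform(0, 2 * np.pi, student.num_parameters),
    method="COBYLA",
    options={"maxiter": 1000},
)
print("approximation infidelity:", result.fun)
# The fitted student parameters initialize the compressed QNN, which is then
# fine-tuned on the original ML task for a few epochs to recover accuracy.
```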
Related papers
- Variational Quantum Neural Networks (VQNNS) in Image Classification [0.0]
This paper investigates how quantum neural networks (QNNs) can be trained using quantum optimization algorithms.
A QNN structure is constructed in which a variational parameterized circuit is incorporated as an input layer, forming a Variational Quantum Neural Network (VQNN).
VQNNs are evaluated on MNIST digit recognition (less complex) and crack image classification datasets, converging in less time than a standard QNN while achieving decent training accuracy.
arXiv Detail & Related papers (2023-03-10T11:24:32Z) - Optimizing Tensor Network Contraction Using Reinforcement Learning [86.05566365115729]
We propose a Reinforcement Learning (RL) approach combined with Graph Neural Networks (GNN) to address the contraction ordering problem.
The problem is extremely challenging due to the huge search space, the heavy-tailed reward distribution, and the challenging credit assignment.
We show how a carefully implemented RL-agent that uses a GNN as the basic policy construct can address these challenges.
arXiv Detail & Related papers (2022-04-18T21:45:13Z) - DeepQMLP: A Scalable Quantum-Classical Hybrid Deep Neural Network
Architecture for Classification [6.891238879512672]
Quantum machine learning (QML) is promising for potential speedups and improvements in conventional machine learning (ML) tasks.
We present a scalable quantum-classical hybrid deep neural network (DeepQMLP) architecture inspired by classical deep neural network architectures.
DeepQMLP provides up to 25.3% lower loss and 7.92% higher accuracy during inference under noise than QMLP.
arXiv Detail & Related papers (2022-02-02T15:29:46Z) - Toward Trainability of Deep Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) with random structures have poor trainability due to the exponentially vanishing gradient as the circuit depth and the qubit number increase.
We provide the first viable solution to the vanishing gradient problem for deep QNNs with theoretical guarantees.
arXiv Detail & Related papers (2021-12-30T10:27:08Z) - Cluster-Promoting Quantization with Bit-Drop for Minimizing Network
Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z) - A White Paper on Neural Network Quantization [20.542729144379223]
We introduce state-of-the-art algorithms for mitigating the impact of quantization noise on the network's performance.
We consider two main classes of algorithms: Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT).
arXiv Detail & Related papers (2021-06-15T17:12:42Z) - Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z) - On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, it can also be effectively learned by a QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z) - AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z) - Trainability of Dissipative Perceptron-Based Quantum Neural Networks [0.8258451067861933]
We analyze the gradient scaling (and hence the trainability) of a recently proposed architecture that we call dissipative QNNs (DQNNs).
We find that DQNNs can exhibit barren plateaus, i.e., gradients that vanish exponentially in the number of qubits.
arXiv Detail & Related papers (2020-05-26T00:59:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.