DeepQMLP: A Scalable Quantum-Classical Hybrid Deep Neural Network
Architecture for Classification
- URL: http://arxiv.org/abs/2202.01899v1
- Date: Wed, 2 Feb 2022 15:29:46 GMT
- Title: DeepQMLP: A Scalable Quantum-Classical Hybrid Deep Neural Network
Architecture for Classification
- Authors: Mahabubul Alam, Swaroop Ghosh
- Abstract summary: Quantum machine learning (QML) is promising for potential speedups and improvements in conventional machine learning (ML) tasks.
We present a scalable quantum-classical hybrid deep neural network (DeepQMLP) architecture inspired by classical deep neural network architectures.
DeepQMLP provides up to 25.3% lower loss and 7.92% higher accuracy during inference under noise than QMLP.
- Score: 6.891238879512672
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Quantum machine learning (QML) is promising for potential speedups and
improvements in conventional machine learning (ML) tasks (e.g.,
classification/regression). The search for ideal QML models is an active
research field. This includes identification of efficient classical-to-quantum
data encoding scheme, construction of parametric quantum circuits (PQC) with
optimal expressivity and entanglement capability, and efficient output decoding
scheme to minimize the required number of measurements, to name a few. However,
most of the empirical/numerical studies lack a clear path towards scalability.
Any potential benefit observed in a simulated environment may diminish in
practical applications due to the limitations of noisy quantum hardware (e.g.,
under decoherence, gate-errors, and crosstalk). We present a scalable
quantum-classical hybrid deep neural network (DeepQMLP) architecture inspired
by classical deep neural network architectures. In DeepQMLP, stacked shallow
Quantum Neural Network (QNN) models mimic the hidden layers of a classical
feed-forward multi-layer perceptron network. Each QNN layer produces a new and
potentially rich representation of the input data for the next layer. This new
representation can be tuned by the parameters of the circuit. Shallow QNN
models experience less decoherence, gate errors, etc. which make them (and the
network) more resilient to quantum noise. We present numerical studies on a
variety of classification problems to show the trainability of DeepQMLP. We
also show that DeepQMLP performs reasonably well on unseen data and exhibits
greater resilience to noise over QNN models that use a deep quantum circuit.
DeepQMLP provided up to 25.3% lower loss and 7.92% higher accuracy during
inference under noise than QMLP.
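The stacked-layer idea can be sketched with a toy statevector simulation in plain NumPy. This is a hypothetical illustration of the architecture described above, not the authors' implementation: each shallow QNN layer angle-encodes its classical inputs, applies one block of trainable RY rotations plus a CNOT chain, and passes per-qubit Pauli-Z expectation values on as inputs to the next layer, the way an MLP passes hidden activations.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n):
    """Apply a 1-qubit gate to `qubit` of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [qubit])), 0, qubit)
    return psi.reshape(-1)

def apply_cnot(state, ctrl, tgt, n):
    """Apply CNOT(ctrl -> tgt): flip the target axis where ctrl = 1."""
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[ctrl] = 1
    sub_axis = tgt if tgt < ctrl else tgt - 1  # axis index after slicing out ctrl
    psi[tuple(idx)] = np.flip(psi[tuple(idx)], axis=sub_axis)
    return psi.reshape(-1)

def shallow_qnn(x, params):
    """One shallow QNN layer: angle encoding + trainable RYs + CNOT chain."""
    n = len(x)
    state = np.zeros(2 ** n)
    state[0] = 1.0
    for q in range(n):                      # encode classical inputs as angles
        state = apply_1q(state, ry(x[q]), q, n)
    for q in range(n):                      # trainable rotations
        state = apply_1q(state, ry(params[q]), q, n)
    for q in range(n - 1):                  # linear entangling chain
        state = apply_cnot(state, q, q + 1, n)
    # Per-qubit <Z> expectations become the classical outputs of this layer
    probs = (np.abs(state) ** 2).reshape([2] * n)
    return np.array([2 * probs.take(0, axis=q).sum() - 1 for q in range(n)])

def deep_qmlp(x, layer_params):
    """Stack shallow QNNs the way an MLP stacks hidden layers."""
    h = np.asarray(x, dtype=float)
    for p in layer_params:
        h = shallow_qnn(h, p)
    return h

rng = np.random.default_rng(0)
features = rng.uniform(0, np.pi, 3)
params = [rng.uniform(0, 2 * np.pi, 3) for _ in range(2)]  # two stacked layers
print(deep_qmlp(features, params))  # three values in [-1, 1]
```

Because each layer is only a few gates deep, no single circuit in the stack accumulates much decoherence or gate error, which is the intuition behind DeepQMLP's noise resilience; the gate set, entangling topology, and layer width here are illustrative choices.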
Related papers
- Quantum-Trained Convolutional Neural Network for Deepfake Audio Detection [3.2927352068925444]
Deepfake technologies pose challenges to privacy, security, and information integrity.
This paper introduces a Quantum-Trained Convolutional Neural Network framework designed to enhance the detection of deepfake audio.
arXiv Detail & Related papers (2024-10-11T20:52:10Z)
- Quantum Imitation Learning [74.15588381240795]
We propose quantum imitation learning (QIL) with a hope to utilize quantum advantage to speed up IL.
We develop two QIL algorithms, quantum behavioural cloning (Q-BC) and quantum generative adversarial imitation learning (Q-GAIL)
Experimental results demonstrate that both Q-BC and Q-GAIL achieve performance comparable to their classical counterparts.
arXiv Detail & Related papers (2023-04-04T12:47:35Z)
- Variational Quantum Neural Networks (VQNNS) in Image Classification [0.0]
This paper investigates how training of quantum neural network (QNNs) can be done using quantum optimization algorithms.
In this paper, a QNN structure is built in which a variational parameterized circuit serves as the input layer, named a Variational Quantum Neural Network (VQNN).
VQNNs are evaluated on MNIST digit recognition (a less complex task) and a crack image classification dataset, converging in less time than a plain QNN while maintaining decent training accuracy.
arXiv Detail & Related papers (2023-03-10T11:24:32Z)
- Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification [83.20479832949069]
Quantum neural networks (QNNs) have become an important tool for understanding the physical world, but their advantages and limitations are not fully understood.
Here we investigate the problem-dependent power of quantum classifiers (QCs) on multi-class classification tasks.
Our work sheds light on the problem-dependent power of QNNs and offers a practical tool for evaluating their potential merit.
arXiv Detail & Related papers (2022-12-29T10:46:40Z)
- Knowledge Distillation in Quantum Neural Network using Approximate Synthesis [5.833272638548153]
We introduce the concept of knowledge distillation in Quantum Neural Network (QNN) using approximate synthesis.
We demonstrate a 71.4% reduction in circuit layers while still achieving 16.2% better accuracy under noise.
arXiv Detail & Related papers (2022-07-05T04:09:43Z)
- A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z)
- Branching Quantum Convolutional Neural Networks [0.0]
Small-scale quantum computers are already showing potential gains in learning tasks on large quantum and very large classical data sets.
We present a generalization of QCNN, the branching quantum convolutional neural network, or bQCNN, with substantially higher expressibility.
arXiv Detail & Related papers (2020-12-28T19:00:03Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs due to vanishing gradients, whose magnitude decays exponentially with the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- Decentralizing Feature Extraction with Quantum Convolutional Neural Network for Automatic Speech Recognition [101.69873988328808]
We build upon a quantum convolutional neural network (QCNN) composed of a quantum circuit encoder for feature extraction.
Input speech is first up-streamed to a quantum computing server to extract its Mel-spectrogram.
The corresponding convolutional features are encoded using a quantum circuit algorithm with random parameters.
The encoded features are then down-streamed to the local RNN model for the final recognition.
arXiv Detail & Related papers (2020-10-26T03:36:01Z)
- On the learnability of quantum neural networks [132.1981461292324]
We consider the learnability of the quantum neural network (QNN) built on the variational hybrid quantum-classical scheme.
We show that if a concept can be efficiently learned by a QNN, then it can still be effectively learned by the QNN even in the presence of gate noise.
arXiv Detail & Related papers (2020-07-24T06:34:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.